Launching today
OpenMark

Benchmark AI models for YOUR use case

Test ~100 AI models against YOUR specific prompts. Get deterministic scores, real API costs, and stability metrics. I built this after discovering that the "best" model for my RAG pipeline wasn't the one I had planned to use: a non-flagship model performed better AND cost 10x less. No LLM-as-judge. No voting. Just reproducible results for your actual use case.

• 18 scoring modes
• Real cost/efficiency calculations from API pricing
• Vision & document support
• Beginner-friendly yet capable of deep, complex use
• Free tier available

Marc Kean Paker

Hey all, thanks for checking this out.

About 8 months ago I was building a RAG pipeline and needed to choose an LLM for a specific use case (semantic similarity).
When I tested models against that task, a non-flagship model turned out to be faster, more accurate for the job, and much cheaper than the model I originally planned to use. I was about to spend ~10× more on API costs for worse results.

That’s what led to OpenMark.ai.

The idea is simple: stop trusting generic benchmarks. Benchmark models using your exact task, prompts, and constraints, and get deterministic, reproducible results.
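
To show what I mean by deterministic, here's a rough Python sketch (my own illustration, not OpenMark's actual code; the function names are hypothetical): each scoring mode is a plain function of the model output and the expected answer, so re-scoring the same outputs always gives the same number.

```python
# Hypothetical sketch of deterministic scoring: each mode is a pure
# function of (model output, expected answer), so identical inputs
# always give identical scores -- no judge model, no voting.
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting noise doesn't matter."""
    return re.sub(r"\s+", " ", text.strip().lower())

def exact_match(output: str, expected: str) -> float:
    """One possible mode: strict equality after normalization."""
    return 1.0 if normalize(output) == normalize(expected) else 0.0

def keyword_coverage(output: str, required_keywords: list[str]) -> float:
    """Another possible mode: fraction of required keywords present in the output."""
    text = normalize(output)
    hits = sum(1 for kw in required_keywords if normalize(kw) in text)
    return hits / len(required_keywords) if required_keywords else 0.0

# Same inputs -> same score, every run.
print(exact_match("Paris", "  paris "))                       # 1.0
print(keyword_coverage("Paris is the capital of France.",
                       ["Paris", "France"]))                  # 1.0
```

Nothing here depends on another model's judgment, which is what makes the results reproducible.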

What I focused on:

* Deterministic scoring (no LLM-as-judge, no voting)
* Real API cost & efficiency metrics
* Stability scores (a rough sketch of the cost and stability math follows after this list)
* ~100 models in one place
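
To make the cost, efficiency, and stability bullets concrete, here's a rough sketch of the kind of arithmetic involved (illustrative prices, names, and formulas only, not OpenMark's exact definitions): cost falls straight out of token counts and per-million-token pricing, efficiency can be read as quality per dollar, and stability as how consistent the scores stay across repeated runs.

```python
# Illustrative only: per-call cost from per-million-token prices, plus
# simple efficiency and stability metrics. Prices and formulas are
# placeholders, not OpenMark's actual numbers.
from statistics import mean, pstdev

def run_cost(prompt_tokens: int, completion_tokens: int,
             input_price_per_m: float, output_price_per_m: float) -> float:
    """API cost in dollars for one call, given per-million-token prices."""
    return (prompt_tokens * input_price_per_m
            + completion_tokens * output_price_per_m) / 1_000_000

def efficiency(score: float, cost: float) -> float:
    """One way to express efficiency: quality per dollar spent."""
    return score / cost if cost else 0.0

def stability(scores: list[float]) -> float:
    """1.0 when repeated runs score identically; lower as scores spread out."""
    if not scores or mean(scores) == 0:
        return 0.0
    return max(0.0, 1.0 - pstdev(scores) / mean(scores))

# Example with made-up pricing: $0.50 in / $1.50 out per million tokens.
cost = run_cost(prompt_tokens=1200, completion_tokens=300,
                input_price_per_m=0.50, output_price_per_m=1.50)
print(f"${cost:.6f} per call")                # $0.001050 per call
print(round(stability([0.9, 0.85, 0.9]), 2))  # 0.97
```

Run the same benchmark a few times and the stability number tells you how much a model's scores drift between runs.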

I’m launching this solo, so I’d genuinely love feedback. What would make this useful for you?