Michael Ludden

cto bench - The ground truth code agent benchmark

Most AI benchmarks are built backwards. Someone sits down, dreams up hard problems, and then measures how well agents solve them. The results are interesting, sure. But they don't always tell you what matters: how agents perform on the actual work that's sitting in your queue. That's why we built cto.bench. Instead of hypothetical tasks, we're building our benchmark from real work. Every data point on cto bench comes directly from how cto.new users are actually using our platform.


Replies

Michael Ludden
Maker
I'm excited to share that cto bench is live. This is a benchmarking tool that tests the latest and greatest frontier models against real-world usage by cto.new users. Many benchmarking tools run LLMs through custom suites to test viability, but cto bench uses actual usage patterns and PR merge rates to verify how well models perform on actual tasks. We hope this adds valuable, practical data points to the LLM benchmarking space as it evolves.
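As a rough illustration of the kind of signal a merge-rate leaderboard aggregates (the function, record shape, and model names below are hypothetical examples, not cto bench's actual pipeline):

```python
from collections import defaultdict

def merge_rates(prs):
    """Compute per-model PR merge rate from (model, was_merged) records."""
    totals = defaultdict(int)   # PRs opened per model
    merged = defaultdict(int)   # PRs merged per model
    for model, was_merged in prs:
        totals[model] += 1
        if was_merged:
            merged[model] += 1
    return {m: merged[m] / totals[m] for m in totals}

# Hypothetical usage records: (model name, whether the PR was merged)
prs = [
    ("model-a", True), ("model-a", False),
    ("model-b", True), ("model-b", True),
]
print(merge_rates(prs))  # {'model-a': 0.5, 'model-b': 1.0}
```

The appeal of this metric is that a merged PR is a human reviewer's implicit judgment that the agent's work was good enough to ship, rather than a score on a synthetic task.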
Maklyen May

Finally, a benchmark that measures usefulness instead of academic cleverness. This feels much closer to how teams actually decide whether an agent is worth adopting.

Michael Ludden

@maklyen_may thanks! Interesting that OSS models are so high up the list for practical use, eh?

Anton Loss

Wow, this is amazing! All the best models for free! 🚀

How can this be sustainable for you?

Michael Ludden

@avloss great question! We're still working on that. What would you recommend?

Anton Loss

@michael_ludden

Some ideas:

  • Provide additional services for a fee, like domains, hosting, monitoring, promotion/ads, databases.

  • Charge for organisational use and/or for dedicated deployment.

  • Charge for additional features, like a human reviewing and solving a problem in case the LLM is stuck.

  • Use collected data to train proprietary models, then sell those.

Michael Ludden

@avloss love it! 🙏

ElevenApril

This is a really refreshing take on benchmarks 👀

Grounding it in real work instead of synthetic tasks feels way more honest; as a builder, that's the kind of signal I actually trust. Love the "built from usage" philosophy. Congrats on the launch! 🚀

Curious how you're thinking about bias over time: do you plan to balance workloads or surface context around where the data comes from?

Michael Ludden

@elevenaprilย can you expand on the question a bit more? Not sure what you're asking.

Mykyta Semenov 🇺🇦🇳🇱

Awesome! Very useful!