Hot100.ai

The weekly AI project chart judged by AI

165 followers

The weekly chart for AI-powered projects — judged by Flambo. Discover standout apps built with Cursor, Bolt, v0, Replit, Lovable, Claude Code and more. Flambo scores every submission based on innovation and utility. New rankings every Monday.
Free

Tim Allison
Hey Product Hunt. I’m Tim, the builder behind Hot100.ai. I’ve been back building again these past months, and this project has been a genuine labor of love and learning.

Like many of you, I’ve been immersed in the wave of AI-building through tools like Cursor, v0, Bolt, Replit, Lovable and Claude Code. There is a new working style forming. A new vibe around making things. Hot100 came from wanting to surface and celebrate that. To see what is actually being made right now, and to learn from it.

When I launched Plane here 9 years ago, Product Hunt was one of the few meaningful places to share a product. Today there are countless directories. Many follow the same model: vote counts, SEO loops, domain rating games. It has become harder to know where to find what is genuinely good. Popularity does not equal quality. Nothing felt like the definitive chart for AI-built projects.

So I built Hot100 to explore what a merit-based ranking could look like. Hot100 uses AI to score AI projects. I built an AI judge, named Flambo, that evaluates submissions based on innovation, usefulness, and execution. Human votes still matter, but Flambo’s score is the anchor. The idea is simple: good work should be able to surface, even without an audience.

Over 400 projects have already come through the beta this summer, and it has been interesting to see which tools and workflows are emerging as the new default stack. We are starting to publish some of that telemetry: which IDEs are rising, which models are becoming common, and which frameworks are quietly winning. If you enjoy that kind of signal, there is a weekly email.

I spent close to 9 years at Zendesk working on design at scale. This project has been different. Just me in Replit, hands on with every design detail.

If you are building, I'd love to see what you are working on. Submit a project, see where it lands, maybe hit the chart next Monday. And if you are just curious, explore the current Hot100. There is a lot of great work happening.

I will be here all day for questions and conversation. Thanks for being here. – Tim
Rasmus Makwarth

Congrats on the launch @darlingdash! Love the focus on scoring projects based on quality rather than the largest network of upvoters 💪

Tim Allison

@makwarth Thanks — really appreciate that. And yeah, we’re launching on Product Hunt of all places, which isn’t lost on me 😄 PH is a great place to share new work, and it's the OG. We want the chart to reflect quality and originality, not just who can rally the biggest crowd on launch day or who has the biggest network.

We know from experience the work that goes into GTM and all the things happening behind the scenes to help ideas and products get eyeballs. Vibe coding and all of these new tools have kicked the door down in terms of who can make their ideas come to life now. Time for something new.

With this approach, I wanted to try to level that playing field for the new generation.

That’s the idea.

Mathias Michel

Great idea. How do you verify whether a project was built with one of the AI builders?

Tim Allison

@m91michel Hi Mathias, thanks for the question. We don’t do strict verification. When someone submits, they share their stack (Cursor, v0, Replit etc.) along with the metadata in the submission form. What I've seen is that builders share genuinely good info at this stage, which helps.

Flambo looks at the project itself and the submission context, so if the story, the stack, and the end result don’t line up, it just won’t score strongly. That's one part. It’s more about consistency and craft signals.

And on top of that, I personally review every project right now. So there is still a human eye making sure the vibe stays genuine and the work feels real. No gatekeeping, just making sure good craft gets seen.

Does that answer your question?

Neil S W Murray

Congrats on the launch!

Can you give some more detail on how Flambo scores each project? "Innovation" and "Utility" can be interpreted quite broadly so I am curious to know what goes on behind the curtain!

Tim Allison

@neilswmurray Flambo runs on gpt-4o-mini with a low temperature (0.25), so the scoring stays consistent.

Every project gets evaluated with the same structured prompt.

It looks at a few things:

  • what the project does and how it’s described

  • the problem it’s solving

  • the tools used to build it

  • and whether there’s a live product to try

It doesn’t dig through repos or do anything heavy like that. It’s judging based on what’s submitted and how well the story and the end result line up. The scoring is two parts:

  • Innovation — is this bringing something new or interesting?

  • Utility — is it actually useful, clear in purpose, and understandable?

Both are scored on a 1.0–10.0 scale.

There are also some light adjustments. For example:

  • small bonus if it’s live and easy to try

  • small bonus if the project has been security checked by the builder

  • small penalty if it’s just a waitlist or extremely vague

Final score is simply the average of Innovation and Utility, rounded to one decimal place.
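
To make that concrete, here's a simplified sketch of the flow in Python. It's illustrative rather than the actual implementation; the prompt wording, field names, and adjustment sizes are placeholders:

```python
# Simplified sketch of the Flambo scoring flow (illustrative only;
# prompt wording, field names and adjustment sizes are placeholders).
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are Flambo, the judge for Hot100.ai.
Score the submitted project on two dimensions, each from 1.0 to 10.0:
- innovation: is it bringing something new or interesting?
- utility: is it actually useful, clear in purpose, and understandable?
Respond as JSON: {"innovation": <number>, "utility": <number>}"""

def judge_project(submission: dict) -> float:
    """Score a submission dict (description, problem, stack, live_url, ...)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.25,  # low temperature keeps scoring consistent
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": json.dumps(submission)},
        ],
    )
    scores = json.loads(response.choices[0].message.content)

    # Final score is the average of the two dimensions...
    score = (float(scores["innovation"]) + float(scores["utility"])) / 2

    # ...plus some light adjustments (example values only)
    if submission.get("live_url"):
        score += 0.2  # small bonus: live and easy to try
    if submission.get("security_checked"):
        score += 0.1  # small bonus: builder ran a security check
    if submission.get("waitlist_only"):
        score -= 0.3  # small penalty: just a waitlist / very vague

    return round(min(max(score, 1.0), 10.0), 1)  # clamp, round to one decimal
```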

For the chart, Flambo’s score is the main signal. Human votes are there too, but they act more like momentum than the deciding factor; they can and will swing projects that are tied on score. To be honest, I’ve tweaked the scoring model a few times during the beta, and I expect to keep doing that when appropriate.
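
So a simplified view of the chart ordering would be something like this (again, illustrative, not the exact ranking code):

```python
# Chart ordering sketch: Flambo score first, votes only break ties.
projects = [
    {"name": "A", "flambo": 8.4, "votes": 12},
    {"name": "B", "flambo": 8.4, "votes": 57},
    {"name": "C", "flambo": 9.1, "votes": 3},
]
chart = sorted(projects, key=lambda p: (p["flambo"], p["votes"]), reverse=True)
# -> C (9.1), then B ahead of A because the 8.4s are tied and B has more votes
```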

And for now, I’m still reviewing every project myself. Just keeping an eye on quality and making sure the whole thing feels right as it grows.

Appreciate the question!

Helder Almeida

Love seeing a fresh take on ranking and discovery; we have so many directories that it’s easy to lose sight of what actually adds value.

I like the idea of scoring projects based on utility. It seems like a no-brainer, but at the same time hard to pull off?

Also, would it be fair to assume certain models or tools might get more upvotes because they have a bigger brand presence in the AI space? Would love to know more about what's behind the scoring system for sure.

Tim Allison

@rvlt_tv Scoring on utility sounds obvious, but you’re right, it's tricky. The way we’ve handled it is to break it down into simple, consistent checks. Flambo isn’t trying to diagnose code deeply or judge “taste.”
It just looks at what the thing does, how clearly the problem is defined, and whether the execution matches the intent. In the future, you could imagine further work being done to 'prove the utility', but not yet.

On the brand/tool bias question: yes, big names absolutely create gravity. What I've seen so far is that the foundational models, the 'big 4', are leading the way in terms of uptake; more projects and builders mention OpenAI, for example. @Lovable is obviously super popular, but it doesn't actually feature a lot across our site. It's early days. What I'm seeing is that it's become a 'Stack Sport', and I think that's still playing out.

Thoughtful question, cheers, Helder.

Martin Schultz
💡 Bright idea

Good stuff. Best of luck with the launch!

Thanks for sharing your learnings about the platform and your methods at AI Meetup Copenhagen last month ⚡

I don't have time to test hundreds of tools myself, so the idea of publishing usage data and metrics about which IDEs, models, methods and frameworks are gaining traction is what's most interesting to me.

Those kinds of real-world signals are valuable for anyone trying to understand where the market is going - and probably also VCs and other types of investors.

Tim Allison

@martin_schultz totally! I think that data insight is valuable and interesting. Understanding how consumers build with these tools is a somewhat new data set. We've had 400+ projects submitted to date and there are patterns emerging. I've got an email list going, and the plan is to send weekly newsletters with some of this telemetry. I did a little bit of analysis last week and pulled a couple of slides together, which I can share here. Thanks again for the invite to the Meetup - happy to come back and talk about how the launch goes etc. when there is a spot : )

Tom Stenson

Looks great - super interesting to see how this (and the AI tooling space) will evolve - it's moving fast!

Also, as the judging is AI-powered, should Hot100 be on the Hot100 list too - or is that too meta? Curious how it ranks against its own criteria - you must have tested it!

Tim Allison

@soopert Ah, Tom : ) There's a question. I actually have not tested that, in total honesty. But I will endeavour to run that test for you. It has been interesting tweaking things along the way to arrive at a tone of voice (and scorecard) that feels helpful, constructive, but also has a pov on what scores highly. Agreed, moving super fast, exciting times.
