
Hot100.ai
The weekly AI project chart judged by AI
165 followers
The weekly chart for AI-powered projects — judged by Flambo. Discover standout apps built with Cursor, Bolt, v0, Replit, Lovable, Claude Code and more. Flambo scores every submission based on innovation and utility. New rankings every Monday.

Reflag
Congrats on the launch @darlingdash! Love the focus on scoring projects based on quality rather than the largest network of upvoters 💪
Hot100.ai
@makwarth Thanks — really appreciate that. And yeah, we’re launching on Product Hunt of all places, which isn’t lost on me 😄 PH is a great place to share new work, and it’s the OG. We want the chart to reflect quality and originality, not just who can rally the biggest crowd on launch day or how big their network is.
We know from experience the work that goes into GTM and all the things happening behind the scenes to help ideas and products get eyeballs. Vibe coding and all of these new tools have kicked the door down in terms of who can bring their ideas to life now. Time for something new.
With this approach, I wanted to try to level that field for the new generation.
That’s the idea.
RewriteBar
Great idea. How do you verify that a project was built with one of the AI builders?
Hot100.ai
@m91michel Hi Mathias, thanks for the question. We don’t do strict verification. When someone submits, they share their stack (Cursor, v0, Replit, etc.) along with the metadata in the submission form. What I've seen is real quality in the info builders share at this stage, which helps.
Flambo looks at the project itself and the submission context, so if the story, the stack, and the end result don’t line up, it just won’t score strongly. That's one part. It’s more about consistency and craft signals.
And on top of that, I personally review every project right now. So there is still a human eye making sure the vibe stays genuine and the work feels real. No gatekeeping, just making sure good craft gets seen.
Does that answer your question?
Playmaker
Congrats on the launch!
Can you give some more detail on how Flambo scores each project? "Innovation" and "Utility" can be interpreted quite broadly so I am curious to know what goes on behind the curtain!
Hot100.ai
@neilswmurray Flambo runs on gpt-4o-mini with a low temperature (0.25), so the scoring stays consistent.
Every project gets evaluated with the same structured prompt.
It looks at a few things:
- what the project does and how it’s described
- the problem it’s solving
- the tools used to build it
- whether there’s a live product to try
It doesn’t dig through repos or do anything heavy like that. It’s judging based on what’s submitted and how well the story and the end result line up. The scoring has two parts:
- Innovation — is this bringing something new or interesting?
- Utility — is it actually useful, clear in purpose, and understandable?
Both are scored on a 1.0–10.0 scale.
There are also some light adjustments. For example:
- small bonus if it’s live and easy to try
- small bonus if the project has been security-checked by the builder
- small penalty if it’s just a waitlist or extremely vague
Final score is simply the average of Innovation and Utility, rounded to one decimal place.
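If it helps to picture the maths, here’s a rough sketch of that final step in Python. The adjustment sizes and field names are just illustrative, not the actual Flambo prompt, weights, or code:

```python
# Illustrative sketch only: the real adjustment values live server-side and get
# tuned over time. This just shows the shape of the scoring described above.

def flambo_score(innovation: float, utility: float,
                 is_live: bool, security_checked: bool,
                 waitlist_only: bool) -> float:
    """Average Innovation and Utility (each 1.0-10.0), apply light
    adjustments, and round to one decimal place."""
    score = (innovation + utility) / 2

    if is_live:
        score += 0.2   # small bonus: live and easy to try (example value)
    if security_checked:
        score += 0.2   # small bonus: builder ran a security check (example value)
    if waitlist_only:
        score -= 0.3   # small penalty: just a waitlist or very vague (example value)

    # keep the result within the 1.0-10.0 scale
    return round(min(max(score, 1.0), 10.0), 1)

# e.g. flambo_score(7.0, 8.0, is_live=True, security_checked=False, waitlist_only=False) -> 7.7
```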
For the chart, Flambo’s score is the main signal. Human votes are there too, but they act more like momentum than the deciding factor. They can and will swing rankings that are tied, though. To be honest, I've tweaked the scoring model a few times during the beta, and I expect to keep doing that when appropriate.
And for now, I’m still reviewing every project myself. Just keeping an eye on quality and making sure the whole thing feels right as it grows.
Appreciate the question!
Playmaker
Love seeing a fresh take on ranking and discovery; we have so many directories that it’s easy to lose sight of what actually adds value.
I like the idea of scoring projects based on utility: it seems like a no-brainer, but at the same time hard to pull off?
Also, would it be fair to assume certain models or tools might get more upvotes because they have a bigger brand presence in the AI space? Would love to know more about what's behind the scoring system for sure.
Hot100.ai
@rvlt_tv Scoring on utility sounds obvious, but you’re right, it's tricky. The way we’ve handled it is to break it down into simple, consistent checks. Flambo isn’t trying to diagnose code deeply or judge “taste.”
It just looks at what the thing does, how clearly the problem is defined, and whether the execution matches the intent. In the future, you could imagine further work being done to 'prove the utility', but not yet.
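To make “simple, consistent checks” a bit more concrete, here’s roughly the shape of the rubric, paraphrased in Python rather than the literal prompt we run:

```python
# Paraphrased rubric: the real prompt wording differs, but the Utility checks
# are roughly along these lines, asked the same way for every submission.

UTILITY_CHECKS = [
    "What does the project actually do, in one sentence?",
    "How clearly is the problem it solves defined?",
    "Does the execution (live product, description, stack) match the stated intent?",
    "Could a first-time visitor understand its purpose without extra explanation?",
]

def build_utility_prompt(name: str, description: str, stack: str, url: str) -> str:
    """Assemble the submission details and rubric into one consistent prompt."""
    checks = "\n".join(f"- {c}" for c in UTILITY_CHECKS)
    return (
        f"Project: {name}\nStack: {stack}\nURL: {url}\n"
        f"Description: {description}\n\n"
        f"Score Utility from 1.0 to 10.0 after considering:\n{checks}"
    )
```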
On the brand/tool bias question — yes, big names absolutely create gravity. What I've seen so far is that the foundational models, the 'big 4', are leading the way in terms of uptake; more projects and builders mention OpenAI, for example. @Lovable is obviously super popular but doesn't actually feature much across our site. It's early days; what I'm seeing is that it's become a 'Stack Sport', and I think that's still playing out.
Thoughtful question, cheers, Helder.
Good stuff. Best of luck with the launch!
Thanks for sharing your learnings about the platform and your methods at AI Meetup Copenhagen last month ⚡
I don't have time to test hundreds of tools myself, so the idea of publishing usage data and metrics about which IDEs, models, methods and frameworks are gaining traction is what's most interesting to me.
Those kinds of real-world signals are valuable for anyone trying to understand where the market is going - and probably also for VCs and other types of investors.
Hot100.ai
@martin_schultz Totally! I think that data insight is valuable and interesting. Understanding how consumers build with these tools is, in a way, a whole new data set. We've had 400+ projects submitted to date and there are patterns emerging. I've got an …
and the plan is to send weekly newsletters with some of this telemetry. I did a little bit of analysis last week and pulled a couple of slides together, which I can share here. Thanks again for the invite to the Meetup - happy to come back and talk about how the launch goes when there's a spot : )
Looks great - super interesting to see how this (and the wider AI tooling space) will evolve - it's moving fast!
Also, as the judging is AI-powered, should Hot100 be on the Hot100 list too, or is that too meta? Curious how it ranks against its own criteria - you must have tested it!
Hot100.ai
@soopert Ah, Tom : ) There's a question. In total honesty, I actually haven't tested that, but I will endeavour to run that test for you. It has been interesting tweaking things along the way to arrive at a tone of voice (and scorecard) that feels helpful and constructive, but also has a point of view on what scores highly. Agreed, it's moving super fast, exciting times.