We Tanked our Product Hunt Launch on Purpose
There are a hundred posts about how to succeed on Product Hunt.
This one is about how to fail.
Our latest Product Hunt launch was a disaster. Just as we had hoped.
Here's what happened:
We built Yo—a Figma plugin that creates AI personas to give you instant feedback on your designs.
Before launching, we did something meta: we used Yo to test our own launch materials.
The AI personas (modeled on typical PH users) said nice things to start.
Then I asked: "If you saw this launch on Product Hunt, what's missing that would stop you from upvoting?"
Suddenly, brutal honesty:
"I need to see a demo video showcasing how this actually works in Figma"
"Show me actual user testimonials or concrete examples"
"The description is clear, but I need evidence of value—data or use cases"
"A quick demo of the AI interviews in action would seal the deal"
We had a choice: fix everything... or launch as-is and see if real PH users would echo the exact same things.
The Experiment:
We deliberately broke every PH rule:
❌ Launched 15 min before site opened
❌ Zero network outreach
❌ No video for first hour
❌ When we added video, opening frame was a static slide
Result: First 4 hours: 10 upvotes. No feature. Dead in the water.
Then we reached out to our network and posted a forum article.
Even stranger result: 30 upvotes on forum post, but only 6-8 translated into actual launch upvotes.
The Payoff:
Our AI personas were eerily accurate.
Real PH users responded exactly as predicted. The lack of engaging video hurt us. The unclear value prop confused people. The missing social proof made it feel risky.
We validated that our product actually works—by deliberately failing in the exact ways our AI personas predicted we would.
Your Turn:
Want to see what AI personas would tear apart in your launch?
→ Try it out on this Playground file I made for Figma
It's pre-loaded to generate PH user personas. Plug in your launch page and ask the hard questions:
"Why would you scroll past this?"
"What's confusing here?"
"Why wouldn't you upvote?"
Or drop your PH URL below and I'll run it through the personas for you.
Sometimes you learn more from a controlled burn than a wildfire success.


Replies
Cal ID
That's simply genius.
IXORD
That’s quite an interesting approach to demonstrate that your product works :)
Jo
@ixord A bit imperfect, but we thought it would be a good experiment to run. Since then, we're actually finding that there seems to be more valid feedback when asking AI to critique rather than tell you what's good/working.
That’s a bold way to test both the product and the audience. I like the honesty here.
Nice approach, thanks for sharing this with us! What's your next move now on PH?
Jo
@andreitudor14 Great Q! We're building on the learnings here for the product - while it's great to get AI feedback on a single screen, we think it'll be more useful if the same personas could A/B test options. We'll come back and launch the feature on PH in a few weeks once we've got it to a good place - might be an opportunity to also test how strong our personas are at picking winning versions.
Perhaps an interesting way to do this would be to swap out creative / copy in the middle of the day and track responses to both and see how they match to our persona-generated suggestions 🤔
MultiDrive
Thank you for sharing. Mistakes help us learn.
Atlas
that's clever!!!
@ragsontherocks This kind of thing happens. I probably made a few of the same mistakes myself launching my platform here today.
Experiments like this are valuable: they're recognised as high signal, low ego, and they teach you loads about engagement.
Progress always comes from running simulations, observing edge cases, and listening to the uncomfortable feedback. That’s how you build something that actually works.
Triforce Todos
Do you think this approach only works for PH-style launches, or could it predict reactions on LinkedIn/Twitter, too?
Jo
@abod_rehman Interesting - haven't actually tested this on X/LI. We take a persona-based approach, which gives more qualitative feedback from 3-5 personas. I'd suspect more survey-style synthetic populations would work better for feedback on X.
this is brilliant and honestly kinda hilarious 😅 launching just to see if your AI personas were right… love it.