Everyone said "GEO" was a fad. We spent a year building for it anyway.
A year ago, half the marketing world told us "AI search" was overhyped. The other half was shipping "ChatGPT SEO checklists" in a week.
We ignored both.
Instead we did one boring thing: we scraped LLM citations. Every day. Across ChatGPT, Perplexity, Gemini, Claude. For hundreds of brands. And we asked one question: when AI recommends a product, where does that recommendation actually come from?
Here's what we found that nobody was talking about:
1 month since launch on ProductHunt
One month ago we launched Contral on Product Hunt and hit #1 Product of the Week. Here's what happened since.
500+ developers downloaded the beta. Honestly, we didn't expect that number this early. The feedback has been wild: some things we expected people to love (the teaching layer), and some we didn't expect at all. Defense Mode became the most talked-about feature by far; people genuinely love being quizzed on their own code, which surprised us.
We started conversations with a few universities about running Contral as a pilot in their CS programs. The idea of students learning to code inside an actual IDE instead of switching between a tutorial and an editor resonated hard with the professors we spoke to. Nothing signed yet but the conversations are real and moving.
Bug reports have been humbling. Our early users don't hold back, and that's exactly what we needed. We've shipped fixes almost daily since launch based on real user feedback. The product today is genuinely better than what we launched a month ago.
One week after launch: thank you Product Hunt + what Ovren learned
Hey Product Hunt community
It's been a week since we launched Ovren, and I just want to say a genuine thank you.
We built Ovren because every team has backlog work that never makes it into a sprint.
Not more ideas. Not more AI suggestions.
Real engineering work that needs to get shipped.
So we launched Ovren as an AI engineering execution product for real backlog tasks:
AI frontend and backend engineers that work inside your real codebase, execute scoped work, and return reviewable code updates.
Holy shit... I just automated something I thought was impossible with AI: product tutorial videos
The problem at MindPal was pretty simple: we have hundreds of AI templates to share. We know videos of these templates work - some have gotten us tens of thousands of views. But actually making them was a total nightmare.
We tried everything. At one point, we even hired a freelancer, but the feedback loop was exhausting. It actually took longer to give feedback and wait for revisions than it did to just make the video ourselves. It was slow, expensive, and impossible to scale.
When we did it ourselves, it was a massive grind:
Record the screen of the behind-the-scenes agent builder
Record a demo of the agent working
Write a script that didn't sound like a robot
Record a voiceover or an avatar
Spend hours editing everything together
If my co-founder or I were tired or busy, the videos just didn't happen. I assumed this was just the "manual tax" you had to pay for quality.
Last weekend, I got fed up and asked Claude if I could just automate the whole damn thing.
Turns out, I can.
So I spent the weekend cooking something: an internal AI SOP that turns any workflow URL (yes, just a single URL) into a publish-ready use case video that passes all quality standards in ONE GO.
Here is the new setup:
Playwright: Records the screen and even moves the mouse like a human
@Claude by Anthropic: Writes the narrative based on our actual product info
@HeyGen: Creates the avatar and voiceover
@Remotion: Programs the entire edit - syncing everything into a final file
@Zernio + @Railway: Automatically publishes the video and saves the assets.
Now, I just give the system a URL and a finished video comes out. I don't even have to click "upload."
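For anyone curious how a pipeline like this hangs together, here's a rough Python sketch of the orchestration layer. This is my own hypothetical illustration, not MindPal's actual code: each step is a stub standing in for the real tool (Playwright, Claude, HeyGen, Remotion), and only the URL-in, video-out data flow reflects the setup described above.

```python
# Hypothetical sketch of a URL-to-video pipeline's orchestration.
# Each step function is a stub standing in for a real tool; the point
# is the data flow between stages, not the tool integrations.

from dataclasses import dataclass


@dataclass
class VideoJob:
    workflow_url: str
    screen_recording: str = ""  # produced by the Playwright step
    script: str = ""            # written by the Claude step
    narration: str = ""         # avatar/voiceover from the HeyGen step
    final_video: str = ""       # rendered by the Remotion step


def record_screen(job: VideoJob) -> VideoJob:
    # Stand-in for Playwright driving the browser and recording it.
    job.screen_recording = f"recordings/{abs(hash(job.workflow_url))}.webm"
    return job


def write_script(job: VideoJob) -> VideoJob:
    # Stand-in for a Claude call drafting narration from product info.
    job.script = f"Narration for {job.workflow_url}"
    return job


def generate_narration(job: VideoJob) -> VideoJob:
    # Stand-in for HeyGen turning the script into avatar + voiceover.
    job.narration = "narration.mp3"
    return job


def render_video(job: VideoJob) -> VideoJob:
    # Stand-in for a Remotion render syncing recording and narration.
    job.final_video = "final.mp4"
    return job


def run_pipeline(url: str) -> VideoJob:
    # One call in, one finished job out; no manual steps in between.
    job = VideoJob(workflow_url=url)
    for step in (record_screen, write_script, generate_narration, render_video):
        job = step(job)
    return job
```

The design choice that matters here is that every stage reads and writes the same job object, so swapping one tool for another only touches a single step function.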
I just wrote a post sharing the full behind-the-scenes build, the architecture, and the logic behind this AI video agent. Check it out here if you think this could be helpful for your company: https://mindpal.space/article/ai...
P/s: This is what I wake up to every day now
Will solo startups dominate the business landscape in the future?
Today, this graphic caught my attention:
It featured individuals who built significant profits while running their businesses solo, without employees. Until now, I've seen these as exceptions rather than the norm.
At what point does giving AI more access start making it worse?
I've been testing this with an AI agent we use for outbound workflows.
The agent's job is simple: take a lead, generate a personalized outreach email, and send it.
Before:
The agent only had access to the lead s basic details (name, company, role) and a prompt to write the email.
Output was consistent, clean, and predictable (though personalization was limited).
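To make the "before" setup concrete, here's a minimal sketch (my own illustration, not our production code) of how constrained the agent's context was: it could only see a whitelist of three lead fields, and everything else was invisible to the model.

```python
# Hypothetical sketch of the minimal "before" agent context:
# only name, company, and role reach the prompt, nothing else.

LEAD_FIELDS = ("name", "company", "role")


def build_prompt(lead: dict) -> str:
    # Keep only the whitelisted fields; any extra data on the lead
    # record is ignored, which is what kept output predictable.
    visible = {k: lead[k] for k in LEAD_FIELDS}
    return (
        "Write a short, professional outreach email to "
        f"{visible['name']}, {visible['role']} at {visible['company']}."
    )


lead = {
    "name": "Ada",
    "company": "Acme",
    "role": "CTO",
    "last_funding_round": "Series B",  # present on the record, never seen
}
print(build_prompt(lead))
```

The narrow input is the whole point: with so little context, the model can't wander, but it also can't personalize beyond name, role, and company.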
What we changed:
We gave it more access:
Vote selling on Product Hunt
Every day after launching, makers are contacted on LinkedIn and X by people offering to sell votes. As the Product Hunt team, we are very much aware of this and really hate it. We have systems in place to neutralize this type of gaming. Every vote counts for a different number of points on Product Hunt. A couple of examples:
An account with a recently created Gmail address and no history of quality contributions on Product Hunt: this vote will count for 0 points. Yes, this might be a well-intentioned user, but we take a conservative approach to protect the community. If the account has a company email or applies for verification on Product Hunt, that's a different story.
An account with a company email address linked to a legitimate LinkedIn account with a history of meaningful contributions on Product Hunt: this vote carries significant weight.
A couple questions for the community:
Are there specific accounts on Product Hunt that you suspect participate in vote selling? You can reply here or email report@producthunt.co
What would you want to see us do differently here?
We let Claude write 100% of our code for 7 days. Here's what broke first.
Last week we did something stupid.
We paused all human coding. Gave Claude (Anthropic) access to our GitHub repo. Told it to build new features, fix bugs, and ship.
No human review. No guardrails. Just Claude and our codebase.
For 7 days, it ran the engineering team.