Every day, makers who have just launched are contacted on LinkedIn and X by people offering to sell votes. The Product Hunt team is very much aware of this, and we really hate it. We have systems in place to neutralize this type of gaming: every vote counts for a different number of points on Product Hunt. A couple of examples:
An account with a recently created Gmail address and no history of quality contributions on Product Hunt: this vote counts for 0 points. Yes, this might be a well-intentioned user, but we take a conservative approach to protect the community. If the account has a company email or applies for verification on Product Hunt, that's a different story.
An account with a company email address linked to a legitimate LinkedIn account with a history of meaningful contributions on Product Hunt: this vote carries significant weight.
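To make the idea concrete, here is a minimal, purely illustrative sketch of what a vote-weighting heuristic along these lines could look like. Product Hunt has not published its actual algorithm; every signal, weight, and threshold below is an assumption for illustration only.

```python
from dataclasses import dataclass

# Purely illustrative: Product Hunt has not published its real algorithm.
# The signals, weights, and thresholds below are assumptions.

FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}

@dataclass
class Account:
    email_domain: str           # e.g. "gmail.com" or "acme.com"
    verified: bool              # applied for and passed verification
    linked_profile: bool        # linked to a legitimate LinkedIn account
    quality_contributions: int  # count of meaningful past contributions

def vote_weight(account: Account) -> float:
    """Return the number of points this account's upvote contributes."""
    # Fresh free-mail account with no history and no verification:
    # conservatively count the vote for 0 points.
    if (account.email_domain in FREE_MAIL_DOMAINS
            and account.quality_contributions == 0
            and not account.verified):
        return 0.0

    weight = 1.0
    if account.email_domain not in FREE_MAIL_DOMAINS:
        weight += 0.5  # company email adds trust
    if account.linked_profile:
        weight += 0.5  # legitimate linked profile adds trust
    # A history of meaningful contributions adds up to 1.0 extra point.
    weight += min(account.quality_contributions, 10) * 0.1
    return weight

print(vote_weight(Account("gmail.com", False, False, 0)))  # 0.0
print(vote_weight(Account("acme.com", True, True, 12)))    # 3.0
```

The point is not the specific numbers but the shape: cheap, unverified signals contribute nothing, while verifiable identity and a track record of contributions compound.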
A couple of questions for the community:
Are there specific accounts on Product Hunt that you suspect of participating in vote selling? You can reply here or email report@producthunt.co.
What would you want to see us do differently here?
The news dropped yesterday: OpenAI is shutting down Sora, their AI video app, six months after launch. The Disney $1B deal is off, and the API is going away, too.
The arc is fascinating if you zoom out. The app launched in September 2025, hit the top of the App Store within a day, and reached 1M downloads faster than ChatGPT did. By January, downloads had dropped 45%, and the whole thing had made roughly $2.1M in in-app purchases over its lifetime.
Last week Garry Tan (CEO of Y Combinator) shared his entire Claude Code setup on GitHub and called it "god mode."
He's sleeping 4 hours a night. Running 10 AI workers across 3 projects simultaneously. And openly saying he rebuilt a startup that once took $10M and 10 people. Alone, with agents.
The year is almost done, and I've started to wonder what could be stopping companies from adopting AI for their customer support. From my experience, I can tell you: fear, specifically fear of losing control. Support leaders worry that AI will say the wrong thing, sound off-brand, frustrate customers, or create more cleanup work for the team. Until they see that an AI agent can learn from their own knowledge base, follow rules, escalate when needed, and stay accurate, they hesitate. Once they realize it can actually reduce workload without breaking trust, adoption becomes much easier. What do you think? Do you agree with me?
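For anyone who likes to see the "follow rules, escalate when needed" idea spelled out, here is a tiny hypothetical sketch. The knowledge base, topics, and routing rules are mine for illustration, not any particular vendor's API: the agent replies only when a message clearly matches the company's own knowledge base, and hands everything else, plus any sensitive topic, to a human.

```python
# Hypothetical sketch of guardrails around an AI support agent.
# The knowledge base, topics, and routing rules are illustrative only.

KNOWLEDGE_BASE = {
    "reset password": "Go to Settings > Security and choose 'Reset password'.",
    "change plan": "You can switch plans any time from the Billing page.",
}
ESCALATION_TOPICS = {"refund", "legal", "cancel my account"}

def handle_ticket(message: str) -> dict:
    text = message.lower()

    # Hard rule: sensitive topics always go to a human.
    if any(topic in text for topic in ESCALATION_TOPICS):
        return {"action": "escalate", "reason": "policy_topic"}

    # Reply only when the question clearly matches the knowledge base.
    for question, answer in KNOWLEDGE_BASE.items():
        if question in text:
            return {"action": "reply", "text": answer}

    # Otherwise stay conservative: hand off rather than guess.
    return {"action": "escalate", "reason": "no_confident_answer"}

print(handle_ticket("How do I reset password?"))  # replies from the KB
print(handle_ticket("I want a refund"))           # escalates to a human
```

The design choice that usually wins trust is the last branch: when in doubt, the agent escalates instead of improvising.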