Last week Garry Tan (CEO of Y Combinator) shared his entire Claude Code setup on GitHub and called it "god mode."
He's sleeping 4 hours a night. Running 10 AI workers across 3 projects simultaneously. And openly saying he rebuilt a startup that once took $10M and 10 people. Alone, with agents.
I recently saw a marketer with 10k+ followers launch on Product Hunt and finish 6th with 348 upvotes. They followed a proper pre-launch and post-launch plan, did everything right, and the outcome still felt unpredictable.
Now I'm launching @Curatora next week.
I'm not a marketer. I have a little over 1k followers. Of course, asking for support helps. But I also keep hearing that a large part of the Product Hunt community shows up mainly for their own launch, then goes quiet until the next one.
That makes me wonder: how much of success here is strategy, and how much is timing and network effect?
AI still makes mistakes when coding. But for simple fixes or features, do you bother switching branches and testing locally before creating a PR or pushing to production? Or do you just ask Claude for a fix, review quickly, then push? I saw an interview with Peter Steinberger (creator of Openclaw) where he mentions he always pushes to main and almost entirely vibe codes. If you look at his contributions, you can see how fast he ships. Do devs need to be more trusting?
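For anyone weighing the two workflows, here is a minimal sketch of the "safer" option being debated: the AI-suggested change lives on its own branch until a local check passes, and main stays untouched until review. This runs in a throwaway repo purely for illustration; the file name, branch name, and `grep` check are stand-ins I made up, not anyone's actual setup.

```shell
#!/bin/sh
# Sketch: branch-first workflow for an AI-suggested fix, in a throwaway repo.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email demo@example.com && git config user.name demo
echo "v1" > app.txt && git add . && git commit -qm "initial"

git switch -q -c fix/ai-suggested   # isolate the AI-generated change
echo "v2" > app.txt                 # ...apply Claude's suggested fix here...
grep -q v2 app.txt                  # stand-in for running your test suite
git commit -qam "fix: ai-suggested patch"

git switch -q main                  # main still holds v1 until the PR merges
```

In the real version, the last step would be pushing the branch and opening a PR instead of switching back; the point is only that a bad fix never reaches main unreviewed.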
Lums hit #11 organically on Product Hunt yesterday! A massive thank you to everyone who supported us by upvoting, commenting, and downloading the app.
The launch conversations highlighted one thing we've felt since day one: most budgeting apps ask for way too much effort before giving you any value.
Usually, you download an app and spend the next hour fixing categories, adjusting settings, and correcting transactions. By the time you're "set up," you've already lost the motivation that made you download it in the first place.
@yulia_kuznetsova3 put it perfectly! She said she added her accounts to Lums and it just showed her where her money was going. No fixing things first. Just clarity. @selina4 shared something similar. After months of bills piling up and small charges slipping by, having everything side-by-side finally made things click.
This debate often gets framed as "Should researchers use AI for literature reviews?"
I think the real question is different.
Is it ethical to spend hundreds of researcher hours on mechanical work when that time could be spent advancing actual knowledge?
Think about a researcher spending an entire weekend searching papers, skimming irrelevant abstracts, copying citations, and fixing references. That's not insight or discovery. That's overhead.
What's worked for us looks very different from spray-and-pray.
We've learned that outbound works when it's intentional at every step.
A few things that made the biggest difference for us:
- Getting the ICP really right. Sometimes the first outreach isn't to the buyer, but to someone who can open the door.
- Personalization isn't optional. Company context, role, recent updates. Generic gets ignored fast.
- Channels are chosen by output, not comfort. We double down on what actually converts.
- The first message rarely works. Conversations usually start around the third or fourth touch, if there's value each time.
- Timing matters more than volume. Funding news, hiring, social posts. Showing up when the problem is top of mind changes everything.
- We focus on relationships, not just pipeline. Some buy later. Some refer. All conversations compound.
- Context before calls helps. If someone engages multiple times, the conversation feels very different.
- Signals matter. Engagement often tells you when to reach out, not just who.