If you're still sitting on your launch, this is the push.
YC made a special exception for this community: one or more companies that launch tomorrow will get a YC interview and potentially funding. A YC partner will review every eligible launch.
Hey PH builders! I'm Murtuza Ali, currently in my final year of engineering. I really enjoy building products, especially ones that can create real impact.
I've been building since my second year of college and I've tried a lot of ideas: AI tools, developer products, and different kinds of systems. Most of them never got a proper launch because I was always experimenting, learning, and improving, but not really shipping publicly. I've been following the AI wave since the early boom days and I use AI tools almost every day. I really enjoy AI-assisted coding; it feels powerful and changes the way you think while building.
But while building with AI, I kept running into the same issue again and again: hallucinations. Not big dramatic failures, but small inconsistencies that slowly erode product quality and trust.
You start building something and it works at first, you feel excited, then small issues show up. The output becomes slightly unreliable, the system behaves differently than expected, and slowly you lose momentum at the idea stage itself. I think a lot of AI coding tools feel like this right now.
I keep seeing the same pattern across early-stage teams:
the MVP works until it really doesn't.
For many founders, the hardest part isn't getting something online, it's everything that comes after:
- infra that cracks under real users
- code that no dev wants to touch
- rewriting the whole stack
- AI-built projects no one can maintain
- the moment you realize your prototype isn't a product
I keep running into the same pattern with AI coding tools: I type a quick starter prompt, get something that looks promising for a moment, and then, inevitably, it collapses into messy code and outputs I never wanted in the first place.
I've seen this happen to others too. The tool isn't the problem. The problem is the prompt. Or rather, the lack of structure, clarity, and intention behind it.
So I'm curious:
How do you plan your prompts when working with AI for code generation? How much context and detail do you include up front? Do you start small and iterate, or do you specify the entire mental model before generating anything? What habits or prompting frameworks have actually helped you get clean, reliable code?
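To make the question concrete, here's the kind of structure I've been experimenting with myself: treating the prompt like a small spec before asking for any code. This is just a sketch of my own convention, not any tool's API; the section names (GOAL, STACK, CONSTRAINTS, OUTPUT) are made up for illustration.

```python
def build_prompt(goal: str, stack: str, constraints: list[str]) -> str:
    """Assemble a code-generation prompt with explicit structure,
    so the model sees context and limits before it writes anything."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"GOAL:\n{goal}\n\n"
        f"STACK:\n{stack}\n\n"
        f"CONSTRAINTS:\n{constraint_lines}\n\n"
        "OUTPUT:\nReturn only the code, with comments explaining each step."
    )

# Example: a small, bounded request instead of a vague "build me X"
prompt = build_prompt(
    goal="Add pagination to the /users endpoint",
    stack="FastAPI + SQLAlchemy",
    constraints=["No new dependencies", "Keep the existing response schema"],
)
print(prompt)
```

For me the win isn't the template itself, it's being forced to state constraints up front instead of discovering them three generations deep.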
I've noticed that my workflow has changed completely over the last year. I rarely start a new project with a blank file anymore. Instead, I pick a template, reuse snippets, or let an AI helper suggest the structure, and then I just vibe my way through the build.
It's faster, but sometimes I miss the old blank-screen energy, when every line felt handcrafted.
AI dev tools are evolving crazy fast, every few weeks there's a new must-try for vibe coders.
Some people are building full products with @ChatGPT by OpenAI and @Replit, others swear by @Cursor and @Claude by Anthropic, and a few are mixing @Lovable + @v0 by Vercel + @bolt.new to ship apps in record time.
I've been refining my own vibe stack lately, trying to find that sweet spot between speed, control, and creativity. It made me wonder: what does your setup look like right now?