I don't actually like using the term "vibe coding". We've been software developers for over a decade, are not one-shotting features, and have a very opinionated and strict dev process.
I work at an early-stage startup and I'd estimate 70-80% of our codebase is vibe coded (510k lines). To be clear, it's not one-shot "build this feature." It's more like, "implement get_slim_documents for Jira in the exact same way we did it for the Confluence connector." Comfort with AI coding tools is actually something we gauge during interviews/work trials. Looking at our peer companies, it's exactly the same.

My hypothesis/assertion is that companies founded ~2022+ are fundamentally intertwined with "vibe coding." In five years, programming will connote vibe coding more than it will connote non-AI-assisted work.

Am I crazy? Pigeon-holed in the SF startup world? Naive? Would love to hear more thoughts/diverse perspectives on this.
So I vibe-coded Worktagg, a simple, AI-driven platform that drops a real case study, expert takeaways, and a quick quiz into your inbox every day. The goal is to turn case prep into a small daily habit instead of a giant cram session, and to give anyone the same quality of insights you'd get inside a top consulting firm.
It's built to be lightweight, practical, and actually enjoyable to use. I use it every day since I love solving these cases. In case you wanna try it!
Airbolt lets you securely call LLM APIs with zero backend. Just add our client SDK to your app and start making inference calls with best practices built in.
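To give a feel for what "zero backend" means in practice, here's a minimal sketch. Everything in it is hypothetical: the package name, `createClient`, and `chat` are illustrative placeholders, not Airbolt's documented API.

```typescript
// Illustrative sketch only: package, function, and option names are
// hypothetical placeholders, not Airbolt's actual API.
import { createClient } from "@airbolt/sdk";

// "Zero backend": the SDK talks to a hosted proxy, so no provider
// API key ever ships inside the client code.
const airbolt = createClient({ appId: "my-app-id" });

const reply = await airbolt.chat([
  { role: "user", content: "Summarize this support ticket in one sentence." },
]);

console.log(reply);
```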
I just wanted to take 10 seconds to introduce UItoVIBE.com to the vibe coding community. I built it to help vibe coders get a nice-looking first version of the apps they're making.
After rebuilding the same project three times because AI forgot my architecture, I got fed up and built @CodeRide (Beta) with my team.
The problem: AI code assistants lose track of your project between sessions. Every time I start coding with Cursor, Claude, or any AI assistant, I waste time re-explaining my codebase structure, architectural decisions, and coding patterns.
What we built: The project management tool for coding agents, using MCP (Model Context Protocol). Upload your project documentation or PRD, and CodeRide breaks it into optimized, fully contextual tasks ready for your AI agent.
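To make the MCP part concrete, here's a rough sketch of how an agent could pull a task from a server like this. The imports use the real @modelcontextprotocol/sdk, but the `coderide-mcp` command and the `get_task` tool name are my hypothetical placeholders, not CodeRide's actual interface.

```typescript
// Sketch of an agent-side MCP client fetching task context from a server.
// The server command and tool name below are hypothetical placeholders.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "coderide-mcp"], // hypothetical package name
});

const client = new Client({ name: "demo-agent", version: "0.1.0" });
await client.connect(transport);

// Discover what the server exposes, then fetch one task with its context.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const task = await client.callTool({
  name: "get_task", // hypothetical tool name
  arguments: { taskId: "T-1" },
});
console.log(task);
```

The point is that the agent pulls architecture and task context over MCP at the start of each session, instead of you re-pasting it by hand.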
I recently switched main agents from Claude Code to Codex, and wow, the code quality feels way higher. But I'm noticing that Codex doesn't explain its decisions or reasoning as clearly as other models. Is it just me, or does Codex skip the 'why' behind the code more often?
Are there tricks to coax more reasoning out of Codex? Curious if anyone else has noticed this or found a good workaround.
AI has definitely come a long way and has progressed beautifully on the front end, but when it comes to fixing bad backend code, it still struggles a lot. The complexity often requires human intuition and deep understanding. It can help spot some issues or suggest improvements, but relying solely on AI to untangle messy backend problems often falls short. Has anyone had surprisingly good or bad experiences with AI in this area? I'd love some advice (or you're free to rant).
One for chatbot templates in 2018 that was used by 700+ marketing agencies.
Another in 2020 for job seekers in the U.S., which reached 5.5M users.
So let's suppose I have some experience. You can read about me on Bootstrappers, TechCrunch, Dev.to.
Now I'm considering building a marketplace where creators can list their vibe-coded projects along with the code, a live demo link, etc. The idea is to target: