Claude source code leak just showed how AI products really work… surprising or expected?
Came across the recent Claude Code leak from Anthropic, and what stood out wasn't the leak itself, but what it revealed about how these systems actually work.
A source map file accidentally exposed ~500k lines of TypeScript
Turns out Claude Code is basically a multi-step prompt orchestration system, not some mysterious black box
Includes things like:
layered prompt pipelines ("prompt sandwich")
fake tools to prevent model distillation
simple frustration detection (regex for "rage prompts"; see the sketch below)
Even hints at future features like background agents and persistent memory systems
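To be clear, I haven't verified the exact patterns, but "regex for rage prompts" really can be that simple. A hypothetical TypeScript sketch, with patterns and threshold made up for illustration (not the leaked code):

```typescript
// Hypothetical sketch of regex-based frustration detection, not the leaked code.
// The patterns and the threshold below are assumptions for illustration only.
const RAGE_PATTERNS: RegExp[] = [
  /\b(wtf|ffs|wth)\b/i,
  /\b(stop|quit)\s+(doing|ignoring)\b/i,
  /\byou('?re| are)\s+(not listening|useless|wrong again)\b/i,
  /!{3,}/,          // runs of exclamation marks
  /\b[A-Z]{4,}\b/,  // shouting in all caps
];

function looksFrustrated(prompt: string): boolean {
  const hits = RAGE_PATTERNS.filter((re) => re.test(prompt)).length;
  return hits >= 2; // require a couple of signals before reacting
}

// A wrapper could then soften behavior, e.g. slow down and ask a clarifying question.
console.log(looksFrustrated("STOP ignoring my instructions!!!")); // true
```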
What's interesting is this:
It kind of confirms that the real product layer in AI isn't just the model; it's everything wrapped around it.
Which raises a few questions:
Advice for going open source
Planning to open source Voiden (https://voiden.md/) soon.
This isn't a "maybe someday" idea anymore; it's a deliberate step we want to take in the coming weeks.
What AI project can I build solo in 30 hours? Need ideas!
Hey!
I want to dive into practical applications of generative AI and have set myself a challenge to develop a useful product in 30 hours of focused work. My goal is not just to experiment but to create something with genuine practical value.
I have basic programming skills and can use any available APIs and tools (GPT-4, Claude, Stable Diffusion, etc.). The ideal project should:
- Solve a real problem
Has anyone built their own CRM instead of using one?
I'm a freelance consultant. Tried Folk, Attio, HubSpot free, Google Sheets. Never stuck with any of them. The problem wasn't the features, it was that I never went back to the tool.
So I built a CRM inside my AI assistant (Claude + MCP server + Supabase). Six contact lists, email drafting, a Chrome extension that scrapes LinkedIn profiles at $0.001 each. Total cost: $10.
The whole thing lives where I already work. That's why I actually use it.
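If you're wondering what one of those pieces looks like mechanically, here's a simplified sketch of a contact-search MCP tool backed by Supabase. The table name, columns, and env vars are placeholders rather than my exact setup:

```typescript
// Hypothetical sketch of a "search_contacts" MCP tool backed by Supabase.
// Table name, columns, and env vars are assumptions for illustration.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { createClient } from "@supabase/supabase-js";
import { z } from "zod";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!
);

const server = new McpServer({ name: "crm", version: "0.1.0" });

server.tool(
  "search_contacts",
  "Search CRM contacts by name or company",
  { query: z.string() },
  async ({ query }) => {
    const { data, error } = await supabase
      .from("contacts")                      // assumed table
      .select("name, company, email, notes") // assumed columns
      .or(`name.ilike.%${query}%,company.ilike.%${query}%`)
      .limit(10);
    if (error) throw new Error(error.message);
    return { content: [{ type: "text", text: JSON.stringify(data, null, 2) }] };
  }
);

await server.connect(new StdioServerTransport());
```

Because the assistant calls tools like this mid-conversation, looking up a contact and drafting the email happen in the same place.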
Why good prompting is the #1 skill most AI builders still ignore
I've been experimenting a lot with AI tools like V0, Lovable, and Bolt.new to build small products and prototypes.
One pattern keeps showing up: most ideas don't fail because the idea is bad. They fail because the prompt is vague, confusing, or incomplete.
AI isn't a mind reader; it does exactly what you ask. If your prompt is fuzzy, your output will be too.
For example, I recently built PublicWall off a single well-structured prompt. Before that, I wasted hours on iterations that were mostly me not clarifying what I actually wanted the AI to do.
Not sure if I'll close the loop yet...
Do any of the more experienced coders ever feel like closing the loop in vibecoding is a bit too much control to let go of? I'm not sure if I can categorize myself as an experienced coder. I've been mainly self-taught and have been doing it for almost a decade now, so I have strong opinions about what I consider best practices. (I actually love refactoring, because it feels like order is once again restored in the codebase.)
However, with AI becoming so good at so many parts of product development, I'm starting to feel like a project manager or, god forbid, a scrum master... On the one hand, it's nice that I get to spend most of my time thinking about the problems themselves rather than the implementation of the solution. But on the other hand, my coding style is a bit like trying to find a way out of a dark room by running into the wall repeatedly as opposed to planning a way out. And I kind of miss running into the wall, to be honest. Now I'm enviously watching the AI hit the wall.
2026: X projects in X months - solve for X
Two days ago I saw this thread about how we are seeing more launches in the post-GPT era.
And a question was born in my head: what quantity is optimal now? You can often see a trend among builders on X: they launch a project per month, then roughly 4 months later one project takes off and we don't see new projects for the next 6 months because the person is busy scaling (and that's OK; testing a hypothesis shouldn't take much time).
But still, what pace should be considered right? 12 in 12 months feels slow in modern reality. Launch a product in a day? Unrealistic (SEO, ads, app approvals, various settings and optimizations). Theeeeen... 48 products a year?
Or should we look at this from another angle, where LLMs allow us to create 12 products in 12 months with more features and better quality? What's the community's opinion?
A little less vibey?
At @UXPin we've just deployed the prompt enhancer for our AI component creator.
From now on, short prompts will be evaluated and refined if the AI considers them too weak. This aims to improve the AI's output, with the prototypes it returns being more detailed and diverse.
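For the curious, the gate itself can be sketched in a few lines: score the prompt, and if it's too thin, ask a model to expand it before the component generator sees it. The version below is a simplified illustration, not our production code; the function name and heuristic are placeholders:

```typescript
// Hypothetical prompt-enhancer gate; not the actual implementation.
// `callModel` stands in for whatever LLM call the product makes.
async function enhanceIfWeak(
  prompt: string,
  callModel: (p: string) => Promise<string>
): Promise<string> {
  // Crude weakness heuristic: very short prompts with no detail.
  const isWeak = prompt.trim().split(/\s+/).length < 5;
  if (!isWeak) return prompt;

  const instruction =
    "Rewrite the following UI prompt so it specifies layout, key components, " +
    "content sections, and interaction states. Keep the user's intent.\n\n" +
    "Prompt: " + prompt;
  return callModel(instruction);
}
```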
Example and output below:
Original prompt: 'news portal'
Refined prompt:
Is “local-first development” finally ready for serious production apps in 2026?
Over the last year, I've been experimenting more with local-first patterns: apps that prioritize offline functionality and sync later instead of depending on constant server calls.
What used to feel experimental now feels surprisingly stable. Faster UI, fewer loading states, and a smoother user experience overall.
I rebuilt a small side project recently with a local-first approach, and the difference in responsiveness was noticeable. But it also introduced new challenges around conflict resolution and state consistency.
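To make the conflict-resolution pain concrete, here's the naive baseline most people start with: a last-write-wins merge. It's a made-up sketch (real local-first stacks use CRDTs or similar), and it silently drops concurrent edits, which is exactly the problem:

```typescript
// Hypothetical last-write-wins merge for a locally cached record.
// Real local-first stacks (CRDTs, operational transforms) do much more than this.
interface Note {
  id: string;
  body: string;
  updatedAt: number; // ms since epoch, from the client clock: already a source of trouble
}

function merge(local: Note, remote: Note): Note {
  // Whichever copy was touched most recently survives; concurrent edits
  // to different parts of `body` are silently lost.
  return remote.updatedAt > local.updatedAt ? remote : local;
}

// Sync loop sketch: pull remote records and merge them per id.
async function sync(localNotes: Note[], fetchRemote: () => Promise<Note[]>) {
  const remote = await fetchRemote();
  const byId = new Map(remote.map((n) => [n.id, n]));
  return localNotes.map((n) => (byId.has(n.id) ? merge(n, byId.get(n.id)!) : n));
}
```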
It makes me wonder:
Testing APIs. Biggest Problems?
Hey there,
What are the biggest issues/problems you currently have with building and testing APIs using existing tools like Postman, Insomnia, etc.?
