We're trying something new on Thursday: Alpha Day.
The idea is simple. If this is the first time you're launching your product anywhere, you can tag it alpha and get a boost to your points (and land on a special leaderboard).
I keep hearing and reading that programmers are at risk: basically, any job that AI can replace is at risk.
Yesterday, Lenny Rachitsky shared a post showing that PM openings are at their highest level since 2022.
At the same time, I read that the giants (Meta, Amazon, etc.) are laying off engineers because of AI; then I read that they had to hire some back because something AI was managing went wrong.
In the summer, the founder of a VC-backed startup approached me about managing his LinkedIn profile, which he uses to acquire clients (personal brand building).
It was a classic job interview, where the assumption is that you create conversions: you're active on someone's account building their personal brand, and as the account grows, people notice you, write to you, you arrange a call, and maybe close a sale.
I asked whether the position included any equity, because the other roles they had advertised (tech, GTM, sales) all offered at least a small percentage...
The answer was "No, this position does not include equity."
Lovable hit $400M ARR with 146 employees. That's $2.7M revenue per employee. Midjourney goes even further. $500M revenue. ~110 employees. $0 raised from investors. That's over $4.5M per employee. Bootstrapped. For context: most SaaS companies celebrate $200k-$300k per employee as a strong benchmark.
If 146 people can generate $400M, what does the math look like at 10?
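The per-employee arithmetic quoted above is easy to verify; a quick sketch:

```python
# Sanity-check the revenue-per-employee figures quoted above.
def revenue_per_employee(arr_usd: float, headcount: int) -> float:
    return arr_usd / headcount

lovable = revenue_per_employee(400_000_000, 146)     # ~= $2.74M per employee
midjourney = revenue_per_employee(500_000_000, 110)  # ~= $4.55M per employee

# The hypothetical 10-person team from the question above:
ten_person = revenue_per_employee(400_000_000, 10)   # $40M per employee
```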
Laravel just shipped a new laravel.com with a bold headline:
The clean stack for Artisans and agents.
Laravel has always been opinionated and seems like a solid option in this AI era. Has anyone made the switch since they started working with coding agents?
Last month, I did something that felt slightly insane.
I took our product description, fed it into ChatGPT, and asked it to build a competitor. Not a parody. A real competitor. Better features, better positioning, better everything. I told it to be ruthless.
It did!
The output was polished. Confident. Structured like a real go-to-market plan. It named features we don't have. It positioned itself against us. It looked like a threat on paper.
In a discussion forum with @monatruong_murror, we talked about how AI can help us learn things that aren't naturally familiar to us, like programming.
The biggest challenge was, and still is: getting AI to guide you toward a solution instead of just giving you the answer.
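One common workaround is a "tutor mode" system prompt. A minimal sketch, assuming an OpenAI-style chat message format; the model name and prompt wording are my own illustration, not from the discussion:

```python
# Hypothetical "tutor mode" prompt: ask the model to guide, not to answer.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a programming tutor. Never give the full solution. "
    "Instead: ask one guiding question at a time, point to the relevant "
    "concept or docs, and only confirm or correct the learner's own attempt."
)

def build_request(user_question: str) -> dict:
    """Assemble a chat-completion payload (OpenAI-style message format)."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
    }
```

The key design choice is putting the constraint in the system role, where it persists across turns, rather than repeating "don't give me the answer" in every user message.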
Last week Garry Tan (CEO of Y Combinator) shared his entire Claude Code setup on GitHub and called it "god mode."
He's sleeping 4 hours a night. Running 10 AI workers across 3 projects simultaneously. And openly saying he rebuilt a startup that once took $10M and 10 people. Alone, with agents.
Here's the painful truth: Your site is translated. Your AI visibility isn't.
When a German user asks ChatGPT in German about your category, you're invisible. When a French prospect searches Perplexity in French, your competitor shows up.
In the last week, I was restricted twice on LinkedIn, where I have a community of more than 8k people (the first time for 48 hours, the second for 72).
The Perplexity CTO announced at their developer conference this week that they're moving away from MCP internally. Garry Tan tweeted "MCP sucks honestly." Pieter Levels called it useless. The "MCP is dead, long live the CLI" post hit the top of Hacker News. OpenClaw, the hottest open-source agent project in the world, deliberately chose not to support it.
The argument: MCP tool definitions eat your context window. Auth is clunky. The whole thing is an unnecessary abstraction over APIs that already exist. LLMs are smart enough to call APIs directly, or use CLIs, or write their own integration code. Why add a protocol layer?
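For illustration, here is a rough sketch of the context-cost part of that argument, using an invented weather tool and a crude ~4-characters-per-token heuristic (an assumption, not a measurement):

```python
import json

# Hypothetical illustration: an MCP-style tool definition must be serialized
# into the model's context on every request, while a direct API or CLI call
# costs nothing until the model actually emits it.
weather_tool = {
    "name": "get_weather",
    "description": "Fetch the current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["city"],
    },
}

schema_text = json.dumps(weather_tool)
# Rough heuristic: ~4 characters per token (an assumption, not a measurement).
tokens_per_tool = len(schema_text) // 4

# With dozens of tools registered, the overhead accumulates before the
# conversation even starts:
overhead_50_tools = 50 * tokens_per_tool
```

Even this tiny schema costs dozens of tokens per request; a server exposing fifty tools pays that fifty times over, which is the context-window complaint in concrete terms.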
I'm Mason, and I built BugStack as an internal tool for my other startup, FuelScout. I was a solo founder running a product with real users, and production errors kept hitting while I was away. By the time I caught them, users had already churned.
The fix was almost always a few lines of code. So I asked myself: if I can read a stack trace, pull the relevant files, and write a fix, why can't an AI agent do the same thing end to end?
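The first step of that loop can be sketched in a few lines. A hypothetical illustration (the file, trace, and heuristic are invented for the example; this is not BugStack's actual pipeline):

```python
import re

# Parse a Python stack trace, collect the frames, and pick the deepest one
# as the place an agent would start reading code and drafting a fix.
FRAME_RE = re.compile(r'File "(?P<path>[^"]+)", line (?P<line>\d+), in (?P<func>\w+)')

def extract_frames(traceback_text: str) -> list[dict]:
    """Return every (path, line, function) frame found in the traceback."""
    return [
        {"path": m["path"], "line": int(m["line"]), "func": m["func"]}
        for m in FRAME_RE.finditer(traceback_text)
    ]

def pick_fix_target(frames: list[dict]) -> dict:
    """Heuristic: the deepest (last) frame is usually where the fix goes."""
    return frames[-1]

trace = '''Traceback (most recent call last):
  File "app/views.py", line 42, in checkout
    total = price / quantity
ZeroDivisionError: division by zero'''

frames = extract_frames(trace)
target = pick_fix_target(frames)  # {'path': 'app/views.py', 'line': 42, 'func': 'checkout'}
```

From `target`, an agent would read the named file around the named line and propose a patch; the parsing step above is the deterministic part, and the fix generation is where the model comes in.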