Hey everyone! With the landscape for building voice agents shifting lately, it feels like we're moving away from heavy, manual API orchestration toward something more streamlined.
I'm curious how you're currently architecting voice agents. Specifically: Have you used the Model Context Protocol (MCP) to build or provide real-time data/context to your voice agents? Does it actually streamline your tool-calling, or is it more trouble than it's worth?
Would love to hear what's working (and what's breaking) in your current workflow. Drop your thoughts below!
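For context, here's a minimal sketch of the kind of tool-call round-trip I'm asking about. MCP uses JSON-RPC 2.0 under the hood, and tool invocations go through a `tools/call` request; the tool name and arguments below are hypothetical, just to show the shape an agent would send when it needs fresh data mid-conversation.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (MCP messages are JSON-RPC 2.0).

    tool_name and arguments are whatever the MCP server advertised
    via tools/list -- here they're made up for illustration.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical lookup a voice agent might fire mid-conversation:
req = make_tool_call(1, "get_order_status", {"order_id": "A-1042"})
print(json.dumps(req, indent=2))
```

The appeal (at least on paper) is that the agent only has to speak this one protocol, and the MCP server owns the actual API plumbing, instead of you hand-wiring each integration into the voice loop.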
I don't have a CS degree. Never shipped a product. Never started a company. One month ago I didn't know what a Next.js route was.
I built Four-Leaf.ai, an AI career prep platform with voice mock interviews, resume tailoring, and negotiation coaching. It's live, it has users, and I launched it on Product Hunt today.
If you're on a GLP-1 (Ozempic, Wegovy, Zepbound, etc.) and figuring out what food to buy/eat feels absolutely confusing, we'd love to have you as a beta user on our new app. If you're not on a GLP-1 but you have a health goal (e.g., eat more protein, more fiber, less sugar), we'd love to have you as a beta user, too! Drop a comment if you want to be added to the TestFlight beta group. Beta testers who submit feedback get free access to the app for an entire year :)
Quick one. I've been building software for over 20 years, and I've never done a seasonal discount before. But we just passed 40 free users, and I wanted to give people a reason to jump in this weekend.
The offer:
- Monthly plan locked in at $10/month (normally $29/month)
- That's 66% off, and the rate stays for as long as your subscription is active
I genuinely love listening to podcasts. It's one of the best ways I've found to stay on top of new trends, pick up strategies I wouldn't have discovered otherwise, and come across founders and operators I'd never stumble on through regular reading.
So I'm always on the lookout for new ones worth adding to the rotation.