Build a voice AI agent in minutes, straight from your terminal. One command scaffolds your project with built-in tunneling, sample backends, and global edge deployment. Connect via webhook, use existing agent logic, pay only for speech (silence is free).
Layercode unlocks a huge world of integrated voice-based interactions for apps now that LLMs can handle real-time understanding and response. Having a pre-built, scalable infrastructure ready in minutes to build on means you can focus on crafting the actual experience rather than wrangling webhook APIs, latency, and security issues. Worth experimenting with if you’re exploring next-gen conversational products.
Layercode
Hey Product Hunt 👋
I’m Damien, CEO & co-founder of Layercode. I’ve spent the last 20 years building infrastructure and dev tools (as well as drones and sometimes, electric motorcycles). Most recently, I’ve been obsessed with talking to computers.
Building voice AI agents is tricky, and there’s a lot to remember and set up, even when developing locally. So we built Layercode CLI: a command-line interface designed to help you get started with a voice AI agent in minutes.
It’s super easy to get started:
1. Run `npx @layercode/cli init`
2. Authenticate with your Layercode account and choose a template
3. Our CLI sets up your tunnel and webhook URLs for local development
(We’ve tried to make it easy to build and deploy from there, too; more info on where to go next is in our full quickstart guide: https://docs.layercode.com/tutor...)
What you get with Layercode CLI:
- Zero to production in minutes: Initialize a voice AI agent with one command, and get real-time speech-to-text (STT), text-to-speech (TTS), turn-taking, and low-latency audio delivery to our global edge network.
- Built-in tunneling: Test locally using our integrated tunnel; no need to copy-paste your webhook URLs into our dashboard.
- Sample agent backend: Backend logic receives conversation transcripts and responds based on your prompt and the tool calls you define. Deployable anywhere.
- Complete backend control: Use any LLM, and control your own agent logic and tools — Layercode handles all of the voice infrastructure.
- Edge-native voice AI deployment: Layercode is deployed to 330+ locations that process audio within ~50ms of users, anywhere in the world.
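To make "complete backend control" concrete, here's a minimal sketch of what a webhook-driven agent backend could look like. Everything here is hypothetical for illustration — the payload fields (`turn_id`, `text`) and response shape are assumptions, not Layercode's actual webhook schema — but the idea is that your endpoint receives a transcript for each user turn and returns whatever your own logic or LLM decides the agent should say.

```typescript
// Hypothetical webhook backend sketch. Field names and shapes below are
// assumptions for illustration, NOT Layercode's actual payload schema.

type TurnPayload = { turn_id: string; text: string }; // assumed: one transcribed user turn
type TurnResponse = { type: "say"; text: string };    // assumed: text for the voice layer to speak

// Stand-in for your agent logic — swap this for a call to any LLM you like.
function runAgentLogic(userText: string): string {
  if (/\bhello\b/i.test(userText)) {
    return "Hi there! How can I help?";
  }
  return `You said: ${userText}`;
}

// Pure handler: given a transcript turn, decide what the agent says next.
function handleTurn(payload: TurnPayload): TurnResponse {
  return { type: "say", text: runAgentLogic(payload.text) };
}
```

Wired up behind the webhook URL the CLI tunnels for you, a handler like this is roughly all the agent-side code you'd write locally; STT, TTS, and turn-taking stay on the voice-infrastructure side.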
We’d love feedback from anyone building agents — especially if you’re experimenting with voice.
What feels smooth? What doesn’t? What’s missing for your projects?