Layercode CLI - Build voice AI agents with one command

Build a voice AI agent in minutes, straight from your terminal. One command scaffolds your project with built-in tunneling, sample backends, and global edge deployment. Connect via webhook, use existing agent logic, pay only for speech (silence is free).

Damien Tanner

Hey Product Hunt 👋

I’m Damien, CEO & co-founder of Layercode. I’ve spent the last 20 years building infrastructure and dev tools (as well as drones and, occasionally, electric motorcycles). Most recently, I’ve been obsessed with talking to computers.

Building voice AI agents is tricky, and there’s a lot to remember and set up, even when developing locally. So we built Layercode CLI, a command-line interface designed to help you get started with a voice AI agent in minutes.

It’s super easy to get started:

  1. Run `npx @layercode/cli init`

  2. Authenticate with your Layercode account and choose a template

Then, our CLI sets up your tunnel and webhook URLs for local development.

(We’ve tried to make it easy to build and deploy from there, too; more info on where to go next is in our full quickstart guide: https://docs.layercode.com/tutor...)

What you get with Layercode CLI:

- Zero to production in minutes: Initialize a voice AI agent with one command and get real-time speech-to-text (STT), text-to-speech (TTS), turn-taking, and low-latency audio delivery over our global edge network.

- Built-in tunneling: Test locally using our integrated tunnel; no need to copy-paste your webhook URLs into our dashboard.

- Sample agent backend: Backend logic receives conversation transcripts and responds based on your prompt and the tool calls you define. Deployable anywhere (see the sketch after this list).

- Complete backend control: Use any LLM, and control your own agent logic and tools — Layercode handles all of the voice infrastructure.

- Edge-native voice AI deployment: Layercode is deployed to 330+ locations that process audio within ~50ms of users, anywhere in the world.
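To make the sample backend concrete, here’s a rough sketch of what a webhook-driven agent backend can look like. The `transcript` and `text` field names below are illustrative assumptions, not our exact webhook schema (the quickstart has the real contract):

```typescript
// Rough sketch of a webhook-driven agent backend (Node 18+).
// NOTE: the `transcript` and `text` field names are illustrative
// assumptions, not the exact Layercode webhook schema.
import { createServer } from "node:http";

const server = createServer(async (req, res) => {
  let body = "";
  for await (const chunk of req) body += chunk;
  const { transcript } = JSON.parse(body); // assumed field name

  // Your agent logic goes here: call any LLM, run tools, etc.
  const reply = `You said: ${transcript}`;

  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ text: reply })); // assumed response shape
});

// The CLI's built-in tunnel forwards webhook traffic to this port.
server.listen(3000);
```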

We’d love feedback from anyone building agents — especially if you’re experimenting with voice.

What feels smooth? What doesn't? What’s missing for your projects?

Mohsin Ali ✪

@layercode  @dctanner Vercel moment for voice AI! Congrats to you Damien and the team on an excellent release.

Aidan Hornsby

Thank you @mohsinproduct! Very kind words :) 

Aidan Hornsby

Hey Product Hunters!

We also just launched a startup program designed to help early-stage teams building voice AI experiences.

If you’re a startup building voice AI agents or features, we’re offering $2,000 in free credits, a direct line to our dev team for support while you build, and priority access to test new models and features.

Apply here → https://layercode.com/startups

fmerian
Hunter

@aidanhornsby welcome back to Product Hunt, Aidan! 🐐

Aidan Hornsby

Thank you @fmerian! It's great to be back ☺️ 

Roozbeh Firoozmand

Love it for instant local tests plus edge deploy is clutch

Jack Bridger

@roozbehfirouz Thanks, appreciate the support!

Aidan Hornsby

@roozbehfirouz Yes! This is exactly why we built the CLI. More is moving to the edge soon: we want to push as much of the voice and audio processing as close to the user as we can, to make sure people anywhere in the world (not just North America) get a smooth conversational experience when talking to Layercode agents.

fmerian
Hunter

I've been following your journey through /p/layercode and found your TTS voice AI model guide, your guide to writing prompts for voice AI agents, and your post on how you use coding agents super insightful. Super pumped to see you launch this first release.

Curious if any CLI tools inspired you when building @Layercode CLI, makers?

Enjoy your launch day, and keep up the great work.

Jack Bridger

@fmerian Yes, Trigger.dev and Cloudflare Wrangler were two big ones! Especially Trigger.dev as we love their onboarding experience.

fmerian
Hunter

@jackbridger oh yes @maverickdotdev @samejr and team are the best

James Ritchie

@fmerian as are you guys! 🙌
Abdul Rehman

How easy is it to swap between different LLMs mid-project?

Aidan Hornsby

@abod_rehman super easy! You can swap the model provider for any agent straight from the voice picker in the agent's dashboard.

Stu Kennedy

@abod_rehman we’ve built Layercode so you can bring your own backend agent, so you’re in control of which LLM you’re using. You could even swap it mid-conversation if you wanted to.
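For illustration, a swap can be as simple as a per-turn branch in your backend. The provider names and helper functions below are hypothetical stand-ins for whichever LLM SDKs you actually use:

```typescript
// Hypothetical sketch: choosing an LLM per turn in your own backend.
// callOpenAI / callAnthropic stand in for your actual SDK calls.
type Provider = "openai" | "anthropic";

async function callOpenAI(prompt: string): Promise<string> {
  return `openai says: ${prompt}`; // replace with a real SDK call
}

async function callAnthropic(prompt: string): Promise<string> {
  return `anthropic says: ${prompt}`; // replace with a real SDK call
}

async function generateReply(transcript: string, turn: number): Promise<string> {
  // Purely illustrative policy: switch providers after the tenth turn.
  const provider: Provider = turn > 10 ? "anthropic" : "openai";
  return provider === "openai"
    ? callOpenAI(transcript)
    : callAnthropic(transcript);
}
```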
Allegra Poschmann

This looks cool! How many templates are available?

Jack Bridger

@allegra_poschmann1 Thanks so much! We have seven templates right now:

  • Customer Support Agent

  • Executive Assistant

  • Drive-Thru Assistant

  • E-Commerce Voice Assistant

  • Automotive Assistant

  • Shipping & Logistics Assistant

  • Helpful Assistant

Alex Slade

Nice!

I used you guys on a personal project a while back and loved the output, but connecting it up to my existing backend obviously had some manual steps and upkeep. Can I be lazy and ask for your summary of how this helps existing backends and ongoing development? Also, if you have any opinions on where the CLI will go next, that would be useful to know.

Damien Tanner

@alexheeton we saw this pain point, and that’s why we made the CLI. It does all the config setup for you and connects your agent backend to the Layercode voice pipeline. It’s optimized for new projects today, but we have features on the roadmap for connecting existing projects.

Alex Slade

@dctanner Cool, so there's just some login token / auth step for the CLI and it can handle the rest?

Vladimir Lugovsky

Looks awesome! Can Layercode fallback to non-voice mode (e.g. text), if voice is not available for some reason?

Jack Bridger

@vladimir_lugovsky We're launching this very soon!

Aidan Hornsby

Thanks @vladimir_lugovsky! We don't have native support for this right now (today we handle just the components needed to give an AI agent a voice), but it's absolutely something the backend agent logic we integrate with can be built to handle.

If it's something you're looking at building, drop us a note here or at aidan at layercode dot com and we can help.
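One way to structure that fallback is to share a single agent function between the voice webhook and a text endpoint. This is a sketch under assumed names, not a built-in feature:

```typescript
// Sketch: the same agent logic serving both voice and text.
// The function names and payload fields are illustrative assumptions.
async function runAgent(message: string): Promise<string> {
  // Shared logic: call your LLM, run tools, etc.
  return `Echo: ${message}`;
}

// Voice path: the voice pipeline posts the user's transcript to your webhook.
async function handleVoiceWebhook(payload: { transcript: string }) {
  return { text: await runAgent(payload.transcript) };
}

// Text fallback: your own UI posts typed messages to the same logic.
async function handleTextMessage(payload: { message: string }) {
  return { text: await runAgent(payload.message) };
}
```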

Krishna Gupta

Do you support real-time user interruptions while the agent is speaking? Congrats on the launch! Your posts are super well written!!
Aidan Hornsby

Thanks @krishna_gupta51! Yes, our default 'automatic' mode lets users speak freely and interrupt the AI agent at any time.

Interruption handling and turn-taking have a lot of edge cases, though (e.g. if a user is in a noisy environment, the noise can inadvertently interrupt the AI’s response). Because of this, we let you disable the agent's ability to be interrupted; when interruptions are disabled, the user’s response is only heard after the AI has finished speaking.

We also offer a push-to-talk mode where the user must hold down a button (or key) to speak. We've found this to be more popular than we expected: it can be a very effective way to give the user full control in unpredictable situations, which often makes for a smoother UX.
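To make the three modes concrete, here’s one way to model the barge-in decision. The type and function names are illustrative, not our SDK:

```typescript
// Illustrative model of the three turn-taking modes described above;
// not Layercode SDK types, just a way to reason about the behavior.
type TurnMode = "automatic" | "uninterruptible" | "push_to_talk";

function shouldStopAgentAudio(
  mode: TurnMode,
  userIsSpeaking: boolean,
  talkButtonHeld: boolean,
): boolean {
  switch (mode) {
    case "automatic":
      return userIsSpeaking; // user can barge in at any time
    case "uninterruptible":
      return false; // agent always finishes its response
    case "push_to_talk":
      return talkButtonHeld; // user speaks only while holding the button
  }
}
```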

Any other questions, please let us know!

Armon Arani

This is great. We're looking at something like this to collect info from clients vs form builders.

Aidan Hornsby

@armonarani very smart. We see that people tend to be way more willing to share nuanced feedback via voice vs. text. Will ping you to chat more about that and see if we can help!
