Launching today
Sled

Run your coding agent from your phone, with voice

Sled lets you run your coding agent from your phone using voice. Coding agents need frequent input, but when you step away from your desk they just sit idle. Sled solves this by giving you a voice interface that connects securely to your local agent over Tailscale. Your code never leaves your machine. You talk, the agent works locally, and you hear the result read back. Works with Claude Code, OpenAI Codex, and Gemini CLI. Fully open source.
Free

Jack Bridger
Hey Product Hunt 👋 We built Sled because our coding agents kept getting stuck whenever we stepped away from our desks. They need input every 10–60 minutes — and terminals are terrible for that when you’re not sitting in front of them. Sled gives you a voice interface to your local coding agent. You talk from your phone, the agent runs on your computer, and you hear what it did. No code leaves your machine — everything goes over Tailscale. It’s fully open source and takes about 5 minutes to set up. We’d love feedback, questions, or ideas for where this could go next 🛷
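The flow Jack describes (phone sends transcribed text, agent runs locally, result comes back) can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not Sled's actual code: `AGENT_CMD` here is just `cat` as a stand-in for a real agent CLI, and the server binds to loopback, whereas in the real setup the endpoint would be reachable only over the Tailscale tailnet.

```python
# Sketch of the pattern: a local HTTP endpoint on the desktop receives
# transcribed text from the phone and pipes it to a local agent process.
# Hypothetical stand-ins: AGENT_CMD is `cat` instead of a real agent CLI,
# and we bind to loopback instead of a Tailscale address.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

AGENT_CMD = ["cat"]  # stand-in for a local coding-agent CLI

class PromptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        prompt = json.loads(self.rfile.read(length))["prompt"]
        # Run the "agent" locally; code and prompts never leave the machine.
        result = subprocess.run(
            AGENT_CMD, input=prompt, capture_output=True, text=True
        )
        body = json.dumps({"reply": result.stdout}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(host="127.0.0.1", port=0):
    # Bind to loopback for this sketch; in the real setup you would bind
    # to the machine's Tailscale address so only tailnet devices reach it.
    return HTTPServer((host, port), PromptHandler)
```

Because the transport is the tailnet rather than a public tunnel, nothing needs to be exposed to the internet, which is what makes the "your code never leaves your machine" claim workable.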
Zolani Matebese

@jack_bridger Hi Jack, congrats on the launch. This is brilliant in theory; tell me how you've solved the silly practical problems: driving/transit, looking up architecture/features, and other desk-bound stuff?

Jack Bridger

@zolani_matebese Thanks! Good questions! I would say this is a supplement to desk work, not a complete replacement. And of course I don't recommend driving with it, but in theory it should be good anywhere it's safe to speak aloud.

Mohsin Ali ✪

@jack_bridger congrats!

What happens if the speech-to-text hallucinates a command? Does it have a dry-run mode or confirmation step for high-risk shell commands?

Damien Tanner

@jack_bridger @mohsinproduct Voice messages don't auto-send, so you can review the transcribed text before sending it.

Lakshay Gupta

What happens if the agent needs code review or visual context?

Jack Bridger

@lak7 For visual context, you can connect a browser skill and trust the agent to make a good choice. It's more for communicating high-level tasks, on the basis that you already trust it to do good stuff.

Chilarai M

Really awesome team! Super excited to try it out

Jack Bridger

@chilarai Thanks mate, hope you're well!

Chilarai M

@jack_bridger I'm great. Really happy to see you topping the charts

Samet Sezer

Solving the need to babysit local agents every 10–60 minutes is such a smart wedge. Is this agnostic to the agent framework I'm using, or is it optimized for specific ones like OpenDevin?

Jack Bridger

@samet_sezer Thanks! It's agnostic to any agent framework that supports ACP (the Agent Client Protocol).

Jeetendra Kumar

Congrats on the launch. What about reviewing the code state?

Jack Bridger

@jeetendra_kumar2 Right now it's focused on just telling the agent to do stuff and trusting it (though you can stop its activity completely), and less focused on reviews for this launch. But based on all the requests, we should build in a great review flow!

Cruise Chen

Sled is absolutely a cure for most vibe builders: a human in the loop is necessary! Great launch, team!

Marek Nalikowski

Well done, congrats on the launch!
