Stimm

A modular, real-time AI voice assistant platform

The open-source voice agent platform. A modular, real-time AI voice assistant platform built with Python (FastAPI) and Next.js, it provides flexible infrastructure for creating, managing, and interacting with voice agents using various LLM, TTS, and STT providers, and orchestrates ultra-low latency AI pipelines for real-time conversations over WebRTC. Source: stimm-ai/stimm

Etienne Lescot
Hey Product Hunt! 👋 I’m Etienne, the creator of Stimm.

The backstory: I was trying to build voice agents for a personal project, but I kept running into the same frustration: latency. The proprietary APIs were easy to use but slow (2-3s delay), and building a raw WebRTC pipeline from scratch was a nightmare of edge cases. I realized there wasn't a good open-source orchestration layer that was actually fast. So, I built one myself.

What is Stimm? It's an open-source platform to orchestrate ultra-low latency AI voice pipelines. I designed it to handle the "boring" infrastructure parts (interruptions, VAD, WebRTC) so you can focus on the agent's personality and logic.

Under the hood:
⚡ Ultra-low latency: built on Python (FastAPI) & LiveKit.
🧩 Modular: I made it easy to swap providers (OpenAI, Mistral, Deepgram, ElevenLabs, etc.).
🐳 Self-hostable: Dockerized and ready to deploy on your own infra (AGPL v3).

Why I'm posting today: as a solo developer, getting feedback on the architecture is crucial. I’m looking for other devs to try it out, break it, and let me know if the latency feels "real-time" enough for your needs.

I'll be hanging out in the comments all day. Ask me anything about the tech stack or the challenges of handling real-time audio in Python!

Cheers,
Etienne
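To make the "easy to swap providers" point concrete, here is a minimal, hypothetical sketch of what a provider-agnostic voice pipeline can look like in Python. The interfaces and class names below are illustrative assumptions, not Stimm's actual API; a real deployment would plug in streaming clients for Deepgram, OpenAI, or ElevenLabs and move audio over LiveKit/WebRTC rather than in-process byte strings.

```python
# Hypothetical sketch (not Stimm's actual API): the "swap providers" idea,
# expressed as small interfaces that any STT / LLM / TTS backend could implement.
from __future__ import annotations

from dataclasses import dataclass
from typing import Protocol


class STTProvider(Protocol):
    def transcribe(self, audio: bytes) -> str: ...


class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...


class TTSProvider(Protocol):
    def synthesize(self, text: str) -> bytes: ...


@dataclass
class VoicePipeline:
    """One conversational turn: audio in -> text -> reply -> audio out.

    Real pipelines stream partial results and handle interruptions/VAD;
    this sketch only shows where provider modularity lives.
    """
    stt: STTProvider
    llm: LLMProvider
    tts: TTSProvider

    def run_turn(self, audio_in: bytes) -> bytes:
        text = self.stt.transcribe(audio_in)   # speech -> text
        reply = self.llm.complete(text)        # text -> agent reply
        return self.tts.synthesize(reply)      # reply -> speech


# Dummy providers so the sketch runs without any API keys.
class EchoSTT:
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8", errors="ignore")


class UppercaseLLM:
    def complete(self, prompt: str) -> str:
        return prompt.upper()


class BytesTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")


if __name__ == "__main__":
    pipeline = VoicePipeline(stt=EchoSTT(), llm=UppercaseLLM(), tts=BytesTTS())
    print(pipeline.run_turn(b"hello stimm"))  # b'HELLO STIMM'
```

Swapping a provider in this model means constructing the pipeline with a different implementation of the same small interface, which is the kind of modularity the "Under the hood" list describes.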