LFM2-Audio - Real-time audio conversations on-device
LFM2-Audio defines a new class of audio foundation models: lightweight, multimodal, and real-time. By unifying audio understanding and generation in one compact system, it enables conversational AI on devices where speed, privacy, and efficiency matter most.



Replies
Flowtica Scribe
Hi everyone!
Liquid AI's new LFM2-Audio unifies the entire voice stack into a single model.
The traditional (and still mainstream) way to build voice apps is to chain together STT -> LLM -> TTS. It's slow, complex, and not ideal for on-device use.
LFM2-Audio is trying to fix this by integrating that whole process into one lightweight, end-to-end model. It's a 1.5B model that handles speech-to-speech, speech-to-text, and text-to-speech all on its own.
It's built for on-device use and responds in under 100 ms of latency, which is incredibly fast!
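To make the contrast concrete, here is a rough sketch of the two approaches. All of the function and method names are illustrative stubs, not the actual liquid-audio API:

```python
# Conceptual contrast (all functions are illustrative stubs, not real APIs).

def run_stt(audio: bytes) -> str:   # speech -> text (e.g. an ASR model)
    ...

def run_llm(text: str) -> str:      # text -> text (a chat LLM)
    ...

def run_tts(text: str) -> bytes:    # text -> speech (a TTS model)
    ...

# Traditional pipeline: three separate models, each hand-off adds latency.
def classic_voice_turn(user_audio: bytes) -> bytes:
    return run_tts(run_llm(run_stt(user_audio)))

# Unified model such as LFM2-Audio: one end-to-end speech-to-speech call.
def unified_voice_turn(user_audio: bytes, model) -> bytes:
    return model.generate(user_audio)   # hypothetical single call
```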
Congrats on launch! This looks really powerful!
Is it possible to run this on a mobile device yet? Alternatively, do you know of a way to run it in the cloud, and connect to it from a mobile device?
Hi @tleyden!
Let me answer:
Is it possible to run this on a mobile device yet?
Not yet, but soon. We made it tiny (1.5B) so it can run on phones, but we still need to work on the Leap Edge SDK to make the deployment painless.
Do you know of a way to run it in the cloud, and connect to it from a mobile device?
Yes, you can use this open-source Python SDK to build the backend:
https://github.com/Liquid4All/liquid-audio
and wrap it with FastAPI.
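A minimal sketch of what that backend could look like, assuming you expose one HTTP endpoint the mobile app can POST audio to. The `generate_speech_reply` stub below is a placeholder for the real liquid-audio call; check the repo's README for the actual API:

```python
# Minimal FastAPI backend sketch for serving LFM2-Audio to a mobile client.
# The model call below is a stub; swap it for the real liquid-audio API
# (see https://github.com/Liquid4All/liquid-audio for the actual interface).
from fastapi import FastAPI, UploadFile
from fastapi.responses import Response

app = FastAPI()


def generate_speech_reply(user_audio: bytes) -> bytes:
    """Stub standing in for the liquid-audio speech-to-speech call."""
    raise NotImplementedError("Replace with the liquid-audio SDK call")


@app.post("/voice-turn")
async def voice_turn(audio: UploadFile) -> Response:
    # The mobile client uploads a short audio clip (one user turn)...
    user_audio = await audio.read()
    # ...and gets the model's spoken reply back as raw audio bytes.
    reply_audio = generate_speech_reply(user_audio)
    return Response(content=reply_audio, media_type="audio/wav")
```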
Let me ask you something:
> What biz problem do you want to solve using this model?
@tleyden @pau_labarta_bajo what server config is needed to host it?
As a developer, I'd like to compare the hosting cost against the cost of current voice AI API services.
It's a great product; looking forward to LiveKit and Indian native language support, thank you!