LFM2

New generation of hybrid models for on-device edge AI

LFM2 by Liquid AI is a new class of open foundation models designed for on-device speed and efficiency. Its hybrid architecture delivers 2x faster CPU performance than Qwen3 and SOTA results in a tiny footprint.
This is the 5th launch from LFM2.
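For anyone who wants to kick the tires, here is a minimal sketch of local inference with an LFM2 checkpoint via Hugging Face transformers. The LiquidAI/LFM2-1.2B model ID, dtype, and prompt are illustrative assumptions; adjust them for your hardware and transformers version.

```python
# Minimal sketch: local CPU/GPU inference with an LFM2 checkpoint.
# Assumes the LiquidAI/LFM2-1.2B model ID and a transformers release
# with LFM2 support; adjust the ID, dtype, and device for your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Explain why on-device inference matters for privacy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```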

LFM2.5

Launched this week
The next generation of on-device AI
The LFM2.5 model family is Liquid AI's most capable release yet for edge AI deployment. It builds on the device-optimized LFM2 architecture and represents a significant step forward in building reliable agents on the edge.

Zac Zuo

Hi everyone!

Been following Liquid AI for quite a while, and their unwavering commitment to on-device models has always been impressive. Seeing them launch LFM2.5 alongside AMD at CES feels like a definitive milestone; it fits squarely into the new wave of AI PCs.

Fitting a full multimodal stack (text, vision, audio) into the 1B-parameter range is a smart move for edge constraints. The 8x speedup in the audio model is a significant improvement for latency, and the specific optimizations for AMD and Qualcomm NPUs show that this is built for real hardware.

I really think 2026 is going to be the year on-device AI finally scales up.

Priyanka Madiraju

It's great to see on-device AI models. What are the minimum RAM requirements for LFM2.5, and is it possible to run quantized versions?

Russell Dou

Any idea how well this would run on a phone? Would love to try it without needing a full laptop setup.

shemith mohanan

Impressive direction. On-device speed + efficiency is where real adoption happens, especially for privacy-sensitive and latency-critical use cases. The hybrid architecture angle is interesting — curious to see how LFM2 performs in real-world edge scenarios compared to current lightweight LLMs.

Mykyta Semenov 🇺🇦🇳🇱

Great project! I’m still waiting for models that can run fully offline on regular phones.