LFM2.5 - The next generation of on-device AI
by Zac Zuo
The LFM2.5 model family is Liquid AI's most capable release yet for edge AI deployment. It builds on the LFM2 device-optimized architecture and represents a significant leap forward in building reliable agents on the edge.


Replies

Zac Zuo

Hi everyone!

Been following Liquid AI for quite a while, and their unwavering commitment to on-device models has always been impressive. Seeing them launch LFM2.5 alongside AMD at CES feels like a definitive milestone; it fits perfectly into the new wave of AI PCs.

Fitting a full multimodal stack (Text, Vision, Audio) into the 1B parameter range is a smart move for edge constraints. The 8x speedup in the Audio model is a significant improvement for latency, and the specific optimizations for AMD and Qualcomm NPUs show that this is built for actual hardware.
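For a rough sense of what the 1B range buys you on edge hardware, here is a back-of-envelope weight-memory estimate at different quantization levels. The parameter count and bit-widths are my own assumptions, not official LFM2.5 numbers, and runtime overhead plus the KV cache come on top:

```python
# Back-of-envelope weight-memory estimate for a 1B-class model.
# The parameter count and bit-widths below are illustrative assumptions,
# not official LFM2.5 figures; activations and the KV cache add on top.

PARAMS = 1.2e9  # assumed parameter count for a "1B class" model

def weight_memory_gb(params: float, bits_per_weight: float) -> float:
    """Memory needed just to hold the weights, in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

for label, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{label}: ~{weight_memory_gb(PARAMS, bits):.2f} GB for weights")

# Roughly: FP16 ~2.4 GB, INT8 ~1.2 GB, INT4 ~0.6 GB
```

That is why aggressive quantization is what makes NPU and AI-PC deployment of a model like this realistic in the first place.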

I really think 2026 is going to be the year on-device AI finally scales up.

Priyanka Madiraju

It's great to see on-device AI models. What are the minimum RAM requirements for LFM2.5, and is it possible to run quantized versions?

shemith mohanan

Impressive direction. On-device speed + efficiency is where real adoption happens, especially for privacy-sensitive and latency-critical use cases. The hybrid architecture angle is interesting; curious to see how LFM2 performs in real-world edge scenarios compared to current lightweight LLMs.

Mykyta Semenov 🇺🇦🇳🇱

Great project! I'm still waiting for models for regular phones that can work offline.