LFM2-VL is a new series of open-weight vision-language models from Liquid AI. Designed for on-device deployment, the models offer up to 2x faster GPU inference than comparable vision-language models and come in 450M and 1.6B parameter sizes.
Liquid AI has been pushing hard on on-device models, and now it's adding multimodal capabilities to the LFM2 series.
LFM2-VL is the latest result: a new family of vision-language models designed for speed, with up to 2x faster GPU inference than comparable models.
Two sizes have been released, a tiny 450M-parameter model and a more capable 1.6B one, which gives developers options for different device constraints.
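For anyone who wants to try them quickly, here's a minimal sketch of loading one of the checkpoints with Hugging Face transformers. It assumes the weights are published under the LiquidAI org as LFM2-VL-450M / LFM2-VL-1.6B and that your installed transformers version already supports the architecture; treat the identifiers and arguments as illustrative rather than an official recipe.

```python
# Minimal sketch: run one of the LFM2-VL checkpoints via transformers.
# Assumes the repos "LiquidAI/LFM2-VL-450M" and "LiquidAI/LFM2-VL-1.6B"
# exist on the Hugging Face Hub and are supported by a recent transformers.
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image

model_id = "LiquidAI/LFM2-VL-450M"  # swap for "LiquidAI/LFM2-VL-1.6B"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt that pairs an image with a text question.
image = Image.open("example.jpg")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# The processor's chat template handles both tokenization and image preprocessing.
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```

The same snippet should work for either size; the 450M checkpoint is small enough to experiment with on a laptop GPU, while the 1.6B one trades some speed for capability.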