MiMo

Xiaomi's Open Source Model, Born for Reasoning

Open-source (Apache 2.0) LLM series 'born for reasoning.' Pre-trained & RL-tuned models (like the 7B) match o1-mini on math/code. Base/SFT/RL models released.
This is the 3rd launch from MiMo.
MiMo-V2-Flash

Ultra-fast 309B MoE model for coding & agents
MiMo-V2-Flash is a 309B MoE (15B active) model by Xiaomi. It is a powerful, efficient, and ultra-fast foundation language model that particularly excels in reasoning, coding, and agentic scenarios, while also serving as an excellent general-purpose assistant for everyday tasks.
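
A note on the "309B total, 15B active" split: that is the core mixture-of-experts trade-off. A router sends each token to only a few experts, so most parameters sit idle on any given forward pass. Here is a minimal sketch of the idea, with illustrative sizes rather than MiMo's actual architecture:

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: a router picks the top-k experts per
    token, so only a small fraction of total parameters is active per pass."""

    def __init__(self, d_model: int = 512, n_experts: int = 32, k: int = 2, d_ff: int = 2048):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Score every expert, keep only the top k per token.
        weights, idx = self.router(x).softmax(dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in idx[:, slot].unique():
                sel = idx[:, slot] == e          # tokens routed to expert e in this slot
                out[sel] += weights[sel, slot, None] * self.experts[e](x[sel])
        return out

layer = TopKMoE()
tokens = torch.randn(8, 512)
print(layer(tokens).shape)  # torch.Size([8, 512]); only 2 of 32 experts ran per token
```

In a full model the experts replace each transformer layer's feed-forward block, which is how a 309B-parameter model can run at roughly the per-token cost of a 15B dense model.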
Free
Launch Team

Zac Zuo

Hi everyone!

The open-source world has another interesting contender to watch. MiMo comes from @Xiaomi, and it is clear they are serious about this because they keep shipping. This model is the latest example, and it looks like it is already trading blows with heavyweights like DeepSeek V3.2 on key benchmarks.

MiMo also launched their own AI Studio where you can take the model for a spin. Plus, the API is free for a limited time!

Anton Loss

Pretty impressive! 🚀

As improvements become more gradual, efficiency, speed, and licensing start to matter more. And that's where models like this one truly shine!

Alex Cloudstar

Open source from Xiaomi… nice. 309B MoE with 15B active sounds actually runnable. If the 7B really hangs with o1-mini on math/code, that’s useful. I’ll try the SFT + RL ones on my 4090 tonight. Curious how “agentic” it feels vs a plain assistant.
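
If anyone wants to compare notes, I'm planning on the standard transformers flow, something like this (repo name assumed from the MiMo 7B release; check the actual model card, and it may need trust_remote_code=True):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo name assumed from Xiaomi's MiMo 7B release; verify on the Hub first.
model_id = "XiaomiMiMo/MiMo-7B-RL"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # bf16/fp16 as shipped; a 7B fits on a 24 GB card this way
    device_map="auto",
)

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```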

Jay Dev

Whoa, MiMo looks incredible! Love the focus on reasoning, especially the MoE architecture. Wondering if there are any benchmarks comparing its energy efficiency to similarly sized models? So cool!

Yu Pan

Wow, 309B MoE with 15B active parameters?! That’s some serious scale!

Quick question for the Xiaomi team: How does MiMo-V2-Flash balance its ultra-fast performance with accuracy in complex reasoning tasks (e.g., multi-step coding or long-context decision-making)? Any unique optimizations or trade-offs you’ve prioritized?

Excited to test its agentic capabilities! 🚀