
Cohere
Build incredible products with world-class language AI
4.9•13 reviews•521 followers
We build high-performance, secure language models for the enterprise. Our customizable models run on public, private, or hybrid clouds, ensuring data security and exceptional support.
This is the 6th launch from Cohere. View more

Tiny Aya
Launching today
Tiny Aya is Cohere Labs' 3.35B open-weight multilingual model family built for local use. It covers 70+ languages, prioritizes depth in underserved regions over shallow global coverage, and is small enough for phones, classrooms, and community labs.







Flowtica Scribe
Hi everyone!
What stands out about Tiny Aya is that @Cohere did not treat multilingual AI as one flat problem.
Instead of forcing 70+ languages into one generic model, they built a 3.35B family with regional specialization: Earth for Africa and West Asia, Fire for South Asia, and Water for Asia-Pacific and Europe. That is a much smarter way to get stronger linguistic grounding and cultural nuance while still keeping the model small enough for local deployment.
Tiny Aya is built to run where people actually are: on local devices, in classrooms, in community labs, and in places where large-scale cloud infrastructure is not a given.
That is a pretty meaningful direction for multilingual AI.
@zaczuo Have you seen early wins from devs deploying Tiny Aya offline in low-connectivity spots like classrooms or villages?
It's a big deal for accessibility. The focus on underserved regions instead of just adding more European languages is the right call; there's a massive gap there. How does Tiny Aya perform on Hebrew specifically? And is it practical to fine-tune on domain-specific data at this size, or is 3.35B too small for meaningful customization?
Local multilingual at 3.35B is interesting - have you benchmarked against the usual monolingual fine-tune approach? Curious if regional specialization actually outperforms at the task level.