Olmo Hybrid: 7B open model mixing transformers and linear RNNs

Olmo Hybrid is a fully open 7B model that combines transformer attention with linear RNN layers. Using a 3:1 pattern of Gated DeltaNet layers to attention layers, it matches the accuracy of Olmo 3 on MMLU while using 49% fewer tokens.
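The 3:1 interleaving can be sketched as a simple layer schedule: three Gated DeltaNet (linear RNN) layers for every full-attention layer. This is a minimal illustrative sketch only; the function and layer names are hypothetical and not taken from the Olmo Hybrid codebase.

```python
def build_layer_pattern(n_layers: int, rnn_per_attn: int = 3) -> list[str]:
    """Return a layer-type schedule following a 3:1 DeltaNet:attention pattern.

    Hypothetical helper for illustration; not the actual Olmo Hybrid code.
    """
    pattern = []
    for i in range(n_layers):
        # Every (rnn_per_attn + 1)-th layer is full attention; the rest
        # are linear-RNN (Gated DeltaNet) layers.
        if (i + 1) % (rnn_per_attn + 1) == 0:
            pattern.append("attention")
        else:
            pattern.append("gated_deltanet")
    return pattern

print(build_layer_pattern(8))
# → ['gated_deltanet', 'gated_deltanet', 'gated_deltanet', 'attention',
#    'gated_deltanet', 'gated_deltanet', 'gated_deltanet', 'attention']
```

With this schedule, attention's quadratic cost applies to only a quarter of the layers, which is the usual motivation for hybrid stacks of this kind.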
SERA: Fast, accessible coding agents that adapt to any repo

SERA is a family of open coding models (8B, 14B, 32B) trained with a new, efficient method. SERA learns from "soft-verified" data, drastically reducing training costs, and is easily adaptable to private codebases. Open weights, data, and recipes.

