Launching today

DeepSeek-V4

Towards Highly Efficient Million-Token Context Intelligence


DeepSeek-V4 is a preview series of open Mixture-of-Experts LLMs: V4‑Pro (1.6T parameters, 49B active) and V4‑Flash (284B parameters, 13B active), both with a 1M-token context window. A new hybrid attention scheme (CSA+HCA) cuts long-context compute and KV-cache size, while mHC connections and the Muon optimizer improve training stability. The models are trained on 32T+ tokens and post-trained with expert specialization and consolidation.
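To make the KV-cache claim concrete, here is a minimal back-of-envelope sketch of per-sequence KV-cache memory at a 1M-token context. The listing does not specify the CSA/HCA internals or any model hyperparameters, so every number below (layer count, KV heads, head dim, and the hybrid split between compressed and full-context layers) is an illustrative assumption, not a published DeepSeek-V4 value.

```python
# Back-of-envelope KV-cache sizing at a 1M-token context.
# All hyperparameters are illustrative assumptions; CSA/HCA details
# are not specified in the announcement.

def kv_cache_bytes(seq_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Standard per-sequence KV-cache size: keys + values for every layer."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

SEQ_LEN = 1_000_000      # 1M-token context (from the announcement)
N_LAYERS = 60            # assumed depth
N_KV_HEADS = 8           # assumed number of KV heads
HEAD_DIM = 128           # assumed head dimension

full = kv_cache_bytes(SEQ_LEN, N_LAYERS, N_KV_HEADS, HEAD_DIM)

# Hypothetical hybrid split: most layers keep only a bounded (e.g. 8K-token)
# compressed/sliding cache, while a minority retain the full-context cache.
compressed_layers, full_layers, window = 48, 12, 8_192
hybrid = (kv_cache_bytes(window, compressed_layers, N_KV_HEADS, HEAD_DIM)
          + kv_cache_bytes(SEQ_LEN, full_layers, N_KV_HEADS, HEAD_DIM))

print(f"full attention : {full / 2**30:.1f} GiB")
print(f"hybrid (sketch): {hybrid / 2**30:.1f} GiB ({hybrid / full:.0%} of full)")
```

Under these assumed numbers, caching full-context keys and values for only a fraction of layers shrinks the per-sequence cache to roughly a fifth of the dense baseline; the actual mechanism and savings in V4 may differ.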
Free