
DeepSeek
Open-source LLM optimized for advanced reasoning and code
4.8•41 reviews•2.3K followers
Intelligent assistant for coding, content creation, file reading, and more. Upload documents, engage in extended conversations, and receive expert assistance in AI, natural language processing, and beyond.
This is the 13th launch from DeepSeek.
DeepSeek-V4
Launching today
DeepSeek-V4 Preview is a new series of highly efficient MoE language models, featuring V4-Pro (1.6T params) and V4-Flash (284B params). Both models support a 1-million-token context window by default, using a novel hybrid attention architecture to drastically reduce compute and memory costs.
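A rough back-of-envelope calculation shows why a 1M-token window is so demanding for standard full attention, and why a hybrid attention design matters. The hyperparameters below (layer count, KV heads, head dimension, fp16 storage) are illustrative assumptions, not DeepSeek-V4's published configuration:

```python
# Back-of-envelope KV-cache size for plain full attention at 1M tokens.
# All hyperparameters here are illustrative assumptions, NOT DeepSeek-V4's
# actual architecture.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Bytes needed to cache keys and values for one sequence."""
    # Each layer stores one key and one value vector per KV head per token.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical config: 60 layers, 8 KV heads (GQA), head dim 128, fp16.
full = kv_cache_bytes(60, 8, 128, 1_000_000)
print(f"{full / 2**30:.0f} GiB")  # → 229 GiB per sequence
```

Since the cache grows linearly with sequence length, a 1M-token context costs roughly 8x what a 128K context does under the same config; sparse or hybrid attention schemes cut this by limiting how many cached tokens each query attends to, which is the kind of saving the announcement alludes to.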
Launch Team


Flowtica Scribe
Hi everyone!
The long-awaited DeepSeek V4 is finally here, and the message is simple: 1M context is becoming normal.
V4-Pro is the flagship model, with stronger agentic coding, world knowledge, and reasoning. V4-Flash is the fast, efficient version for more economical use. Both models support 1M context and are available through the API today, with open weights already released.
DeepSeek’s real ambition here is to make frontier long-context intelligence more accessible, just as it has been doing all along 🫡
P.S. Think about all the quota and money you’ve burned through just to unlock massive context windows in Codex or CC. Well, let’s look forward to a future where that no longer feels like a luxury. Thanks, DS!💙