Qwen3.6-27B is a fully open-source dense model that punches well above its weight. It surpasses the previous 397B MoE flagship in agentic coding, supports multimodal reasoning and thinking modes, and is still small enough for local self-hosting.
Qwen3.6-Max-Preview is an early release of Qwen's next proprietary flagship. It delivers measurable improvements over Qwen3.6-Plus in agentic coding, world knowledge, and instruction following, securing top scores across major development benchmarks.
Qwen3.6-35B-A3B is a highly efficient open-source MoE model with 35B total and just 3B active parameters. It delivers frontier-level agentic coding and multimodal reasoning, rivaling much larger dense models. Apache 2.0 licensed and available now.
Qwen just released the Qwen3.5 Small Model Series — 0.8B, 2B, 4B and 9B. Native multimodal with improved architecture and scaled RL. 0.8B and 2B are tiny and fast for edge devices, 4B makes a strong lightweight agent base, and 9B is already closing the gap with much larger models. Base versions released too.
Qwen3.6-Plus is Qwen’s latest hosted model with a 1M context window, major gains in agentic coding, stronger multimodal reasoning, and much tighter support for real development workflows across tools like OpenClaw, Claude Code, and Qwen Code.
Qwen3.5-Omni is Qwen's new native omni model for text, images, audio, and video, with stronger multilingual speech, realtime voice interaction, web search, function calling, voice cloning, and long-context audio/video understanding.
An open-weight, native vision-language model built for long-horizon agentic tasks. Its hybrid architecture (linear attention + MoE) delivers the capabilities of a 397B giant with the inference speed of a 17B model.
Qwen-Image-2512 is the new open-source SOTA for text-to-image generation. It delivers drastically improved photorealism, finer natural details, and superior text rendering.
Qwen-Image-Layered decomposes images into transparent RGBA layers, unlocking inherent editability. You can move, resize, or delete objects without artifacts. Supports recursive decomposition and variable layer counts.
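Why layered output makes edits artifact-free: once an image is split into RGBA layers, "deleting an object" is just dropping its layer before flattening, rather than inpainting over pixels. A minimal sketch of that idea using the standard Porter-Duff "over" operator on single pixels (pure Python for illustration; the layer values and helper names here are hypothetical, not Qwen-Image-Layered's actual API):

```python
def over(top, bottom):
    """Porter-Duff 'over': composite one RGBA pixel (0-255 channels) onto another."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    a = ta + ba * (255 - ta) // 255  # combined alpha
    if a == 0:
        return (0, 0, 0, 0)
    blend = lambda t, b: (t * ta + b * ba * (255 - ta) // 255) // a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), a)

def flatten(layers):
    """Composite a stack of RGBA layers, bottom to top, into one pixel."""
    out = (0, 0, 0, 0)  # start fully transparent
    for layer in layers:
        out = over(layer, out)
    return out

# Hypothetical two-layer decomposition: white background + opaque red object.
background = (255, 255, 255, 255)
red_object = (255, 0, 0, 255)

flatten([background, red_object])  # object present -> red pixel
flatten([background])              # object "deleted" by dropping its layer -> clean white
```

Because the background layer is intact underneath, removing the object layer leaves no hole to repair, which is the editability the blurb describes.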
A family of SOTA speech models (0.6B & 1.7B) supporting 10 languages. Features prompt-based Voice Design, 3s zero-shot cloning, and extreme low-latency streaming.