Qwen3.6-27B is a fully open-source dense model that punches well above its weight: it surpasses the previous 397B MoE flagship in agentic coding, supports multimodal reasoning and thinking modes, and remains well sized for local self-hosting.
Reviewers describe Qwen3 as a practical, fast model that holds up well for everyday work, prototyping, and code or website generation, with answer quality often close to bigger-name alternatives. Users especially like its speed, lightweight feel, and usefulness when other AI tools fall short technically, though one reviewer asks for better history, editing, and edge-case handling in the workflow. Founder feedback is similarly positive: the makers of JDoodle.ai and Knowlify say it powers their agents and scores well for creativity.
Summarized with AI
Reviews
Sydekiq — Your own AI agent, deployed on a private server, 24/7
This is a dense model, not an MoE, and that matters.
Dense models often show unusually strong intelligence density for their size, and Qwen3.6-27B is a very good example of that. At 27B, it already pushes past Qwen3.6-35B-A3B on a number of key coding and reasoning tasks, and more importantly, it beats the previous open-source flagship Qwen3.5-397B-A17B across all major coding benchmarks. That is a pretty serious result for a dense checkpoint at this scale.
And 27B is also just a very sweet open-source size. It is not so large that normal users or small teams are locked out of deployment, but it is not small either — it still leaves a lot of headroom for real capability.
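To make "deployable by normal users" concrete, here is a quick back-of-the-envelope sketch of how much memory the weights alone need at common precisions. This is an illustration, not a measured figure: it counts weights only and ignores KV cache and runtime overhead, which typically add another 10-20% in practice.

```python
# Rough weights-only memory estimate for a 27B-parameter dense model.
# Real serving needs extra headroom for the KV cache and runtime buffers.

def weight_memory_gb(n_params_billions: float, bits_per_weight: float) -> float:
    """Memory needed just to hold the weights, in gigabytes."""
    total_bytes = n_params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

for label, bits in [("FP16", 16), ("INT8", 8), ("4-bit quant", 4)]:
    print(f"{label:>12}: ~{weight_memory_gb(27, bits):.1f} GB")
# FP16 lands around 54 GB, INT8 around 27 GB, and a 4-bit quant
# around 13.5 GB, which is why a quantized 27B fits on a single
# 24 GB consumer GPU while the larger MoE flagships do not.
```

So the "sweet size" claim holds up arithmetically: quantized, this is single-GPU territory for enthusiasts and small teams.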
In the Qwen3.6 era, this has a very real chance of becoming their most popular open dense model.
We're almost there. If that long-term coding ability gets bumped up and reaches Opus 4.5 levels, we're looking at something serious. I reckon in about 6-9 months they'll be there or beyond, approaching Opus 4.6 levels. At that rate, for coding, running it locally on your own hardware, it's hard to justify picking a frontier cloud model.
I’ve been using Qwen for building a simple code and website generator, and it works really well for fast iterations. Great for prototyping and lightweight generation.
What needs improvement
I'd like more on the history pages: a section where we can re-edit the input/process/output with an easy UX. Basically, better handling of edge cases without extra prompting.
vs Alternatives
I choose Qwen because it’s fast, lightweight, and great for turning ideas into simple, working code or websites. It was also the first web-based tool I explored for code generation, which made it easy to start prototyping right away.
Great launch! Qwen has been incredibly useful, especially when I reach a point where other AI services can no longer technically deliver what I need. I’m also excited to see it matching the “big players” in benchmark results. 2026 is shaping up to be very interesting.
I’ve been trying Qwen alongside GPT-4o, and honestly it feels great — it’s noticeably faster and cheaper, yet most of the time the answer quality is hard to tell apart. For quick everyday tasks, I barely notice any trade-offs, which makes it a super practical choice.