Zac Zuo

Kimi K2 Thinking - The 1T-Parameter Open-Source Thinking Model - SOTA on HLE

🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%)
🔹 Executes up to 200–300 sequential tool calls without human intervention
🔹 Excels in reasoning, agentic search, and coding
🔹 256K context window

Crystal J

👋 Hello from the Kimi Team!

Introducing Kimi K2 Thinking: a 1T-parameter open-source reasoning model. SOTA with 44.9% on HLE and 60.2% on BrowseComp (not just open-source SOTA, but SOTA overall).

> Trillion-param MoE, trained for $4.6M, 4x cheaper than peers.
> INT4 inference: 4-bit quantized, <1.2s latency @ 256K context.
> Full step-by-step reasoning, 200+ tool calls, self-correction (GPT-5 level), fully open (MIT), OpenAI-compatible API; weights are live on Hugging Face today, agentic mode next week.

We're thrilled to ship a SOTA model that's fully open. Can't wait to see what you all build! :)
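Since the comment above mentions an OpenAI-compatible API, here's a minimal sketch of what a chat request body would look like. The base URL and model name below are assumptions for illustration, not confirmed values; check Moonshot's official docs.

```python
import json

# Assumed base URL, for illustration only; verify against the official docs.
BASE_URL = "https://api.moonshot.ai/v1"

def build_chat_request(prompt, model="kimi-k2-thinking"):
    """Build a standard OpenAI-style chat completion body.

    The model name here is a hypothetical placeholder.
    The body would be POSTed to {BASE_URL}/chat/completions.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = json.dumps(build_chat_request("Hello, Kimi!"))
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries should work by pointing their base URL at the compatible server.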

Zac Zuo

Hi everyone!

The K2 model from July was already strong, but it wasn't a "Thinking" model.

Kimi K2 Thinking is a new-generation thinking agent model, built on Moonshot's "model as agent" philosophy. The key difference is that it natively understands how to "think while using tools."
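The "think while using tools" idea can be sketched as an interleaved reason-act loop. This toy version is illustrative only, not Moonshot's implementation; the tool names and stop logic are hypothetical.

```python
def agent_loop(model_step, tools, task, max_steps=300):
    """Alternate model reasoning and tool calls until a final answer emerges.

    model_step: callable that inspects the history and returns either a
    tool action or a final answer (a stand-in for the model's "thinking").
    """
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model_step(history)  # model "thinks", then picks an action
        if action["type"] == "final":
            return action["content"]
        # Execute the chosen tool and feed the result back into the context.
        result = tools[action["tool"]](action["args"])
        history.append({"role": "tool", "content": result})
    return None  # step budget exhausted

# Toy "model": call the add tool once, then return its result as the answer.
def toy_model(history):
    if any(m["role"] == "tool" for m in history):
        return {"type": "final", "content": history[-1]["content"]}
    return {"type": "tool", "tool": "add", "args": (2, 3)}

answer = agent_loop(toy_model, {"add": lambda a: a[0] + a[1]}, "What is 2+3?")
```

The claimed 200–300 sequential tool calls correspond to many iterations of a loop like this, with the model deciding at each step whether to keep acting or answer.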

It can build a real, functional Word editor:

And it can also create a world of complex, gorgeous voxel art:

Masum Parvej

@zaczuo Would be cool if it could team up with other agents to tackle bigger stuff

Nate Barbettini

Woah, quantized INT4 inference is a big deal here! Congrats on the launch!
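For readers unfamiliar with INT4: the idea is storing each weight as a 4-bit integer plus a shared scale. This is a minimal sketch of symmetric per-tensor quantization in plain Python, illustrative only; real INT4 deployments use per-group scales and fused kernels, and this is not Kimi's actual scheme.

```python
def quantize_int4(weights):
    """Map floats to integers in the signed 4-bit range [-8, 7]
    with a single per-tensor scale (symmetric quantization)."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from 4-bit codes."""
    return [v * scale for v in q]

w = [0.12, -0.7, 0.33, 0.05]
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
```

Each weight now needs only 4 bits instead of 16, roughly quartering memory and bandwidth, at the cost of small rounding error bounded by half a quantization step.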

Chilarai M

Super responses! But why did you focus so much on slides?

Mykyta Semenov 🇺🇦🇳🇱

It's a bit inconvenient that you can't use it without registering. You even need to sign up for the first search. But the project is interesting anyway!