The 1T Parameters Open-Source Thinking Model - SOTA on HLE
🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%)
🔹 Executes up to 200–300 sequential tool calls without human intervention
🔹 Excels in reasoning, agentic search, and coding
🔹 256K context window
About Kimi K2 Thinking on Product Hunt
"The 1T Parameters Open-Source Thinking Model - SOTA on HLE"
Kimi K2 Thinking launched on Product Hunt on November 7th, 2025, earning 215 upvotes and 6 comments and placing #5 on the daily leaderboard.
On the analytics side, Kimi K2 Thinking competes within Open Source, Artificial Intelligence, and Development, topics that collectively have 540.3k followers on Product Hunt. The dashboard above tracks how Kimi K2 Thinking performed against the three products that launched closest to it on the same day.
Who hunted Kimi K2 Thinking?
Kimi K2 Thinking was hunted by Zac Zuo. A "hunter" on Product Hunt is the community member who submits a product to the platform, uploading the images and the link and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
👋 Hello from Kimi Team!
Introducing Kimi K2 Thinking: a 1T open-source reasoning model. SOTA with 44.9% on HLE and 60.2% on BrowseComp (not just open-source SOTA).
> Trillion-param MoE, trained for $4.6M, 4x cheaper than peers.
> INT4 inference: 4-bit quantized, <1.2s latency @ 256K context.
> Full step-by-step reasoning, 200+ tool calls, self-correction (GPT-5 level), fully open (MIT license), OpenAI-compatible API, weights live on Hugging Face today, agentic mode next week.
We're thrilled to ship a SOTA model that's fully open. Can't wait to see what you all build! :)
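Since the model is advertised as exposing an OpenAI-compatible API, a request to it would follow the standard chat-completions shape. Below is a minimal sketch of building such a request with only the standard library; the base URL, model identifier, and API key here are placeholders for illustration, not values taken from the launch post. Check the official docs for the real endpoint and model name.

```python
import json
import urllib.request

# Placeholder endpoint and credentials -- consult the official API docs
# for the real base URL, model identifier, and authentication scheme.
BASE_URL = "https://api.example.com/v1"   # assumption, not the real endpoint
API_KEY = "YOUR_API_KEY"                  # placeholder

# An OpenAI-compatible chat-completions payload.
payload = {
    "model": "kimi-k2-thinking",          # assumed model identifier
    "messages": [
        {"role": "user", "content": "Summarize the K2 Thinking launch in one sentence."}
    ],
    "max_tokens": 512,
}

# Build (but do not send) the POST request to the chat-completions route.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

print(req.full_url)
```

Because the API is OpenAI-compatible, the official `openai` client should also work by pointing its `base_url` at the provider's endpoint and passing the assumed model name.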