LLMs are great at writing code but struggle with tool calls. Code Mode solves this by giving your model ONE powerful tool: a TypeScript execution sandbox with access to your entire toolkit. Complex, multi-step workflows thereby become a single batched execution. Independent studies show substantial gains: 60% faster execution, 68% fewer tokens, and 88% fewer round-trips, while Anthropic's research team reported a 98.7% reduction in token usage with Code Mode.
Code Mode launched on Product Hunt on November 23rd, 2025 and earned 150 upvotes and 8 comments, placing #9 on the daily leaderboard.
Code Mode was featured in Open Source (68.3k followers), Artificial Intelligence (466.2k followers), GitHub (41.2k followers) and SDK (738 followers) on Product Hunt. Together, these topics include over 117.4k products, making this a competitive space to launch in.
Who hunted Code Mode?
Code Mode was hunted by fmerian. A "hunter" on Product Hunt is the community member who submits a product to the platform, uploading the images and the link and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Hi Product Hunters,
I'm Ali, one of the folks behind UTCP, building together with @radulescu_razvan, @juan_viera_garcia1 and our amazing community of contributors.
Today we're launching the first implementation of Code Mode, a concept first put forward by Cloudflare and later advocated by Anthropic. Code Mode is the simplest way to let LLMs call MCP & UTCP tools by writing code instead of tool calls.
The core idea: LLMs are great at writing code because they've seen a lot of code in their training data, but they're terrible at tool calling because they've barely seen it.
So instead of exposing hundreds of tool schemas, we give them one tool: a TypeScript execution sandbox with access to your entire toolkit, enabling faster execution, fewer tokens, and significantly less context overhead for complex workflows.
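To make the idea concrete, here's a minimal sketch of what the model might write inside such a sandbox. The tool names (`searchIssues`, `summarize`, `postMessage`) are hypothetical stand-ins, not the actual UTCP API; the point is that a multi-step workflow becomes one batched script, and intermediate results stay inside the sandbox instead of round-tripping through the model's context as separate tool calls.

```typescript
// Illustrative stand-ins for tools the sandbox would expose.
// These names and signatures are assumptions for the sketch.
async function searchIssues(repo: string): Promise<string[]> {
  return [`${repo}#101: sandbox crash`, `${repo}#102: docs typo`];
}

async function summarize(items: string[]): Promise<string> {
  return `${items.length} open issues`;
}

async function postMessage(channel: string, text: string): Promise<string> {
  return `posted to ${channel}: ${text}`;
}

// The model writes ordinary TypeScript: three tool invocations chained
// in one execution, rather than three separate tool-call round-trips.
async function run(): Promise<string> {
  const issues = await searchIssues("utcp/code-mode");
  const summary = await summarize(issues);
  return postMessage("#triage", summary);
}

run().then(console.log);
```

With classic tool calling, each of those three steps would be a separate model turn, with the full intermediate payload pasted back into context each time; here only the final result needs to reach the model.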
Independent studies show substantial gains: 60% faster execution, 68% fewer tokens, and 88% fewer round-trips, while Anthropic's research team reported a 98.7% reduction in token usage with Code Mode.
Would love to see what you build with it. Join us on Discord, follow us on GitHub, or reach out if you want support integrating it into your agent stack.
And as always, we'd appreciate your feedback and support on today's launch!