
ClawTrace

Make your OpenClaw better, cheaper, and faster

Open Source
Developer Tools
Artificial Intelligence
GitHub

Hunted by Garry Tan

ClawTrace closes the self-evolving loop for OpenClaw agents. It captures every trajectory automatically — every LLM call, tool use, sub-agent, and cost — so Tracy, the built-in doctor agent, can query OpenClaw's execution history live and pinpoint exactly what failed, what was wasted, and how OpenClaw should evolve next.

Top comment

Hey Product Hunt 👋 I'm Richard, co-founder of Epsilla. Today we're launching ClawTrace, and I want to tell you the story that made us build it, because it's a bit meta.

We run our own OpenClaw agents internally. One of them is ElizaClaw, our AI co-founder. A few weeks ago, ElizaClaw ran a research task: she was studying self-evolving AI agent frameworks such as EvolveR, CASCADE, and STELLA, trying to learn how AI agents can improve themselves from their own execution history.

The irony? While she was researching how AI agents self-evolve, we had absolutely no visibility into her own execution. We didn't know she'd burned 1M input tokens on a single LLM call. We didn't know four web searches were running sequentially when they could have run in parallel. We didn't know the biggest bottleneck was a 68-second LLM call that could have been avoided entirely. ElizaClaw was learning how to self-evolve in theory. But in practice, she couldn't self-evolve at all, because she had no feedback on her own runs.

That's the gap ClawTrace closes. Self-evolving agents need a signal: they need to see every step they took, what it cost, where they stalled, and why. Without that signal, "self-evolving" is just a name; the agent improves only when a human manually digs through logs, guesses at the bottleneck, and patches the prompt. ClawTrace makes the signal automatic:
→ Every trajectory captured: every LLM call, tool use, and sub-agent delegation
→ Three views: execution path, call graph, and timeline
→ Tracy, our built-in doctor agent for OpenClaw, who can query the agent's trajectory graph live and say "here's the bottleneck, here's why, here's what to fix next"

When we ran ElizaClaw's own trajectory through ClawTrace (the 1M-token context stuffing, the sequential tool calls, the 68-second LLM call) and asked Tracy "where is the bottleneck?", she surfaced a full span breakdown in seconds with three specific recommendations. That's the loop working.
A few things I'm genuinely curious about from this community:
1. Are you already thinking about self-evolving agents in your work, or does that feel far off?
2. When an agent run goes wrong today, what's your actual debugging workflow? (Ours was embarrassingly manual before ClawTrace.)
3. If your agent could query its own past trajectories and improve itself automatically, what's the first thing you'd want it to learn?

Thank you for being here. Today feels like a real milestone, and honestly, ElizaClaw helped research and write parts of this launch too. Meta all the way down.

Thank you for your support, and happy building!

Cheers,
Team Epsilla
clawtrace.ai | github.com/epsilla-cloud/clawtrace
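To make the "signal" idea above concrete, here is a minimal sketch of the kind of trajectory data the post describes: each step recorded as a span with its duration and token cost, so a "where is the bottleneck?" query becomes a simple lookup. This is illustrative Python only — the `Span` and `Trajectory` names are hypothetical, not ClawTrace's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical structures for illustration -- not ClawTrace's real API.
@dataclass
class Span:
    name: str          # e.g. "llm_call", "web_search"
    duration_s: float  # wall-clock time for this step
    input_tokens: int  # tokens sent (the cost driver)

@dataclass
class Trajectory:
    spans: list = field(default_factory=list)

    def record(self, name, duration_s, input_tokens=0):
        """Capture one step of the agent run."""
        self.spans.append(Span(name, duration_s, input_tokens))

    def bottleneck(self):
        """The 'where is the bottleneck?' query: the slowest span."""
        return max(self.spans, key=lambda s: s.duration_s)

# Replay the run described in the launch post:
traj = Trajectory()
traj.record("llm_call:summarize", 68.0, input_tokens=1_000_000)
for i in range(4):
    traj.record(f"web_search:{i}", 5.0)  # ran sequentially; could be parallel

print(traj.bottleneck().name)  # llm_call:summarize
```

With spans like these in hand, an agent (or a doctor agent like Tracy) can answer cost and latency questions from data instead of a human grepping logs.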

Comment highlights

I need to collect a database of objects (for example, hotels) from other websites, with specific fields and information for each object. Can I do this with your service? What is the cost of live search?

@renchu_song the 1M token burn on a single LLM call with no visibility is a very relatable war story — does ClawTrace surface cost attribution per agent or per task/subtask? Trying to understand if the granularity is enough to catch runaway sub-agents before they crater a budget.

Cool project, but you def need some design work on your branding/logo. aye human one

The 'powered by your private data' part is what matters here. Most agent platforms force you to feed everything into someone else's cloud. How do you handle data residency — can everything stay on-prem, or is there a hybrid option for teams that need both?

One of the coolest launches of the day! Btw, once it identifies bottlenecks, how are fixes applied: automatically, as suggestions, or with a human in the loop?

About ClawTrace on Product Hunt


ClawTrace launched on Product Hunt on April 15th, 2026 and earned 115 upvotes and 15 comments, placing #13 on the daily leaderboard.

ClawTrace was featured in Open Source (68.3k followers), Developer Tools (511k followers), Artificial Intelligence (466.2k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 182.4k products, making this a competitive space to launch in.

Who hunted ClawTrace?

ClawTrace was hunted by Garry Tan. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.

Reviews

ClawTrace has received 1 review on Product Hunt with an average rating of 5.00/5. Read all reviews on Product Hunt.

Want to see how ClawTrace stacked up against nearby launches in real time? Check out the live launch dashboard for upvote speed charts, proximity comparisons, and more analytics.