ClawTrace closes the self-evolving loop for OpenClaw agents. It captures every trajectory automatically — every LLM call, tool use, sub-agent, and cost — so Tracy, the doctor agent, can query OpenClaw's execution history live and tell exactly what failed, what was wasted, and how OpenClaw should evolve next.
Hey Product Hunt 👋
I'm Richard, co-founder of Epsilla. Today we're launching ClawTrace, and I want to tell you the story that made us build it, because it's a bit meta.
We run our own OpenClaw agents internally. One of them is ElizaClaw, our AI co-founder. A few weeks ago, ElizaClaw ran a research task: she was studying self-evolving AI agent frameworks such as EvolveR, CASCADE, and STELLA, to learn how agents can improve themselves from their own execution history.
The irony? While she was researching how AI agents self-evolve, we had absolutely no visibility into her own execution. We didn't know she'd burned 1M input tokens on a single LLM call. We didn't know four web searches were running sequentially when they could have been parallel. We didn't know the biggest bottleneck was a 68-second LLM call that could have been avoided entirely.
ElizaClaw was learning how to self-evolve in theory. But in practice, she couldn't self-evolve at all, because she had no feedback on her own runs.
That's the gap ClawTrace closes.
Self-evolving agents need a signal. They need to see every step they took, what it cost, where they stalled, and why. Without that signal, "self-evolving" is just a name: the agent improves only when a human manually digs through logs, guesses at the bottleneck, and patches the prompt.
ClawTrace makes the signal automatic:
→ Every trajectory captured: every LLM call, tool use, and sub-agent delegation
→ Three views: execution path, call graph, and timeline
→ Tracy, our built-in doctor agent for OpenClaw, who can query the agent's trajectory graph live and say "here's the bottleneck, here's why, here's what to fix next"
When we fed ElizaClaw's own trajectory into ClawTrace (the 1M-token context stuffing, the sequential tool calls, the 68-second LLM call) and asked Tracy "where is the bottleneck?", she surfaced a full span breakdown in seconds, with three specific recommendations. That's the loop working.
A few things I'm genuinely curious about from this community:
1. Are you already thinking about self-evolving agents in your work, or does that feel far off?
2. When an agent run goes wrong today, what's your actual debugging workflow? (Ours was embarrassingly manual before ClawTrace)
3. If your agent could query its own past trajectories and improve itself automatically, what's the first thing you'd want it to learn?
Thank you for being here. Today feels like a real milestone, and honestly, ElizaClaw helped research and write parts of this launch too. Meta all the way down.
Thank you for your support, and happy building!
Cheers,
Team Epsilla
clawtrace.ai | github.com/epsilla-cloud/clawtrace
About ClawTrace on Product Hunt
“Make your OpenClaw better, cheaper, and faster”
ClawTrace launched on Product Hunt on April 15th, 2026 and earned 115 upvotes and 15 comments, placing #13 on the daily leaderboard.
Who hunted ClawTrace?
ClawTrace was hunted by Garry Tan.