
traceAI

Open-source LLM tracing that speaks GenAI, not HTTP.

Open Source
Developer Tools
Artificial Intelligence
GitHub

Hunted by Nikhil Pareek

traceAI is OTel-native LLM tracing that actually works with your existing observability stack.
✓ Captures prompts, completions, tokens, retrievals, agent decisions
✓ Follows GenAI semantic conventions correctly
✓ Routes to any OTel backend: Datadog, Grafana, Jaeger, anywhere
✓ Python, TypeScript, Java, C# with full parity
✓ 35+ frameworks: OpenAI, Anthropic, LangChain, CrewAI, DSPy, and more
✓ Two lines of code to instrument your entire app
No new vendor. No new dashboard. Open source (MIT).

Top comment

Hey Product Hunt! 👋
I'm Nikhil from Future AGI, and I'm excited to share traceAI with you today.

The Problem We're Solving
If you're building with LLMs, you know the pain: your agent made 34 API calls, burned through your token budget, and returned the wrong answer. You have no idea why.
Existing LLM tracing tools force you into a new vendor dashboard. But most teams already have observability infrastructure: Datadog, Grafana, Jaeger. Why add another?

OpenTelemetry is the industry standard for application observability, but it was designed before AI existed. It understands HTTP latency. It has no concept of prompts, tokens, or reasoning chains.

What traceAI Does
traceAI is the proper GenAI semantic layer on top of OpenTelemetry. It captures everything that matters in your AI application:
- Full prompts and completions
- Token usage per call
- Model parameters and settings
- RAG retrieval steps and sources
- Agent decisions and tool executions
- Errors with full context
- Latency at every layer
And sends it to whatever observability backend you already use.
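Because traceAI follows the OpenTelemetry GenAI semantic conventions, that data lands on each span under standardized attribute names that any OTel backend can read. As a rough illustration (the attribute keys below come from the OTel GenAI conventions; the values are made up, and the exact set traceAI emits may differ):

```python
# Illustrative only: attributes a single LLM-call span might carry
# under the OpenTelemetry GenAI semantic conventions.
span_attributes = {
    "gen_ai.operation.name": "chat",        # kind of GenAI operation
    "gen_ai.system": "openai",              # provider behind the call
    "gen_ai.request.model": "gpt-4o",       # model requested
    "gen_ai.request.temperature": 0.7,      # sampling parameter
    "gen_ai.usage.input_tokens": 412,       # prompt tokens consumed
    "gen_ai.usage.output_tokens": 128,      # completion tokens produced
}

# Because the keys are standardized, a backend query such as
# "sum gen_ai.usage.output_tokens grouped by gen_ai.request.model"
# works without custom dashboards or parsing.
total_tokens = (
    span_attributes["gen_ai.usage.input_tokens"]
    + span_attributes["gen_ai.usage.output_tokens"]
)
print(total_tokens)  # 540
```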

Two lines of code:
from traceai import trace_ai
trace_ai.init()

Your entire GenAI app is now traced automatically.
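Assuming init() honors the standard OpenTelemetry configuration conventions, as any OTel-native SDK is expected to (check the docs for the exact knobs), pointing those two lines at your existing backend should be a matter of the usual OTLP environment variables:

```python
import os

# Standard OpenTelemetry environment variables; the endpoint and
# service name below are placeholder examples.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
os.environ["OTEL_SERVICE_NAME"] = "my-genai-app"

# With the environment set, the two-line setup from above routes
# traces to the collector at that endpoint:
# from traceai import trace_ai
# trace_ai.init()
```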
Works with everything:
- Languages: Python, TypeScript, Java, C# (with full parity)
- Frameworks: OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, DSPy, Bedrock, Vertex AI, MCP, Vercel AI SDK, and 35+ more
- Backends: Datadog, Grafana, Jaeger, or any OpenTelemetry-compatible tool
- Actually follows GenAI semantic conventions. Not approximately. Correctly. So your traces are readable in any OTel backend without custom dashboards or parsing.
- Zero lock-in. Your data goes where you want it. Switch backends anytime. We don't even collect your traces.
- Open source. Forever. MIT licensed. Community-owned.
We're not building a walled garden.

Who Should Use This?
AI engineers debugging complex LLM pipelines
Platform teams who refuse to adopt another vendor
Anyone already running OTel who wants AI traces alongside application telemetry
Teams building agentic systems who need production-grade observability

What's Next?
We're actively working on:
- Go language support
- Expanded framework coverage
Try It Now
⭐ GitHub: https://shorturl.at/gKG7E
📖 Docs: https://shorturl.at/AlyjC
💬 Discord: https://shorturl.at/v4llu

We'd love your feedback! What observability challenges are you facing with your AI applications?

Comment highlights

Since this is fully OpenTelemetry-native, I assume it should work seamlessly with backends like SigNoz as well? If so, I might try it there too; seems like a cool tool.

Open-source LLM tracing is exactly what was missing.

I run Claude API calls in a Celery worker: two calls per job, one at temperature=0 (deterministic analysis), one at temperature=0.7 (generative rewrites). Right now I log both manually with structlog, but correlating a specific trace across the two calls when something fails in production is still painful. Does traceAI handle multi-step pipelines where the same job triggers two separate LLM calls with different parameters?
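In standard OpenTelemetry terms (independent of whatever API traceAI actually exposes), this pattern is handled by a shared trace: the job opens a parent span, and each LLM call becomes a child span carrying the same trace ID. A stdlib-only sketch of the idea, not traceAI's implementation:

```python
import uuid

# Conceptual sketch of OTel-style correlation: one trace per job,
# one child span per LLM call. Names and fields are illustrative.
def new_span(name, trace_id=None, parent_id=None, **attrs):
    return {
        "name": name,
        "trace_id": trace_id or uuid.uuid4().hex,  # shared across the job
        "span_id": uuid.uuid4().hex,               # unique per operation
        "parent_id": parent_id,
        "attributes": attrs,
    }

job = new_span("celery.job")
analysis = new_span(
    "llm.analysis", trace_id=job["trace_id"], parent_id=job["span_id"],
    temperature=0.0,
)
rewrite = new_span(
    "llm.rewrite", trace_id=job["trace_id"], parent_id=job["span_id"],
    temperature=0.7,
)

# Filtering the backend by this single trace_id pulls up both calls,
# parameters and all, instead of two disconnected log lines.
assert analysis["trace_id"] == rewrite["trace_id"] == job["trace_id"]
```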

The OTel-native approach is the right call here. Most LLM tracing tools force you into a new dashboard and a new vendor relationship. The fact that this routes to Datadog, Grafana, Jaeger means teams can use what they already have instead of adding yet another pane of glass to monitor.

Curious about one thing: how does traceAI handle tracing across multi-agent workflows where one agent calls another? Do the traces compose into a single parent span, or do they stay isolated per agent?

Congrats on the launch.

Much needed! Since you're positioning traceAI as a semantic layer over OpenTelemetry, do you see this becoming a standard like OTel itself, or staying a developer-focused tool?

Hey traceAI team, great product! I was able to get started in a single day by giving Claude your documentation. We use it with our internal Grafana server, so setup was minimal, and we're loving it. Thanks!

The OTel native approach is the right call imo. Every time I've tried an LLM observability tool it wants me to install yet another dashboard and I'm already drowning in Grafana tabs lol.

Two lines of code to instrument is bold. Does it handle multi-step agent chains well? Like if I have a LangChain agent that calls tools that call other models, does the trace show the full tree or does it flatten everything?

Two lines is impressive but curious - how does it handle agent decision tracking when you have nested tool calls 3-4 levels deep? Running a bunch of AI agents for project management workflows and the traces get messy fast. The GenAI semantic conventions piece is what's interesting here - most OTel solutions just treat LLM calls as HTTP and you lose all the context about what the model was actually doing.

How does traceAI handle long-running tasks or non-standard loops? Does it capture reasoning steps as well?

Really enjoyed building this solution for AI practitioners. It gives you a clear look at how your AI agents are performing, without any vendor lock-in.

This is one of the best open-source OpenTelemetry solutions out there: no vendor lock-in, and a one-stop solution. Great work, team!

This is going to be a one-stop solution for anyone building agents and exploring agentic architectures!

GenAI observability has been broken for too long. traceAI gets it right; this is the kind of observability layer every AI team needs but rarely has. Smart to open-source it and build trust first. Congrats, team! 🚀

About traceAI on Product Hunt

Open-source LLM tracing that speaks GenAI, not HTTP.

traceAI launched on Product Hunt on April 1st, 2026, earning 278 upvotes, 32 comments, and the #3 Product of the Day spot.

traceAI was featured in Open Source (68.3k followers), Developer Tools (511k followers), Artificial Intelligence (466.2k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 182.5k products, making this a competitive space to launch in.

Who hunted traceAI?

traceAI was hunted by Nikhil Pareek. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.

Reviews

traceAI has received 9 reviews on Product Hunt with an average rating of 5.00/5. Read all reviews on Product Hunt.

Want to see how traceAI stacked up against nearby launches in real time? Check out the live launch dashboard for upvote speed charts, proximity comparisons, and more analytics.