Open-source LLM tracing that speaks GenAI, not HTTP.
traceAI is OTel-native LLM tracing that actually works with your existing observability stack. ✓ Captures prompts, completions, tokens, retrievals, agent decisions ✓ Follows GenAI semantic conventions correctly ✓ Routes to any OTel backend—Datadog, Grafana, Jaeger, anywhere ✓ Python, TypeScript, Java, C# with full parity ✓ 35+ frameworks: OpenAI, Anthropic, LangChain, CrewAI, DSPy, and more ✓ Two lines of code to instrument your entire app No new vendor. No new dashboard. Open source (MIT).
About traceAI on Product Hunt
traceAI launched on Product Hunt on April 1st, 2026, earning 278 upvotes and 32 comments and finishing as #3 Product of the Day.
On Product Hunt, traceAI competes in the Open Source, Developer Tools, Artificial Intelligence, and GitHub topics, which collectively have 1.1M followers.
Who hunted traceAI?
traceAI was hunted by Nikhil Pareek. On Product Hunt, a "hunter" is the community member who submits a product to the platform; around 79% of featured launches are self-hunted by their makers.
Hey Product Hunt! 👋
I'm Nikhil from Future AGI, and I'm excited to share traceAI with you today.
The Problem We're Solving
If you're building with LLMs, you know the pain: your agent made 34 API calls, burned through your token budget, and returned the wrong answer. You have no idea why.
Existing LLM tracing tools force you into a new vendor dashboard. But most teams already have observability infrastructure: Datadog, Grafana, Jaeger. Why add another?
OpenTelemetry is the industry standard for application observability, but it was designed before AI existed. It understands HTTP latency. It has no concept of prompts, tokens, or reasoning chains.
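To make that gap concrete, here is roughly what a GenAI-convention span's attributes look like. The keys follow the OpenTelemetry GenAI semantic conventions (`gen_ai.*`); the values are made up for illustration, and the exact keys traceAI emits may differ:

```python
# Illustrative only: attribute names follow the OpenTelemetry GenAI
# semantic conventions; the values here are invented for the example.
span_attributes = {
    "gen_ai.system": "openai",              # which provider handled the call
    "gen_ai.request.model": "gpt-4o-mini",  # model the app requested
    "gen_ai.request.temperature": 0.2,      # sampling parameter
    "gen_ai.usage.input_tokens": 412,       # prompt tokens consumed
    "gen_ai.usage.output_tokens": 96,       # completion tokens produced
}

# A plain HTTP span carries none of this. Once these attributes exist,
# any OTel backend can group cost and latency by model out of the box.
total_tokens = (span_attributes["gen_ai.usage.input_tokens"]
                + span_attributes["gen_ai.usage.output_tokens"])
print(total_tokens)  # 508
```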
What traceAI Does
traceAI is the proper GenAI semantic layer on top of OpenTelemetry. It captures everything that matters in your AI application:
- Full prompts and completions
- Token usage per call
- Model parameters and settings
- RAG retrieval steps and sources
- Agent decisions and tool executions
- Errors with full context
- Latency at every layer
And sends it to whatever observability backend you already use.
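Because token usage is recorded per call, questions like "what did that 34-call agent run cost?" become simple arithmetic over span attributes. A sketch with hypothetical per-token prices (the call data and prices below are illustrative, not from traceAI or any real model):

```python
# Hypothetical trace: per-call token usage captured for one agent run.
calls = [
    {"input_tokens": 1200, "output_tokens": 300},
    {"input_tokens": 900,  "output_tokens": 150},
    {"input_tokens": 2500, "output_tokens": 800},
]

# Illustrative prices in dollars per 1M tokens (placeholder values).
PRICE_IN, PRICE_OUT = 2.50, 10.00

# Sum cost across calls: input and output tokens are priced separately.
cost = sum(
    c["input_tokens"] / 1e6 * PRICE_IN + c["output_tokens"] / 1e6 * PRICE_OUT
    for c in calls
)
print(f"${cost:.4f} across {len(calls)} calls")
```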
Two lines of code:
from traceai import trace_ai
trace_ai.init()
Your entire GenAI app is now traced automatically.
Works with everything:
- Languages: Python, TypeScript, Java, C# (with full parity)
- Frameworks: OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, DSPy, Bedrock, Vertex AI, MCP, Vercel AI SDK, and 35+ more
- Backends: Datadog, Grafana, Jaeger, or any OpenTelemetry-compatible tool
- Actually follows GenAI semantic conventions. Not approximately. Correctly. So your traces are readable in any OTel backend without custom dashboards or parsing.
- Zero lock-in. Your data goes where you want it. Switch backends anytime. We don't even collect your traces.
- Open source. Forever. MIT licensed. Community-owned.
We're not building a walled garden.
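Since the traces are standard OTLP, routing them typically goes through the usual OpenTelemetry exporter environment variables rather than anything traceAI-specific. A sketch, assuming the SDK picks up standard OTel configuration (the endpoint and header values are placeholders you would replace with your backend's):

```shell
# Standard OpenTelemetry exporter configuration; any OTLP-compatible
# receiver (Datadog Agent, Grafana Alloy, Jaeger, an OTel Collector)
# can accept these spans.
export OTEL_SERVICE_NAME="my-llm-app"

# Placeholder endpoint: point at your collector or backend's OTLP port.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"

# Optional auth headers for hosted backends (placeholder value).
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <token>"
```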
Who Should Use This
- AI engineers debugging complex LLM pipelines
- Platform teams who refuse to adopt another vendor
- Anyone already running OTel who wants AI traces alongside application telemetry
- Teams building agentic systems that need production-grade observability
What's Next
We're actively working on:
- Go language support
- Expanded framework coverage
Try It Now
⭐ GitHub: https://shorturl.at/gKG7E
📖 Docs: https://shorturl.at/AlyjC
💬 Discord: https://shorturl.at/v4llu
We'd love your feedback! What observability challenges are you facing with your AI applications?