traceAI - Open-source LLM tracing that speaks GenAI, not HTTP.
traceAI is OTel-native LLM tracing that actually works with your existing observability stack.
✓ Captures prompts, completions, tokens, retrievals, agent decisions
✓ Follows GenAI semantic conventions correctly
✓ Routes to any OTel backend—Datadog, Grafana, Jaeger, anywhere
✓ Python, TypeScript, Java, C# with full parity
✓ 35+ frameworks: OpenAI, Anthropic, LangChain, CrewAI, DSPy, and more
✓ Two lines of code to instrument your entire app
No new vendor. No new dashboard. Open source (MIT).



Replies
Future AGI
Hey Product Hunt! 👋
I'm Nikhil from Future AGI, and I'm excited to share traceAI with you today.
The Problem We're Solving
If you're building with LLMs, you know the pain: your agent made 34 API calls, burned through your token budget, and returned the wrong answer. You have no idea why.
Existing LLM tracing tools force you into a new vendor dashboard. But most teams already have observability infrastructure: Datadog, Grafana, Jaeger. Why add another?
OpenTelemetry is the industry standard for application observability, but it was designed before AI existed. It understands HTTP latency. It has no concept of prompts, tokens, or reasoning chains.
What traceAI Does
traceAI is the proper GenAI semantic layer on top of OpenTelemetry. It captures everything that matters in your AI application:
- Full prompts and completions
- Token usage per call
- Model parameters and settings
- RAG retrieval steps and sources
- Agent decisions and tool executions
- Errors with full context
- Latency at every layer
And sends it to whatever observability backend you already use.
Two lines of code:
from traceai import trace_ai
trace_ai.init()
Your entire GenAI app is now traced automatically.
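Conceptually, those two lines work by wrapping your LLM client calls so every call emits a span automatically. A minimal pure-Python sketch of that auto-instrumentation idea (SPANS, instrument, and chat_completion are illustrative stand-ins, not the real traceAI internals):

```python
# Sketch of init()-style auto-instrumentation: wrap the client's call
# method so each invocation records a GenAI-style span, with no changes
# to application code. Names here are hypothetical, not traceAI's API.
import functools
import time

SPANS = []  # stand-in for an OTel span exporter

def instrument(fn):
    """Wrap a callable so each invocation records a GenAI-style span."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        SPANS.append({
            "name": fn.__name__,
            "gen_ai.request.model": kwargs.get("model"),
            "duration_s": time.time() - start,
        })
        return result
    return wrapper

@instrument
def chat_completion(model, prompt):
    # placeholder for a real provider call
    return f"echo: {prompt}"

chat_completion(model="gpt-4o", prompt="hi")
print(SPANS[0]["gen_ai.request.model"])  # gpt-4o
```

In the real library, init() applies this kind of wrapping across the supported SDKs at once, which is why no per-call code changes are needed.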
Works with everything:
- Languages: Python, TypeScript, Java, C# (with full parity)
- Frameworks: OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, DSPy, Bedrock, Vertex AI, MCP, Vercel AI SDK, and 35+ more
- Backends: Datadog, Grafana, Jaeger, or any OpenTelemetry-compatible tool
- Actually follows GenAI semantic conventions. Not approximately. Correctly. So your traces are readable in any OTel backend without custom dashboards or parsing.
- Zero lock-in. Your data goes where you want it. Switch backends anytime. We don't even collect your traces.
- Open source. Forever. MIT licensed. Community-owned.
We're not building a walled garden.
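Because the traces are standard OTel data, routing them to a backend you already run is ordinary OTLP exporter configuration. A sketch using the standard OpenTelemetry environment variables (these are from the OTel spec, not traceAI-specific; whether trace_ai.init() reads them is an assumption based on typical OTel SDK behavior):

```python
# Standard OpenTelemetry configuration via environment variables.
import os

os.environ["OTEL_SERVICE_NAME"] = "my-genai-app"
# Point OTLP at whatever collector or agent you already run;
# Datadog Agent, Grafana Alloy, and Jaeger all accept OTLP.
# 4318 is the default OTLP/HTTP port.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"

# An OTel SDK initialized after this (e.g. trace_ai.init(), assumption)
# would pick these up and export spans to the configured endpoint.
print(os.environ["OTEL_SERVICE_NAME"])
```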
Who Should Use This
AI engineers debugging complex LLM pipelines
Platform teams who refuse to adopt another vendor
Anyone already running OTel who wants AI traces alongside application telemetry
Teams building agentic systems who need production-grade observability
What's Next
We're actively working on:
- Go language support
- Expanded framework coverage
Try It Now
⭐ GitHub: https://shorturl.at/gKG7E
📖 Docs: https://shorturl.at/AlyjC
💬 Discord: https://shorturl.at/v4llu
We'd love your feedback! What observability challenges are you facing with your AI applications?
@nikhilpareek Thanks Nikhil — really appreciate it.
traceAI looks solid, especially the OpenTelemetry angle and the focus on zero lock-in. Love that you’re building around existing observability workflows instead of forcing teams into another dashboard.
I’m building TradeHQ from a different angle — helping beginners practice trading with $10K virtual cash before risking real money — but I respect the problem you’re solving. Wishing you a strong launch today 👊
Future AGI
@anuga_weerasinghe Thank you so much, Anuga
@nikhilpareek Congrats on the launch. Quick question: For debugging agentic flows with LangChain/CrewAI, what's one traceAI insight that's saved you the most dev time in production?
Future AGI
@swati_paliwal Great question, Swati! The biggest time-saver is being able to see exactly which step in a multi-step agent chain went wrong: not just that the final answer was bad, but pinpointing whether it was retrieval, reranking, or generation that failed. With traceAI's span trees you get the full execution path, and inline evals like groundedness and chunk attribution score each step independently.
Much needed! Since you're positioning traceAI as a semantic layer over OpenTelemetry, do you see this becoming a standard like OTel itself, or staying a developer-focused tool?
Future AGI
@lak7 We are trying to build this as a standard/foundation for the GenAI builder community
OpenOwl
The OTel native approach is the right call imo. Every time I've tried an LLM observability tool it wants me to install yet another dashboard and I'm already drowning in Grafana tabs lol.
Two lines of code to instrument is bold. Does it handle multi-step agent chains well? Like if I have a LangChain agent that calls tools that call other models, does the trace show the full tree or does it flatten everything?
Future AGI
@mihir_kanzariya Hey Mihir, thanks. Yes, it shows the full trace tree, span-level details, everything. We support 40+ agentic frameworks already, including LangChain, CrewAI, etc.
If there's anything you think is missing that we should add, I'm super open to feedback :)
Open-source LLM tracing is exactly what was missing.
I run Claude API calls in a Celery worker: two calls per job, one at temperature=0 (deterministic analysis), one at temperature=0.7 (generative rewrites). Right now I log both manually with structlog, but correlating a specific trace across the two calls when something fails in production is still painful.
Does traceAI handle multi-step pipelines where the same job triggers two separate LLM calls with different parameters?
Future AGI
@fabrice_gangitano Yes, traceAI handles this natively: each LLM call is treated as a span. The whole point of the OTel span-tree model is exactly this. You'd create a parent span for your Celery job, and the auto-instrumentor then attaches each LLM call as a child span.
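A toy model of the span-tree shape this describes (plain dicts standing in for OTel spans; the span and attribute names are illustrative, loosely following the gen_ai.* convention, not traceAI's exact output): one parent span for the Celery job, one child per LLM call, each carrying its own parameters.

```python
# Toy span tree: both LLM calls hang off the same job span, so a
# single trace ID correlates them even with different parameters.
def make_span(name, attributes=None, children=None):
    return {"name": name, "attributes": attributes or {}, "children": children or []}

job = make_span(
    "celery.job.analyze_and_rewrite",   # hypothetical job name
    children=[
        make_span("llm.call", {"gen_ai.request.temperature": 0.0,
                               "operation": "analysis"}),
        make_span("llm.call", {"gen_ai.request.temperature": 0.7,
                               "operation": "rewrite"}),
    ],
)

# Walking the children recovers both calls and their settings at once.
temps = [c["attributes"]["gen_ai.request.temperature"] for c in job["children"]]
print(temps)  # [0.0, 0.7]
```

Because both calls share one parent, a failure in production shows up inside the same trace as its sibling call, which is exactly the correlation the manual structlog approach struggles with.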
Future AGI
GenAI observability has been broken for too long. TraceAI gets it right and this is the kind of observability layer every AI team needs but rarely has. Smart to make this open source and build trust first. Congrats team! 🚀
Future AGI
@vel_alagan Thanks Vel!
Future AGI
Really enjoyed building this solution for AI pros. It gives you a clear look at how your AI agents are performing, without any vendor lock-in.
Future AGI
@kartik_nvjk Great work bro!
How does traceAI handle long-running tasks or loops beyond standard loops? Does it capture any reasoning steps?
Future AGI
@nayan_surya98 This is just built for trace collection for agentic (AI-native) systems.
Two lines is impressive, but curious: how does it handle agent decision tracking when you have nested tool calls 3-4 levels deep? I'm running a bunch of AI agents for project-management workflows and the traces get messy fast. The GenAI semantic conventions piece is what's interesting here: most OTel solutions just treat LLM calls as HTTP, and you lose all the context about what the model was actually doing.
Future AGI
@mykola_kondratiuk That's exactly what it's designed to handle: AI workloads for complex agent setups, not just HTTP. You get everything from nested tool calls to prompts, tokens, etc. It helps you collect all your LLM traces and make sense of them, and it works with any OTel or existing observability infra.
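The key property for deeply nested tool calls is that the trace stays a tree rather than being flattened. A toy sketch of that shape (plain dicts as stand-in spans; the span names are hypothetical):

```python
# Each tool call becomes a child span of the call that triggered it,
# so a 4-level chain appears as a 4-deep span tree, not a flat list.
def span(name, children=()):
    return {"name": name, "children": list(children)}

trace = span("agent.run", [                   # level 1: the agent
    span("tool.search", [                     # level 2: a tool it calls
        span("llm.rerank", [                  # level 3: a model the tool calls
            span("tool.fetch_page"),          # level 4: a tool that model calls
        ]),
    ]),
])

def depth(s):
    """Depth of the span tree: 1 for a leaf, 1 + deepest child otherwise."""
    return 1 + max((depth(c) for c in s["children"]), default=0)

print(depth(trace))  # 4
```

Preserving this nesting is what lets you attribute a bad final answer to the specific level where things went wrong instead of staring at an undifferentiated list of calls.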
That's exactly what I was hoping to hear - the HTTP abstraction loss is what kills debug cycles in complex pipelines. Will definitely try it on a multi-agent workflow I'm running.
Future AGI
@mykola_kondratiuk Would love for you to try and contribute, and more importantly, share honest feedback so we can improve :)
We're trying to set a standard for LLM tracing for the community.
Will do - honest feedback is the only useful kind. Good luck with the launch today.
Hey traceAI team, great product. I was able to get started in a single day by giving Claude your documentation. We use this with our internal Grafana server, so it was a simple setup. Loving it, thanks!
Future AGI
@naman_muley Thanks for trying and sharing it here :)
Simple Utm
The OTel-native approach is the right call here. Most LLM tracing tools force you into a new dashboard and a new vendor relationship. The fact that this routes to Datadog, Grafana, Jaeger means teams can use what they already have instead of adding yet another pane of glass to monitor.
Curious about one thing: how does traceAI handle tracing across multi-agent workflows where one agent calls another? Do the traces compose into a single parent span, or do they stay isolated per agent?
Congrats on the launch.
Future AGI
@najmuzzaman You get all the traces and spans: end-to-end visibility into each step your agent takes, so you know what breaks and where.