The world’s first comprehensive evaluation, observability, and optimization platform, helping enterprises achieve 99% accuracy in AI applications across software and hardware.
This is the 5th launch from Future AGI.

traceAI
Launching today
traceAI is OTel-native LLM tracing that actually works with your existing observability stack.
✓ Captures prompts, completions, tokens, retrievals, agent decisions
✓ Follows GenAI semantic conventions correctly
✓ Routes to any OTel backend—Datadog, Grafana, Jaeger, anywhere
✓ Python, TypeScript, Java, C# with full parity
✓ 35+ frameworks: OpenAI, Anthropic, LangChain, CrewAI, DSPy, and more
✓ Two lines of code to instrument your entire app
No new vendor. No new dashboard. Open source (MIT).

Future AGI
Hey Product Hunt! 👋
I'm Nikhil from Future AGI, and I'm excited to share traceAI with you today.
The Problem We're Solving
If you're building with LLMs, you know the pain: your agent made 34 API calls, burned through your token budget, and returned the wrong answer. You have no idea why.
Existing LLM tracing tools force you into a new vendor dashboard. But most teams already have observability infrastructure: Datadog, Grafana, Jaeger. Why add another?
OpenTelemetry is the industry standard for application observability, but it was designed before AI existed. It understands HTTP latency. It has no concept of prompts, tokens, or reasoning chains.
What traceAI Does
traceAI is the proper GenAI semantic layer on top of OpenTelemetry. It captures everything that matters in your AI application:
- Full prompts and completions
- Token usage per call
- Model parameters and settings
- RAG retrieval steps and sources
- Agent decisions and tool executions
- Errors with full context
- Latency at every layer
And sends it to whatever observability backend you already use.
Two lines of code:
from traceai import trace_ai
trace_ai.init()
Your entire GenAI app is now traced automatically.
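For intuition on what "traced automatically" means, each captured LLM call becomes an OpenTelemetry span carrying GenAI semantic-convention attributes. Here's a rough stdlib-only sketch of the shape of such a span — the `Span` class and `record_llm_call` helper are illustrative stand-ins, not traceAI's actual internals:

```python
from dataclasses import dataclass, field

# Attribute keys follow the OTel GenAI semantic conventions;
# the Span class is a simplified stand-in for an OTel span.
@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)

def record_llm_call(model, prompt, completion, in_tokens, out_tokens):
    """Build a span the way a GenAI-aware tracer might record one chat call."""
    return Span(
        name=f"chat {model}",
        attributes={
            "gen_ai.request.model": model,
            "gen_ai.usage.input_tokens": in_tokens,
            "gen_ai.usage.output_tokens": out_tokens,
            "gen_ai.prompt": prompt,
            "gen_ai.completion": completion,
        },
    )

span = record_llm_call("gpt-4o", "Summarize the report.", "Here is a summary...", 42, 118)
print(span.attributes["gen_ai.usage.input_tokens"])  # 42
```

Because the attribute keys are the standard `gen_ai.*` names rather than ad-hoc ones, any OTel backend can render them without custom parsing.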
Works with everything:
- Languages: Python, TypeScript, Java, C# (with full parity)
- Frameworks: OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, DSPy, Bedrock, Vertex AI, MCP, Vercel AI SDK, and more (35+ in total)
- Backends: Datadog, Grafana, Jaeger, or any OpenTelemetry-compatible tool
- Actually follows GenAI semantic conventions. Not approximately. Correctly. So your traces are readable in any OTel backend without custom dashboards or parsing.
- Zero lock-in. Your data goes where you want it. Switch backends anytime. We don't even collect your traces.
- Open source. Forever. MIT licensed. Community-owned.
We're not building a walled garden.
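Since export goes through standard OTel plumbing, pointing traces at an existing backend is typically just the standard OTLP environment variables — no traceAI-specific config. A minimal sketch (the service name and endpoint below are placeholders for your own setup):

```python
import os

# Standard OpenTelemetry env vars understood by any OTLP exporter.
# Both values here are placeholders: swap in your own service name
# and your collector's endpoint.
os.environ["OTEL_SERVICE_NAME"] = "my-genai-app"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4317"

# With these set, initializing the tracer ships spans to whatever
# backend your collector forwards to: Datadog, Grafana, Jaeger, etc.
print(os.environ["OTEL_SERVICE_NAME"])
```

Switching backends later means changing the endpoint, not your instrumentation code.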
Who Should Use This
AI engineers debugging complex LLM pipelines
Platform teams who refuse to adopt another vendor
Anyone already running OTel who wants AI traces alongside application telemetry
Teams building agentic systems who need production-grade observability
What's Next
We're actively working on:
- Go language support
- Expanded framework coverage
Try It Now
⭐ GitHub: https://shorturl.at/gKG7E
📖 Docs: https://shorturl.at/AlyjC
💬 Discord: https://shorturl.at/v4llu
We'd love your feedback! What observability challenges are you facing with your AI applications?
@nikhilpareek Thanks Nikhil — really appreciate it.
traceAI looks solid, especially the OpenTelemetry angle and the focus on zero lock-in. Love that you’re building around existing observability workflows instead of forcing teams into another dashboard.
I’m building TradeHQ from a different angle — helping beginners practice trading with $10K virtual cash before risking real money — but I respect the problem you’re solving. Wishing you a strong launch today 👊
Future AGI
@anuga_weerasinghe Thank you so much, Anuga
Two lines is impressive but curious - how does it handle agent decision tracking when you have nested tool calls 3-4 levels deep? Running a bunch of AI agents for project management workflows and the traces get messy fast. The GenAI semantic conventions piece is what's interesting here - most OTel solutions just treat LLM calls as HTTP and you lose all the context about what the model was actually doing.
Future AGI
@mykola_kondratiuk That's exactly what it's designed to handle: AI workloads for complex agent setups, not just HTTP. You get everything — nested tool calls, prompts, tokens, and more. It collects all your LLM traces and helps you make sense of them, and it works with any OTel or existing observability infra.
That's exactly what I was hoping to hear - the HTTP abstraction loss is what kills debug cycles in complex pipelines. Will definitely try it on a multi-agent workflow I'm running.
Future AGI
@mykola_kondratiuk Would love for you to try it and contribute, and more importantly, share honest feedback so we can improve :)
We're trying to set a standard for LLM tracing for the community.
Will do - honest feedback is the only useful kind. Good luck with the launch today.
The OTel native approach is the right call imo. Every time I've tried an LLM observability tool it wants me to install yet another dashboard and I'm already drowning in Grafana tabs lol.
Two lines of code to instrument is bold. Does it handle multi-step agent chains well? Like if I have a LangChain agent that calls tools that call other models, does the trace show the full tree or does it flatten everything?
Future AGI
@mihir_kanzariya Hey Mihir, thanks. Yes, it shows the full trace tree, span-level details, everything. We already support 40+ agentic frameworks, including LangChain, CrewAI, and more.
If there's anything you think is missing that we should add, I'm super open to feedback :)
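For anyone curious how "full tree, not flattened" works conceptually: nested tool calls map to parent/child spans, with the active span tracked per execution context. A toy contextvars-based tracer (purely illustrative, not the traceAI implementation) shows the nesting:

```python
import contextvars
from contextlib import contextmanager

# Tracks the currently active span per execution context,
# which is what keeps async/nested calls correctly parented.
_current = contextvars.ContextVar("current_span", default=None)

class Span:
    def __init__(self, name):
        self.name = name
        self.children = []

@contextmanager
def span(name):
    """Open a child span under whatever span is currently active."""
    node = Span(name)
    parent = _current.get()
    if parent is not None:
        parent.children.append(node)
    token = _current.set(node)
    try:
        yield node
    finally:
        _current.reset(token)

# Agent -> tool -> nested model call, three levels deep.
with span("agent") as root:
    with span("tool:search"):
        with span("llm:rerank"):
            pass

print(root.children[0].children[0].name)  # llm:rerank
```

Because each `with span(...)` restores the previous parent on exit, sibling tool calls land side by side under the agent instead of being flattened into one list.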
Hey traceAI team, great product. Was able to get started in a single day by giving Claude your documentation. We use this with our internal Grafana server, so it was a small setup, but loving it! Thanks!
Future AGI
@naman_muley Thanks for trying and sharing it here :)
How does traceAI handle long-running tasks or loops beyond standard ones? Does it add any reasoning steps?
Future AGI
@nayan_surya98 It's built purely for trace collection in agentic (AI-native) systems.
Future AGI
Really enjoyed building this solution for AI pros. It gives you a clear look at how your AI agents are performing, without any vendor lock-in.
Future AGI
@kartik_nvjk Great work bro!
Future AGI
GenAI observability has been broken for too long. TraceAI gets it right and this is the kind of observability layer every AI team needs but rarely has. Smart to make this open source and build trust first. Congrats team! 🚀
Future AGI
@vel_alagan Thanks Vel!