Latitude is an observability and quality platform for AI agents. It helps developers find and fix failure modes before they reach production.
Most tools give you logs. Latitude gives you issues: failure modes with states and evals attached.
This is the 7th launch from Latitude.

Latitude for Claude Code
Launching today
Trace every Claude Code session. See the full system prompt, every tool call, every subagent, and token cost per turn. One command to install; it's free, and your traces stay in your account. Receive a weekly report with your stats.





Free
Launch tags: Developer Tools
Launch Team / Built With
Latitude
Hey everyone,
It's Cesar, founder of Latitude.
If you keep hitting your Claude Code limits faster than you'd expect, this shows you why. Full session trace: system prompt, every tool call, subagent spawns, per-turn token cost. You see where your context actually went and which actions burn the most.
Same thing works if you're using Claude Code as a harness for your own agent: track cost and latency per session, and get recurring failures auto-flagged across runs.
Install is one command:
It's free, and traces stay in your Latitude account.
Happy to answer technical questions in the thread.
LayerProof
this is genuinely what we need, especially being able to spot which sessions burn way more tokens than others lol.
Does this support the Claude Team plan, or API-only for now?
Latitude
@nathan_tran2 both are supported!
Does it work for Claude Cowork projects? I mainly burn my limits on Cowork sessions.
Latitude
@michael_vavilov We only support Claude Code at the moment. But if you want to hit your limits less, I suggest you try Claude Code. It's not that different from Cowork, and it can do the same things without spending as many tokens.
- On macOS the installer writes ~/Library/LaunchAgents/so.latitude.claude-code-telemetry.plist, which runs launchctl setenv BUN_OPTIONS=--preload=... on every login. That sets BUN_OPTIONS for every Bun process on your machine, not just claude, so any other Bun-based tool you run will also load their preload shim. Wider blast radius than "just Claude Code."
^^ Might be worth reining that in a bit. Hooking every Bun process is a bit overreaching.
Latitude
@robert_douglass Thanks, taking a look asap
Latitude
@robert_douglass Unfortunately this was the only reliable way to capture telemetry for the Claude desktop app on macOS. The impact on other Bun processes is negligible, though. Feel free to DM me if you need more details.
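For readers following the LaunchAgent discussion, the mechanism described above would look roughly like the plist below. This is a reconstruction for illustration only, not Latitude's actual file; the preload path in the thread is elided, so the one shown here is hypothetical.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>so.latitude.claude-code-telemetry</string>
  <!-- Runs at every login and sets BUN_OPTIONS session-wide,
       which is why every Bun process picks up the preload. -->
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>BUN_OPTIONS</string>
    <!-- Hypothetical path; the real preload target is elided in the thread. -->
    <string>--preload=/path/to/telemetry-shim.ts</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

Because `launchctl setenv` applies to the whole login session, scoping the variable to just the `claude` process would require a different mechanism (e.g. a wrapper script), which is presumably the trade-off being discussed.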
The "captures locally" framing caught my attention — curious how the telemetry hook actually intercepts Claude Code's runtime. Is it patching the Node.js fetch layer, or hooking at the MCP transport level? Asking because system prompts in Claude Code often contain sensitive workspace context (repo structure, file contents), so understanding the data path before it hits Latitude's servers matters a lot for teams in regulated environments.
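The thread doesn't confirm which approach Latitude takes, but a fetch-layer shim of the kind the question describes could be sketched like this: a `--preload`-loaded module wraps the runtime's `fetch` so each request's URL, status, and latency are handed to a telemetry sink. All names here (`instrumentFetch`, `TelemetryEvent`) are illustrative, not Latitude's API.

```typescript
type FetchLike = (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>;

interface TelemetryEvent {
  url: string;
  status: number;
  durationMs: number;
}

// Wrap a fetch-like function so every call records request metadata.
// Only metadata is captured here; the response body is untouched
// (capturing prompts/completions would require cloning the body).
function instrumentFetch(base: FetchLike, sink: (e: TelemetryEvent) => void): FetchLike {
  return async (input, init) => {
    const started = Date.now();
    const response = await base(input, init);
    sink({
      url: String(input),
      status: response.status,
      durationMs: Date.now() - started,
    });
    return response;
  };
}

// What a preload shim would do at load time (sendToCollector is hypothetical):
// globalThis.fetch = instrumentFetch(globalThis.fetch, sendToCollector);
```

An MCP-transport hook would sit one layer up and see structured tool-call messages instead of raw HTTP, which changes what sensitive context flows through the sink.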
This is solving a real problem. I run automated agents (social media engagement, security audits, competitor monitoring) and the hardest part isn't building them — it's knowing when they silently fail. I built a custom "doctor" module that diagnoses and self-heals agent errors, but a proper observability layer would have saved me weeks.
The "auto-generated evals from production failures" is the killer feature here. How granular is the token cost tracking per agent task?
Does the recurring failure detection use heuristics, or are you applying some ML based pattern analysis?
Latitude
@xavier_hernandez2 Both
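To make "both" concrete, here is what the heuristic half of recurring-failure detection could look like — purely a sketch, not Latitude's implementation: normalize error messages into signatures so that runs failing the same way group together, with an ML pass left to cluster whatever the heuristics miss.

```typescript
// Collapse variable parts of an error message (numbers, hex ids, quoted
// strings) into placeholders so repeated failures share one signature.
function failureSignature(message: string): string {
  return message
    .toLowerCase()
    .replace(/0x[0-9a-f]+/g, "<hex>")
    .replace(/\d+/g, "<n>")
    .replace(/["'].*?["']/g, "<str>")
    .trim();
}

// Count how often each signature occurs across a batch of failed runs;
// signatures with high counts are candidate "recurring failures".
function groupFailures(messages: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const m of messages) {
    const sig = failureSignature(m);
    counts.set(sig, (counts.get(sig) ?? 0) + 1);
  }
  return counts;
}
```

For example, "Timeout after 30s" and "timeout after 45s" both normalize to "timeout after <n>s" and would be flagged as one recurring failure mode.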