AI agents in production lack the observability they need.
When something breaks, developers have no trace of which
call failed, what the model received, or why costs spiked.
AgentLens solves this with full session traces across every
LLM call: costs, latency, errors, and prompt/completion logs.
Framework-agnostic. Proxy-based integration requires zero
code changes.
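Proxy-based integration typically works by redirecting the model
provider's base URL at the proxy. A minimal sketch, assuming the
AgentLens proxy listens on localhost:4000 (a hypothetical address) and
the application uses the official OpenAI Python SDK, which reads the
`OPENAI_BASE_URL` environment variable:

```python
import os

# Assumption: the AgentLens proxy listens on localhost:4000 and forwards
# requests to the real provider while recording traces. The address is
# hypothetical; use whatever your deployment exposes.
#
# In practice you would export this in your shell or deployment config,
# which is what makes the integration zero-code. It is set in-process
# here only so the sketch is self-contained.
os.environ["OPENAI_BASE_URL"] = "http://localhost:4000/v1"

# The OpenAI Python SDK picks up OPENAI_BASE_URL at client construction,
# so every call flows through the proxy with no application changes:
#
#   from openai import OpenAI
#   client = OpenAI()  # requests now pass through the proxy

print(os.environ["OPENAI_BASE_URL"])
```

The same redirect applies to any SDK or framework that lets you
override its API endpoint, which is what makes the approach
framework-agnostic.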
Self-hostable via Docker Compose. TypeScript and Python SDKs
included. MIT licensed.
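Self-hosting via Docker Compose usually reduces to a couple of
commands. A sketch, assuming a checkout whose repository root ships a
docker-compose.yml (the file location and service layout are
assumptions, not confirmed by the project):

```shell
# Run from the repository root, which is assumed to contain the
# project's docker-compose.yml. Starts whatever services the compose
# file defines, detached in the background.
docker compose up -d

# Tail logs to confirm the services came up.
docker compose logs -f
```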