AI agents in production lack the observability they need.
When something breaks, developers have no trace of which
call failed, what the model received, or why costs spiked.

AgentLens solves this: full session traces across every LLM
call, covering costs, latency, errors, and prompt/completion
logs. Framework-agnostic, with proxy-based integration that
requires zero code changes.
Self-hostable via Docker Compose. TypeScript and Python SDKs
included. MIT licensed.

AgentLens: Full observability for AI agents. Zero code changes.
Farzan Hossan Shaikat left a comment:
Hey Product Hunt 👋 Developers building AI agents in production face a problem most observability tools don't solve — visibility into full agent runs, not just individual API calls. AgentLens is built for that gap. The proxy approach means complete observability with a single environment variable change. No SDK required to get started. Works with OpenAI, Anthropic, LangChain, LlamaIndex, or any...
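A logging proxy of the kind the comment describes can be sketched with Python's standard library alone. Everything below is an illustrative assumption, not AgentLens internals: the endpoint path, the trace fields, and the canned response (a real proxy would forward the request upstream and return the provider's answer). The handler records each request's payload and latency before replying.

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory trace log; a real proxy would persist these records
# and forward each request to the upstream LLM provider.
TRACES = []

class TracingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        start = time.monotonic()
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Canned response instead of a real upstream call,
        # so the sketch runs offline.
        reply = json.dumps(
            {"choices": [{"message": {"content": "ok"}}]}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)
        # Record the trace after the response is written.
        TRACES.append({
            "path": self.path,
            "request": json.loads(body or b"{}"),
            "latency_ms": (time.monotonic() - start) * 1000,
        })

    def log_message(self, *args):
        pass  # silence default stderr access logging

def make_proxy(port: int = 0) -> HTTPServer:
    """Bind the tracing proxy; port 0 picks a free ephemeral port."""
    return HTTPServer(("127.0.0.1", port), TracingHandler)
```

Because the application only sees a different base URL, this is why no SDK change is needed on the client side: the proxy observes every request and response in transit.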
