OpenLIT's Zero-code LLM Observability - Trace LLM requests + costs with OpenTelemetry monitoring
Zero-code full-stack observability for AI agents and LLM apps. OpenTelemetry-native monitoring for LLMs, VectorDBs, and GPUs with built-in guardrails, evaluations, prompt hub, and a secure vault. Fully self-hostable anywhere.



Replies
OpenLIT
Hey Product Hunt!
I'm Aman Agarwal, founder and maintainer of OpenLIT. After speaking with over 50 engineering teams in the past year, we consistently heard the same frustration: "We want to monitor our LLMs and Agents, but changing code and redeploying would slow down our launch."
Every team told us the same story: even though most LLM monitoring tools only require a few lines of integration code, the deployment overhead kills momentum. They'd spend days testing changes, rebuilding Docker images, updating deployment files, and coordinating deployments just to get basic LLM monitoring.
At scale, it's worse: imagine modifying and redeploying 10+ AI services individually.
That's why we built OpenLIT with true zero-code observability. No code changes, no image rebuilds, no deployment file changes.
Two paths, same result: choose whichever fits your setup.
We also learned teams have strong opinions about their observability stack, so while we use OpenLIT instrumentations by default, you can bring your own (OpenLLMetry, OpenInference, custom setups), and we just handle the zero-code injection part.
The best part? It works with whatever you're already using - OpenAI, Anthropic, LangChain, CrewAI, custom agents. No special SDKs or vendor lock-in.
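For the Kubernetes path, the zero-code injection is handled by the OpenLIT Operator. A minimal sketch of what opting an existing Deployment in could look like is below; the annotation key and value are illustrative assumptions, not the operator's confirmed schema, so check the operator docs for the real selector.

```yaml
# Sketch only: opting an existing Deployment into auto-instrumentation
# via a pod annotation, with no image rebuild or code change.
# NOTE: "openlit.io/instrument" is a hypothetical annotation key used
# for illustration; consult the OpenLIT Operator docs for the actual one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-llm-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-llm-service
  template:
    metadata:
      labels:
        app: my-llm-service
      annotations:
        openlit.io/instrument: "true"   # hypothetical opt-in flag
    spec:
      containers:
        - name: app
          image: my-llm-service:latest  # unchanged application image
```

The point of the sketch: the application image and code stay untouched; only deployment metadata signals the operator to inject instrumentation.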
See for yourself:
GitHub: https://github.com/openlit/openlit
Docs: https://docs.openlit.io/latest/operator/overview
Quick Start: https://docs.openlit.io/latest/operator/quickstart
We're excited to launch OpenLIT's Zero-code LLM Observability capabilities on Product Hunt today. We'll be in the comments all day and can't wait to hear your thoughts & feedback!
OpenLIT
@aman_agarwal8 Exactly, and the best part is that it takes under 5 minutes to set up. No code changes, no redeploys of your LLM apps and AI agents. Say goodbye to delaying an AI feature because observability wasn't added.
UI Bakery
Quick question: if I already use a logging/observability stack (e.g. Datadog, Prometheus, etc.), how easy is it to integrate OpenLIT without duplicating or conflicting metrics?
OpenLIT
@vladimir_lugovsky OpenLIT will work on top of your existing stack. Even if you have existing OpenTelemetry instrumentation, OpenLIT is designed to work alongside it.
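One common way to avoid duplicated or conflicting metrics is to fan everything out through a single OpenTelemetry Collector, so OTLP data from OpenLIT-instrumented apps and your existing telemetry share one pipeline. A rough sketch follows; the exporter names come from the Collector and its contrib distribution, while endpoints and keys are placeholders:

```yaml
# Sketch: one Collector pipeline receiving OTLP from instrumented apps
# and exporting to an existing stack. Endpoints and keys are placeholders.
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  prometheus:              # exposes a scrape endpoint for an existing Prometheus
    endpoint: "0.0.0.0:8889"
  datadog:                 # requires the Collector "contrib" distribution
    api:
      key: ${env:DD_API_KEY}
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus, datadog]
    traces:
      receivers: [otlp]
      exporters: [datadog]
```

With this shape, new LLM telemetry lands in the same backends you already query, rather than a parallel, duplicated stream.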
MightyMeld for Tailwind
It's been amazing to watch Aman and the team grow since we first met (over a year ago now, was it?). OpenLIT is amazing ❤️ and the team has so much passion.
OpenLIT