AgentPulse – AI Agent Observability – Stop burning money blind on LLM API calls
Running AI agents in production but have no idea what they actually cost? AgentPulse gives you real-time visibility into every LLM API call: costs, tokens, latency, and errors.
Install our Python plugin in 2 minutes with zero code changes.
No SDK. No wrappers. Just plug it in and instantly see where your money goes.
Works with Claude, GPT-4, MiniMax, and more. A free tier is available, so you can start monitoring your first agent today.
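Purely as an illustration of the zero-code-change setup described above, a typical flow might look like the sketch below. The package name, CLI command, and config keys are placeholders, not AgentPulse's actual interface:

```bash
# Hypothetical setup sketch -- install the collector next to your agent
pip install agentpulse   # placeholder package name

# Describe which providers to watch in a config file (illustrative keys only)
cat > agentpulse.yaml <<'EOF'
api_key: YOUR_AGENTPULSE_KEY
providers:
  - anthropic
  - openai
  - minimax
EOF

# Run the collector alongside your agent -- the agent code itself is untouched
agentpulse run --config agentpulse.yaml
```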

Replies
Hey Product Hunt! I built AgentPulse because I was running AI agents and had no clue how much they were actually costing me. I'd check my API dashboard at the end of the month and get surprised every time.
AgentPulse sits alongside your agent and tracks every LLM call: which model was used, how many tokens it consumed, how long it took, whether it failed, and exactly what it cost.
What makes it different:
2-minute setup: pip install or npm plus a config file, no code changes
Works across multiple LLM providers (Anthropic, OpenAI, MiniMax)
Smart alerts for cost spikes and consecutive failures (rough config sketch after this list)
Free tier to get started immediately
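To make the alerts idea concrete, here is a rough sketch of what threshold configuration could look like; these keys and values are invented for illustration and are not AgentPulse's real schema:

```bash
# Illustrative only -- appends hypothetical alert thresholds to the config sketched earlier
cat >> agentpulse.yaml <<'EOF'
alerts:
  daily_cost_limit_usd: 25     # flag a cost spike when daily spend crosses this
  consecutive_failures: 3      # flag after this many failed LLM calls in a row
  notify: email                # delivery channel for alerts
EOF
```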
I'd love your feedback: what features would make this a must-have for your AI workflow?