What's the one thing you wish you could see inside your AI agent's brain?
I've been building ClawMetry for the past 5 weeks. It's at 90k+ installs across 100+ countries.
The observability features I built first were the ones I personally needed: a live execution graph (Flow tab), full decision transcripts (Brain tab), token cost tracking per session, and visibility into sub-agent spawns.
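To give a concrete flavor of what per-session token cost tracking means, here's a minimal sketch of the idea in Python. This is illustrative only, not the actual internals: the class name is made up and the per-1M-token prices are placeholders. It just shows the shape: accumulate token usage per model call, convert to dollars on demand.

```python
# Illustrative sketch of per-session token cost tracking.
# Not the real implementation; class name and prices are hypothetical.
from dataclasses import dataclass

# Hypothetical per-1M-token prices; real prices vary by model and provider.
PRICE_PER_1M = {"gpt-4o": {"input": 2.50, "output": 10.00}}

@dataclass
class SessionCostTracker:
    model: str
    input_tokens: int = 0
    output_tokens: int = 0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        # Accumulate usage reported by each model call in the session.
        self.input_tokens += input_tokens
        self.output_tokens += output_tokens

    @property
    def cost_usd(self) -> float:
        # Convert accumulated tokens to dollars at per-1M-token rates.
        p = PRICE_PER_1M[self.model]
        return (self.input_tokens * p["input"]
                + self.output_tokens * p["output"]) / 1_000_000

tracker = SessionCostTracker(model="gpt-4o")
tracker.record(input_tokens=1_200, output_tokens=350)
print(f"Session cost so far: ${tracker.cost_usd:.4f}")  # -> $0.0065
```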
But I keep hearing variations of the same thing: "I don't really know what my agents are doing." And everyone means something slightly different by that.
For some it's costs. For some it's timing (why did this take 4 minutes?). For some it's trust (did the agent actually do what I think it did?). For some it's failures (where exactly did it break?).
So I want to ask you directly:
If you're running AI agents today, what's the one thing missing from your observability setup? What would make you feel like you actually understand what's happening inside your agents?
Options I'm thinking about next:
- Alerting (get notified when an agent fails or goes over budget; rough sketch of what I mean below this list)
- Cost per task breakdown (not just per session)
- Agent run comparisons (before/after a prompt change)
- Memory snapshots (what did the agent "know" at each decision point)
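To make the alerting option concrete, here's the rough trigger logic I'm imagining. Nothing here is shipped; the webhook URL, the `run` record, and the field names are all placeholders, purely to show the shape of "notify on failure or budget overrun."

```python
# Rough sketch of the alerting idea; everything here is hypothetical.
import json
import urllib.request

BUDGET_USD = 5.00
WEBHOOK_URL = "https://hooks.example.com/alerts"  # placeholder endpoint

def notify(message: str) -> None:
    # POST the alert as JSON; Slack, PagerDuty, etc. would slot in here.
    body = json.dumps({"text": message}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

def on_run_finished(run: dict) -> None:
    # `run` is a hypothetical record emitted when an agent run ends.
    if run["status"] == "failed":
        notify(f"Agent {run['agent']} failed at step {run['step']}")
    if run["cost_usd"] > BUDGET_USD:
        notify(f"Agent {run['agent']} went over budget: ${run['cost_usd']:.2f}")

# Example (would fire two alerts: one for the failure, one for the overrun):
# on_run_finished({"agent": "researcher", "status": "failed",
#                  "step": "web_search", "cost_usd": 6.10})
```

The open design question is whether this should poll completed runs or hook into a run-completion event, which is exactly the kind of feedback I'm hoping for here.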
Drop your answer below. The next feature I build will be heavily influenced by this thread.
(ClawMetry is free to try locally: pip install clawmetry. Cloud: app.clawmetry.com, $5/node/month, 7-day free trial.)
"Consilium Belli" – Summoning the Roman War Council to stress-test my landing page & business model
Roman generals never went into battle with an untested plan. They convened a consilium, a no-holds-barred council of war where officers could openly criticize strategy, expose flaws, and prevent stupid mistakes before it was too late.
I'm doing the same.


