Halton Labs

Halton Meter - Per-project LLM cost tracking. Local. No code changes.

Halton Meter is a local proxy that observes outbound LLM traffic, attributes every request to a project, and computes exact cost. One command. No code changes. Captures Claude, OpenAI, Gemini, Grok — plus OAuth surfaces like ChatGPT and Gemini Code Assist.

Halton Labs
Maker
I run a small software studio that leans heavily on Claude for client work. The problem I kept running into: clients would ask what the AI actually cost, and I had no clean answer. Token-count estimates drifted from the real bill, and nothing told me which project was spending what.

I looked at SDK wrappers first, but they only catch calls you make directly: not ChatGPT, not Gemini Code Assist, not anything going through a tool you don't control. A local proxy captures everything on the wire without touching a single line of application code.

So Halton Meter intercepts outbound LLM API traffic, attributes each request to a project (by env var, working directory, or process tree), and writes the cost to a local SQLite database. Nothing about how you call the API changes. `pipx install halton-meter` followed by `halton-meter init --apps` is the entire install.

It supports Claude, OpenAI, Gemini, and Grok, including OAuth surfaces. No cloud. No tracking. Your data stays on your machine.

Happy to answer questions about how the attribution chain works or how HTTPS interception is handled.
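Because everything lands in a local SQLite file, per-project totals are one query away. A minimal sketch of what that looks like; the table and column names here (`requests`, `project`, `cost_usd`) are illustrative assumptions, not Halton Meter's actual schema:

```python
import sqlite3

# Hypothetical schema for illustration; in practice, open the DB file
# that halton-meter writes instead of an in-memory stand-in.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (project TEXT, provider TEXT, cost_usd REAL)")
conn.executemany(
    "INSERT INTO requests VALUES (?, ?, ?)",
    [
        ("acme-site", "claude", 0.42),
        ("acme-site", "openai", 0.10),
        ("internal-tools", "gemini", 0.05),
    ],
)

# Total spend per project, highest first.
for project, total in conn.execute(
    "SELECT project, SUM(cost_usd) FROM requests GROUP BY project ORDER BY 2 DESC"
):
    print(f"{project}: ${total:.2f}")
```

The nice property of a plain SQLite store is that any reporting you want (per-client invoices, per-provider breakdowns) is a `GROUP BY` away, with no export step.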