Launched this week

EcoToken
AI Prompt Optimization and Prompt Library Management.
3 followers
Optimize AI prompts, track real token + dollar savings, and calibrate per project. Works with Claude, GPT-4o, Gemini, and any LLM you already use. Free to start, $5/mo with your own Anthropic key.
Last quarter I spent $340 on the Claude API for one project. When I looked at
the logs, ~50% of my tokens were going to clarification follow-ups:
prompts that weren't specific enough the first time.
So I built EcoToken: it rewrites your prompts to be denser, then tracks
the actual tokens + dollars you saved across runs. Works with Claude,
GPT-4o, Gemini, or any LLM (BYO key for $5/mo, free tier with 10 runs).
Live at: ecotoken.dev (free tier, no card required).
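For context on how per-run savings like these can be tracked, here's a minimal sketch. The class names, price table, and model identifiers are my own illustration, not EcoToken's actual code; dollar savings are just tokens saved times a per-token rate:

```python
# Hypothetical sketch of per-run token/dollar savings tracking.
# All names and prices below are illustrative assumptions, not
# EcoToken's API or current provider pricing.

from dataclasses import dataclass

# Illustrative input prices per million tokens (check real pricing pages)
PRICE_PER_MTOK = {
    "claude-sonnet": 3.00,
    "gpt-4o": 2.50,
}

@dataclass
class Run:
    model: str
    original_tokens: int    # tokens the unoptimized prompt used
    optimized_tokens: int   # tokens after rewriting

    @property
    def tokens_saved(self) -> int:
        return self.original_tokens - self.optimized_tokens

    @property
    def dollars_saved(self) -> float:
        return self.tokens_saved * PRICE_PER_MTOK[self.model] / 1_000_000

def total_savings(runs):
    # Aggregate savings across runs, regardless of which model each used
    return (sum(r.tokens_saved for r in runs),
            sum(r.dollars_saved for r in runs))

runs = [Run("claude-sonnet", 1200, 700), Run("gpt-4o", 900, 600)]
tok, usd = total_savings(runs)
print(tok, usd)  # 800 tokens, $0.00225 across both runs
```

Multiplied across thousands of runs, those fractions of a cent are where a quarterly bill like the $340 above comes down.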
Happy to answer any technical questions about the optimization approach
(it's a mix of whitespace compression, Claude Haiku rewriting, and
per-project calibration).
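The whitespace-compression step is the easiest to illustrate. A minimal sketch (my own illustration under assumed behavior, not EcoToken's implementation): collapse runs of spaces and tabs, trim line ends, and deduplicate blank lines, while keeping single newlines so prompt structure like lists survives:

```python
import re

def compress_whitespace(prompt: str) -> str:
    # Hypothetical sketch of a whitespace-compression pass.
    # Collapse runs of spaces/tabs and trim each line, but preserve
    # newlines so lists and headers keep their structure.
    lines = [re.sub(r"[ \t]+", " ", ln).strip() for ln in prompt.splitlines()]
    out = []
    for ln in lines:
        # Reduce consecutive blank lines to a single blank line
        if ln == "" and out and out[-1] == "":
            continue
        out.append(ln)
    return "\n".join(out).strip()

before = "Please   summarize:\n\n\n  - point   one\n  - point two   "
print(compress_whitespace(before))
# Please summarize:
#
# - point one
# - point two
```

Every character removed here is saved on every run of the prompt, which is why even this trivial pass pays off before the Haiku rewrite does anything.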
@benjamin_barnes2 This is a clever insight on a real pain point. The $340 quarterly spend with half wasted on clarification loops is exactly the kind of inefficiency that compounds silently until you actually audit it. Your approach of using Haiku for rewriting plus per-project calibration makes sense—those prompt-specific baselines probably matter a lot since optimization isn't one-size-fits-all.