Launched this week

TextCompressor
Reduce your LLM API bill 11–45% with zero code changes
3 followers
TextCompressor is a drop-in proxy that compresses prompts before they reach your LLM, removing stop words and filler while preserving meaning. Point your existing OpenAI client at our API, add one header, done.

- Light: 16.7% token savings, -2.7pp accuracy
- Medium: 33.5% token savings, -5.1pp accuracy
- Aggressive: 45.9% token savings, -6.6pp accuracy

Works with OpenAI, Anthropic, Ollama, LM Studio, and anything else OpenAI-compatible. No AI is used in compression: it runs purely on CPU.
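A minimal sketch of what "point your client at the proxy, add one header" could look like. The base URL and header name below are illustrative assumptions, not documented TextCompressor values:

```python
# Sketch: route an OpenAI-compatible chat request through a compressing proxy.
# PROXY_BASE_URL and COMPRESSION_HEADER are hypothetical placeholders.
import json
import urllib.request

PROXY_BASE_URL = "https://api.textcompressor.example/v1"  # assumption
COMPRESSION_HEADER = "X-Compression-Level"                # assumption

def build_request(prompt: str, level: str = "medium") -> urllib.request.Request:
    """Build a chat-completion request; the proxy compresses the prompt."""
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{PROXY_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",
            COMPRESSION_HEADER: level,  # the single added header
        },
        method="POST",
    )

req = build_request("Please summarize the following report ...", level="light")
print(req.full_url)
print(req.get_header("X-compression-level"))
```

Because the proxy speaks the OpenAI wire format, the same effect with an official SDK would just mean changing the client's base URL and passing the extra header as a default.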
