Launched this week

Edgee
The AI Gateway that TL;DR tokens
221 followers
Edgee compresses prompts before they reach LLM providers and reduces token costs by up to 50%. Same code, fewer tokens, lower bills.

Hey Product Hunt 👋
I’m Sacha, co-founder of Edgee. Thanks for checking us out!
We built Edgee because we kept seeing the same thing everywhere:
AI costs are going crazy!
LLMs are easy to try, but once you ship them in production, costs explode and reliability becomes a mess.
Most teams start with direct calls to OpenAI or Anthropic, or simply use a coding assistant… then quickly end up dealing with:
Unpredictable token spend
Multiple provider APIs
Outages / rate limits
Security & privacy constraints
And no real observability across teams
Edgee is an AI Gateway built to reduce LLM costs and simplify production inference.
It gives you a single OpenAI-compatible API across providers, plus a layer of intelligence around inference:
✅ Token compression to remove redundant tokens and cut costs, with no semantic loss
✅ Routing & fallbacks across providers
✅ Observability + cost tracking you can trust
✅ Privacy & security controls (ZDR, BYOK...)
✅ Support for public + private models
✅ Edge Tools 🚀
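To make the "OpenAI-compatible" part concrete, here's a minimal sketch of what switching looks like. The base URL, key, and model name below are placeholders for illustration (not our real endpoint); the idea is that you keep the OpenAI SDK you already use and just point it at the gateway:

```python
# Minimal sketch: pointing the standard OpenAI Python SDK at an
# OpenAI-compatible gateway. The base_url, API key, and model name are
# illustrative placeholders, not Edgee's actual endpoint or model IDs.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com/v1",  # hypothetical gateway URL
    api_key="YOUR_GATEWAY_API_KEY",
)

# The same chat.completions call you make today; the gateway sits in the
# middle and handles compression, routing, and fallbacks transparently.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model exposed through the gateway
    messages=[{"role": "user", "content": "Summarize this incident report..."}],
)
print(response.choices[0].message.content)
```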
We're launching early and working closely with a small group of design partners, so feedback (even brutal feedback 😅) would mean a lot.
Happy to answer any questions here, and I’d love to hear how you’re handling LLM infra in production today!
Sacha
Go Edgee! Would love to know if you handle MCP and tool-usage optimisations? It's a real pain for long-running agents
Hey @marek_kalnik! We don't manage MCPs for now, but we have developed Edge Tools.
These are tools executed at the gateway level, before or after the call to the model. They can be verifications, transformations, enrichments, controls... even memory access!
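Roughly, the flow is request → pre-call tools → model → post-call tools. Here's a purely conceptual sketch in Python (not our actual configuration syntax, just to show the shape of it):

```python
# Conceptual sketch only — not Edgee's real Edge Tools API. It just shows
# the idea of tools running at the gateway before/after the model call.
import re

def pre_call_redact(request: dict) -> dict:
    """Before the prompt reaches the model: e.g. redact email addresses."""
    for msg in request["messages"]:
        msg["content"] = re.sub(r"[\w.+-]+@[\w-]+\.\w+", "[redacted]", msg["content"])
    return request

def post_call_enrich(answer: str) -> str:
    """After the model answers: e.g. append an audit/verification tag."""
    return answer + "\n[verified by gateway tool]"

request = {"messages": [{"role": "user", "content": "Email john@acme.com the summary"}]}
request = pre_call_redact(request)
# ... the actual model call happens here at the gateway ...
print(post_call_enrich("Here is the summary you asked for."))
```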
Love the focus on production problems vs demo features. Does the cost tracking integrate with existing observability tools (Datadog, etc.)?
@nielsrolland You raise a very interesting point! For now, data can be exported as CSV/JSON, but we're already working on integrations with partner solutions. If you know our history (which seems to be the case), you know how easy it is for us to send data to any solution... so we're not going to hold back from offering this feature to our users ;)
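In the meantime, the raw export is already usable. For example, something like this gives per-team spend; the file name and column names ("team", "cost_usd") are assumptions for illustration, so adjust them to whatever your actual export contains:

```python
# Sketch: summing spend per team from an exported cost CSV. The file name
# and columns ("team", "cost_usd") are assumed for illustration only.
import csv
from collections import defaultdict

totals = defaultdict(float)
with open("edgee_costs_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["team"]] += float(row["cost_usd"])

for team, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{team}: ${cost:,.2f}")
```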
Exciting launch! Congrats team
@paulblei Thanks a lot! Really appreciate it 🙌
If you get a chance to try Edgee, we’d love to hear what you think.
Thank you very much @paulblei. I must admit that the whole team is very excited as well. When we had the idea of using our edge computing skills to improve inference, I didn't have to insist for long to get buy-in, lol
@picsoung @rguignar Would looove to see Edgee plugged into Tellers, that’s a perfect fit, especially with agent/tool-heavy workflows where context can grow fast.
If you’d like, happy to help you set it up or jump on a quick call to make integration smooth.
Token costs are the new database query problem. This feels like the right abstraction layer.
How's the latency impact in practice?
I've been waiting to see companies start tackling this issue. Cost and efficiency are going to matter more and more as AI platforms come under increasing pressure to deliver revenue.