Edgee

The edge intelligence layer for AI inference

One OpenAI-compatible API for 200+ models. Route by cost/latency/quality, add failovers, and get end-to-end observability, with configurable privacy controls.
Sacha MORARD
Maker
Hey, I’m Sacha, co-founder of Edgee. Thanks for checking us out!

We built Edgee because teams told us the same thing over and over: “Using LLMs is easy. Running them safely, cheaply, and reliably in production is not.”

Most teams today:
- Call provider APIs directly (OpenAI, Anthropic, etc.)
- Have no real control over cost, security, or reliability
- End up rebuilding the same infra over and over

Edgee is an edge-native AI Gateway with intelligence around inference. We run lightweight workloads before the LLM call to:
- Select the right model for each prompt
- Remove or redact sensitive data (PII)
- Enforce cost and usage policies
- Add observability and failover across providers

All through a single, OpenAI-compatible API.

We’re launching early and working closely with design partners. Feedback (good or bad) would mean a lot. Happy to answer any questions here 👇
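To make the “lightweight workloads before the LLM call” idea concrete, here is a minimal sketch of two of those steps: redacting obvious PII, then building an OpenAI-compatible chat payload. The base URL, the `auto` routing model name, and the regex patterns are all illustrative assumptions, not Edgee’s documented behavior.

```python
import json
import re

# Hypothetical gateway endpoint -- an assumption, not the real Edgee URL.
EDGEE_BASE_URL = "https://gateway.example.com/v1/chat/completions"

# Deliberately rough PII patterns for illustration; a production gateway
# would use far stronger detection than two regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def build_request(prompt: str, model: str = "auto") -> dict:
    """Build an OpenAI-compatible chat payload, redacting PII first.

    "auto" stands in for gateway-side model routing (assumed name).
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": redact_pii(prompt)}],
    }

payload = build_request("Email jane.doe@acme.com about the invoice.")
print(json.dumps(payload, indent=2))
```

Because the payload keeps the OpenAI request shape, an existing OpenAI SDK could in principle be pointed at a gateway like this just by swapping its base URL and API key.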