Launching today

ZenMux
An Enterprise-Grade LLM Gateway with Automatic Compensation
250 followers
ZenMux is an enterprise-grade LLM gateway that makes AI simple and assured for developers through a unified API, smart routing, and an industry-first automatic compensation mechanism.
ZenMux
Hey Product Hunt! 👋
I'm Haize Yu, CEO of ZenMux. We’ve been heads-down building an enterprise-grade LLM gateway that actually puts its money where its mouth is. I’m thrilled to finally get your feedback on it today.
Why we built this
Scaling AI shouldn't feel like "fighting the infra." As builders, we grew tired of:
Juggling dozens of API keys and messy billing accounts.
Sudden "intelligence drops" or latency spikes in production.
Paying full price for hallucinations without any fallback. 😅
We thought: What if a gateway didn’t just route requests, but actually insured the outcome?
What ZenMux brings to your stack
Built-in Model Insurance: We’re the first to offer automatic credit compensation for poor outputs or high latency. We take the risk, so you don't have to.
Dual-Protocol Support: Full OpenAI & Anthropic compatibility. Works out-of-the-box with tools like Claude Code or Cline.
Transparent Quality (HLE): We run regular, openly published HLE (Humanity's Last Exam) evaluations. We invest in these benchmarks to keep model routing honest.
High Availability: Multi-vendor redundancy means you’ll never hit a rate-limit ceiling.
Global Edge Network: Powered by Cloudflare for rock-solid stability worldwide.
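On the dual-protocol point: existing OpenAI-SDK code should only need a different base URL. A minimal sketch, assuming a hypothetical endpoint path (check the docs for the real one):

```python
# Sketch: pointing existing OpenAI-SDK code at the gateway by swapping
# base_url. The endpoint path below is an assumption, not a documented URL.
ZENMUX_BASE_URL = "https://zenmux.ai/api/v1"  # hypothetical

def client_kwargs(api_key: str, base_url: str = ZENMUX_BASE_URL) -> dict:
    """Keyword arguments for openai.OpenAI(...); no other code changes needed."""
    return {"base_url": base_url, "api_key": api_key}

# With the openai package installed:
#   from openai import OpenAI
#   client = OpenAI(**client_kwargs("YOUR_ZENMUX_KEY"))
#   client.chat.completions.create(model="...", messages=[...])
kwargs = client_kwargs("YOUR_ZENMUX_KEY")
```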
Pricing that scales
Builder Plan: Predictable monthly subscriptions for steady development.
Pay-As-You-Go: No rate limits, no ceilings. Capacity scales freely with your traffic, and you only pay for what you actually use.
Launch Special
Bump up your credits! For a limited time: Top up $100, get a $10 bonus (10% extra).
One last thing...
What’s the biggest "production nightmare" you've faced with LLMs? Drop a comment—I'm here all day to chat!
Stop worrying. Start building. 🚀
https://zenmux.ai
BiRead
Model insurance for AI infra? That’s new. Curious to try it.
ZenMux
@luke_pioneero Appreciate it! 🙏 You hit it — the model insurance is new, but honestly the best part is what comes with the payout: real edge cases from your own usage, ready to plug back in and make your product smarter.
Curious to hear what you think once you try it! 🚀
ZenMux
@luke_pioneero Thank you! We built it because we felt infra shouldn’t shift all risk to builders.
Lancepilot
Congrats on the launch, ZenMux.
While everyone is building on LLMs, you’re building the backbone. Unified, intelligent, and enterprise-ready, that’s how real AI infrastructure scales.
Wishing you powerful integrations and unstoppable momentum ahead.
mymap.ai
Excited to follow your journey. Great launch!
ZenMux
@victorzh Thanks! Appreciate it. Stoked to have you along for the ride — more coming soon!
ZenMux
@victorzh Thank you so much! Really appreciate the support 🙌
An auto-compensation LLM gateway will hit scale pain when “bad output” disputes and p99 latency spikes turn into noisy payout events without reproducible traces.
Best practice is OpenTelemetry GenAI semantic conventions plus per-request lineage (prompt hash, model, router decision, retries) and optional hedged requests or circuit breakers to tame tail latency.
How are you defining and verifying “poor quality” for payouts, and can customers export the full compensation case bundle for audit and fine-tuning?
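For reference, the lineage-plus-hedging pattern this comment describes can be sketched locally. This is a toy illustration, not ZenMux's implementation: the provider functions, the "gpt-x" model string, and the hedge threshold are all made up.

```python
# Sketch: per-request lineage record plus a hedged (duplicated) request.
# Providers here are local stand-ins; nothing touches a real LLM API.
import concurrent.futures
import hashlib
import time

def lineage(prompt: str, model: str, router_decision: str, retries: int) -> dict:
    # Minimal audit record: enough to reproduce a payout dispute later.
    return {
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "model": model,
        "router_decision": router_decision,
        "retries": retries,
    }

def hedged_call(prompt: str, providers: dict, hedge_after_s: float = 0.1):
    """Fire the primary; if it hasn't answered within hedge_after_s,
    also fire the backup and return whichever finishes first."""
    names = list(providers)
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(providers[names[0]], prompt): names[0]}
        done, _ = concurrent.futures.wait(futures, timeout=hedge_after_s)
        if not done:  # primary is slow: hedge with the backup provider
            futures[pool.submit(providers[names[1]], prompt)] = names[1]
            done, _ = concurrent.futures.wait(
                futures, return_when=concurrent.futures.FIRST_COMPLETED)
        fut = next(iter(done))
        return futures[fut], fut.result()

# Stand-in providers: the primary is slow, the backup is fast.
providers = {
    "primary": lambda p: (time.sleep(0.5), "slow answer")[1],
    "backup": lambda p: "fast answer",
}
winner, answer = hedged_call("hello", providers)
rec = lineage("hello", model="gpt-x", router_decision=winner, retries=0)
```

A real gateway would attach a record like `rec` to every response so a "poor quality" payout can be tied to a reproducible trace rather than a vibe.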