Launching today

resillm

Production-ready resilience for LLM applications

resillm is a resilience proxy for LLM applications. Add automatic retries, provider fallbacks, circuit breakers, budget controls, and Prometheus metrics to any LLM app. Works with OpenAI, Anthropic, Azure OpenAI, and Ollama. Zero code changes.
Free
Launch Team

Maneesh Chaturvedi
Hey Product Hunt! 👋 I'm excited to share resillm, a project born from real pain.

While building SpecFlow (an AI-powered spec-driven development platform), we kept hitting the same problems every team building with LLMs faces: outages, rate limits, cost explosions, and zero visibility. We found ourselves writing the same retry logic, fallback handlers, and budget tracking in multiple places. Every LLM call ended up wrapped in try-catch blocks, and our codebase was getting messy. That's when we realized: this resilience layer should be infrastructure, not application code.

So we extracted it into resillm, a transparent proxy that handles all of this for you:

- Retries with exponential backoff (no more 429 crashes)
- Fallbacks between providers (primary down? Route to your backup)
- Circuit breakers (stop hammering a failing API)
- Budget controls (never wake up to a surprise bill)
- Metrics (know exactly what's happening)

It works with OpenAI, Anthropic, Azure OpenAI, and Ollama. The best part? Zero code changes. Just change your base URL (see the sketches after this comment).

What's next? This is just Phase 1. We're building toward LLM Chaos Engineering: think Netflix's Chaos Monkey, but for AI applications. Inject latency, simulate errors, and test your guardrails before production does it for you.

Looking for contributors! This is fully open source. We'd love help with:

- Adding providers (Bedrock, Vertex AI)
- Request caching
- Chaos injection features
- Documentation and examples

Star us on GitHub, try it out, and let us know what you think! Happy to answer any questions.
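
For context, here's the kind of hand-rolled retry boilerplate resillm is meant to pull out of application code. This is an illustrative sketch of the general pattern, not the proxy's actual internals; the function name and parameters are made up for this example.

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Exponential backoff with jitter -- the boilerplate that ends up
    copy-pasted around every LLM call when there's no proxy in front.
    (Illustrative sketch only; not resillm's actual implementation.)"""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # e.g. a 429 rate-limit error from the provider
            if attempt == max_retries - 1:
                raise
            # Double the delay each attempt, plus jitter so concurrent
            # workers don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))
```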
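
And here's what the zero-code-change claim looks like in practice with the official OpenAI Python SDK: the only edit is the base URL. The address below (`http://localhost:8000/v1`) is an assumption for illustration; use whatever host and port your resillm instance actually listens on.

```python
from openai import OpenAI

client = OpenAI(
    # Point the SDK at the resillm proxy instead of api.openai.com.
    # Hypothetical address; substitute your proxy's real host/port.
    base_url="http://localhost:8000/v1",
    api_key="sk-...",  # your provider key, forwarded by the proxy
)

# Everything else in your application stays exactly as it was.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```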