OpenAI-compatible endpoint. Single API that routes each request to the cheapest and fastest provider for that model. Works with closed and open LLMs. Real-time benchmarks (price, latency, load) run in the background. Usable directly now on Roo and Cline forks
The main difference from OR is that we do real-time arbitrage across the many providers we reference, so you always get the absolute best value for your \$ at the exact moment of inference, which we find very cool.
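To make the arbitrage idea concrete, here is a toy sketch (not MakeHub's actual code, and with made-up numbers and weights) of picking a provider from live benchmarks at request time:

```python
# Toy illustration of real-time provider arbitrage: given live benchmarks
# per provider, pick the best value at the moment of inference.
# All names, numbers, and weights below are illustrative assumptions.
providers = [
    {"name": "provider_a", "price_per_1k": 0.50, "latency_ms": 800, "load": 0.2},
    {"name": "provider_b", "price_per_1k": 0.30, "latency_ms": 1200, "load": 0.9},
    {"name": "provider_c", "price_per_1k": 0.35, "latency_ms": 600, "load": 0.4},
]

def score(p, price_weight=1.0, latency_weight=0.001, load_weight=0.5):
    # Lower is better: cheap, fast, and lightly loaded. Weights are arbitrary.
    return (price_weight * p["price_per_1k"]
            + latency_weight * p["latency_ms"]
            + load_weight * p["load"])

best = min(providers, key=score)
print(best["name"])  # provider_c under these toy numbers
```

In the real service the benchmark numbers would be refreshed continuously, so the winning provider can change between two consecutive requests.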
MakeHub.ai
Really exciting! The real-time arbitrage feature sounds like a game-changer for optimizing cost and performance. How does it handle model compatibility across different providers, especially with closed LLMs?
MakeHub.ai
@evgenii_zaitsev1 It handles those cases well. We spent a lot of time standardizing everything from tool calls to prompt caching, so overall it works as if you had only a single API key. Closed LLMs were particularly difficult because most of them have their own framework and claim to expose OpenAI-compatible endpoints, but not all of their features actually work that way, so we had to build proxies to bridge the gaps. Hope that answers your question!
💸⚙️ MakeHub.ai launches today! Smart LLM provider arbitrage = max performance for every dollar spent. AI devs, it’s optimization season 📊🤖
Scrapeless
Hi Make Hub team,
Big congratulations on your product launch — it looks truly impressive and caught our attention! 🚀
I’m Liam from Scrapeless, and we’d love to explore a potential collaboration with you.
We offer a robust Deep SERP API, providing high-quality access to both Google Search and Google Trends data — fast, reliable, and tailored for AI-native products and analytics workflows.
We’d love to offer you free access to our API in exchange for a mention or shoutout on your Twitter or LinkedIn, and we're also happy to cover the promotion costs to help boost your visibility.
If this sounds interesting, I’d love to chat more — feel free to suggest a time or just reply here!
Smoopit
@romain_batlle Awesome work! Just curious, how can we use Crew AI with LangChain?
MakeHub.ai
@rachitmagon Trivially! We ship an OpenAI-compatible endpoint, meaning you simply have to modify the base_url and you are good to go with both Crew AI and LangChain:
More details in the doc here: https://www.makehub.ai/docs/basic-usage/quick-start
Smoopit
@romain_batlle Awesome, that makes it super easy. Thanks for clarifying