OpenAI API Proxy

A proxy that exposes the same OpenAI API for different LLM models.

Provides the same OpenAI API interface as a proxy for different LLM models (OpenAI, Anthropic, Vertex AI, Gemini), and supports deployment to any Edge Runtime environment. ✅ Compatible with the OpenAI API ✅ Supports multiple models ✅ Suitable for Edge environments
Free · Launch tags: API, Open Source, GitHub

rxliuli
Maker
📌
I'm using Vertex AI's Anthropic models, but I found that many LLM tools don't support configuring them directly. That prompted me to develop this API proxy. With it, I can seamlessly use other AI models in any tool that supports the OpenAI API. There are commercial services that resell LLM tokens, but they usually route requests through their own servers, and there's no need for yet another third party to know how I'm using these models. The proxy can be deployed to any Edge Runtime environment, such as Cloudflare Workers, which offers individuals up to 100k free requests per day.
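Because the proxy speaks the OpenAI API, a client only needs to swap the base URL. A minimal sketch of what such a request looks like, assuming an OpenAI-compatible `/chat/completions` endpoint; the proxy URL, API key, and model name below are placeholders, not values from the project's docs:

```typescript
interface ChatRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

// Build an OpenAI-style chat completion request aimed at the proxy.
// baseUrl would be your own deployment, e.g. a Cloudflare Worker URL.
function buildChatRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  prompt: string,
): ChatRequest {
  return {
    url: `${baseUrl}/chat/completions`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model, // e.g. an Anthropic model exposed through the proxy
      messages: [{ role: "user", content: prompt }],
    }),
  };
}
```

The returned object maps directly onto a `fetch(url, { method, headers, body })` call, which is exactly what existing OpenAI-API tools already do under the hood.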
Kimberly Johnson
Congrats on launching this API proxy! Super useful for folks who wanna avoid routing through third parties. Can't wait to see how this evolves. Good luck!
Asher Dorian Thorne
If you're into AI, this proxy is a game changer! The fact that it's open-source and can be deployed on Edge Runtime environments is a big plus. No need for a third party. Good luck with the launch!
rxliuli
Maker
Update 2024-08-27

Several new models are supported, and Groq's speed is indeed amazingly fast. Why is it so quick?
- [x] Groq
- [x] DeepSeek
- [x] Moonshot

What's the next model you want to support?
rxliuli
Maker
Update 2024-08-29

- Supports the Cerebras models, which seem to be extremely fast (1700 tok/s); see the blog post https://cerebras.ai/blog/introdu...
- Supports Azure OpenAI models. Because deployment names and model names can differ, you need to configure aliases that map each model to its deployment name, e.g. `gpt-4o-mini:gpt-4o-mini-dev,gpt-35-turbo:gpt-35-dev`; everything else is configured the same as the official API.
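The alias string above is a comma-separated list of `model:deployment` pairs. As a hypothetical sketch (the project's actual parsing code isn't shown here), turning that string into a lookup table could be as simple as:

```typescript
// Parse an alias spec like "gpt-4o-mini:gpt-4o-mini-dev,gpt-35-turbo:gpt-35-dev"
// into a map from OpenAI model name to Azure deployment name.
// This is an illustrative sketch, not the proxy's actual implementation.
function parseAliases(spec: string): Map<string, string> {
  const aliases = new Map<string, string>();
  for (const pair of spec.split(",")) {
    const [model, deployment] = pair.split(":").map((s) => s.trim());
    if (model && deployment) {
      aliases.set(model, deployment);
    }
  }
  return aliases;
}
```

With such a map, the proxy can accept the familiar model name from the client and substitute the deployment name when forwarding the request to Azure.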