Switch between ChatGPT and Claude without losing memory or context
We just shipped multi-provider support in @Mnexium AI, so you can change LLMs without resetting conversations, user context, or memories.
The problem
When teams switch providers, they usually lose everything:
conversation history
user preferences
long-term memory
learned context
Every conversation starts from zero. Not great for UX, or for retention.
What Mnexium does
Mnexium now works with both:
OpenAI (ChatGPT)
Anthropic (Claude)
Your memory layer stays intact: chats, profiles, and long-term memory persist across providers.
Why it matters
🔀 Flexibility
Start a thread on GPT-4, continue it on Claude; the context follows.
💰 Smart routing
Send simple queries to cheaper models and complex ones to premium options, without rebuilding anything.
🧪 True A/B testing
Compare ChatGPT vs Claude using the exact same memory and history.
🛡️ Built-in failover
Provider outage? Switch models instantly, without losing state.
🚀 Future-proof architecture
New LLM launches? Plug it in. Mnexium handles the memory layer.
How it works
Just include your provider key in the headers:
x-openai-key → ChatGPT models
x-anthropic-key → Claude models
Same endpoint. Same mnx parameters.
Memories, history, profiles, and agent state all persist automatically.
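To make that concrete, here is a minimal sketch of a provider switch. Only the two key headers come from the post above; the endpoint URL, the mnx_user_id body field, and the response handling are illustrative assumptions, not the documented API:

```typescript
// Minimal sketch, assuming a JSON chat endpoint. The URL and the
// mnx_user_id field are assumptions for illustration.
const ENDPOINT = "https://api.mnexium.com/v1/chat"; // assumed URL

async function chat(provider: "openai" | "anthropic", message: string) {
  // Same endpoint and body either way; only the key header differs.
  const keyHeader =
    provider === "openai"
      ? { "x-openai-key": process.env.OPENAI_API_KEY! }        // ChatGPT models
      : { "x-anthropic-key": process.env.ANTHROPIC_API_KEY! }; // Claude models

  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "content-type": "application/json", ...keyHeader },
    body: JSON.stringify({ mnx_user_id: "user-123", message }),
  });
  return res.json();
}

// Start a thread on an OpenAI model, continue it on Claude:
await chat("openai", "Summarize our last conversation.");
await chat("anthropic", "Continue from where we left off.");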
Users never notice the switch.
They just experience an AI that remembers them.
More providers coming soon. Which one should we add next?
See our docs: https://www.mnexium.com/docs



Replies
MultiDrive
Am I right that I can share the same project between two LLMs?
Mnexium AI
@tetianai Yes, Mnexium is independent of the model providers. So whether you use ChatGPT, Claude, or any future supported model, your history, memories, agent states, etc. are all interchangeable. For example, look at the code example in this gist:
We tell ChatGPT that the user's favorite color is blue. Later we ask Claude to make a UI theme, and since Claude can recall that the user's favorite color is blue, it advises accordingly.
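The gist itself isn't linked here, so this is a hedged sketch of that flow; the endpoint URL and body fields are illustrative assumptions, not the gist's exact code:

```typescript
// Illustrative sketch only: endpoint URL and body fields are assumed.
const ENDPOINT = "https://api.mnexium.com/v1/chat";

async function ask(keyHeader: Record<string, string>, message: string) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "content-type": "application/json", ...keyHeader },
    body: JSON.stringify({ mnx_user_id: "user-123", message }),
  });
  return res.json();
}

// 1) Tell ChatGPT a fact; Mnexium writes it to the shared memory layer.
await ask(
  { "x-openai-key": process.env.OPENAI_API_KEY! },
  "My favorite color is blue."
);

// 2) Ask Claude later; it reads the same memory, so the theme comes back blue.
const theme = await ask(
  { "x-anthropic-key": process.env.ANTHROPIC_API_KEY! },
  "Design a UI theme for me."
);
console.log(theme);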
MultiDrive
@marius_ndini Wow, it looks like a rocket, since we're all using different LLMs for the same tasks.
Mnexium AI
@tetianai Appreciate the comment. We certainly think so, and hope to fill a space that is currently neglected but highly valuable to LLM and AI builders.
Mnexium AI
We're also excited to announce that Gemini support is now live. @Mnexium AI now fully supports all three major model providers.
SEO Stuff
When you add new providers, will users be able to set priority routing rules, like sending certain tasks to specific LLMs automatically?
Mnexium AI
@haiqa_irfan
Great question! Priority routing is on our roadmap as we add more providers and features.
We're planning several smart routing options:
Cost-based routing: Route simple queries to cheaper models (GPT-4o-mini, Claude Haiku) and complex ones to premium models (GPT-4o, Claude Opus)
Task-based routing: Automatically send coding tasks to one model, creative writing to another
Fallback chains: If your primary provider fails or is rate-limited, automatically retry with a backup
Latency-based routing: Route to the fastest available provider for time-sensitive requests
How it might work:
To set expectations: this feature is still very much in early-stage ideation.
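Purely as a sketch of the idea, this is what routing rules could look like; none of these field names or model identifiers are shipped, and all of them are hypothetical:

```typescript
// Hypothetical routing-rules config; every field here is speculative.
const routingRules = {
  cost: {
    simple: "gpt-4o-mini",  // cheap model for simple queries
    complex: "claude-opus", // premium model for hard ones
  },
  task: {
    coding: "gpt-4o",                  // send coding tasks here
    creative_writing: "claude-sonnet", // and creative writing here
  },
  fallback: ["openai", "anthropic"], // retry order on outage or rate limit
  latency: { preferFastest: true },  // time-sensitive requests
};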
Current state: Right now you can manually specify which model to use per request, and we support both OpenAI and Claude. The automatic routing rules are something we're actively designing.
Would love to hear what routing rules would be most valuable for your use case; that feedback helps us prioritize!