marius ndini

Switch between ChatGPT and Claude without losing memory or context


We just shipped multi-provider support in @Mnexium AI, so you can change LLMs without resetting conversations, user context, or memories.

The problem

When teams switch providers, they usually lose everything:

  • conversation history

  • user preferences

  • long-term memory

  • learned context

Every conversation starts from zero. Not great for UX or retention.

What Mnexium does

Mnexium now works with both:

  • OpenAI (ChatGPT)

  • Anthropic (Claude)

Your memory layer stays intact: chats, profiles, and long-term memory persist across providers.

Why it matters

🔄 Flexibility
Start a thread on GPT-4, continue it on Claude; the context follows.

💰 Smart routing
Send simple queries to cheaper models and complex ones to premium options, without rebuilding anything.

🧪 True A/B testing
Compare ChatGPT vs Claude using the exact same memory and history.

πŸ›‘οΈ Built-in failover
Provider outage? Switch models instantly, without losing state.

🚀 Future-proof architecture
New LLM launches? Plug it in. Mnexium handles the memory layer.

How it works

Just include your provider key in the headers:

x-openai-key     → ChatGPT models
x-anthropic-key  → Claude models

Same endpoint. Same mnx parameters.
Memories, history, profiles, and agent state β€” all persist automatically.
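
Here's a minimal sketch of what that looks like from the client side, in Python. Only the provider headers, the shared endpoint and mnx parameters, and the subject_id idea come from this post; the endpoint URL, payload shape, and helper name are illustrative assumptions, so check the docs for the exact API.

import requests

# Hypothetical endpoint; the real URL is in the Mnexium docs.
MNX_ENDPOINT = "https://www.mnexium.com/api/chat"

def chat(message, model, provider_headers, subject_id="user-123"):
    """One chat turn. Memory is keyed by subject_id, not by provider."""
    resp = requests.post(
        MNX_ENDPOINT,
        headers=provider_headers,
        json={
            "model": model,
            "messages": [{"role": "user", "content": message}],
            # Same mnx parameters regardless of provider (shape assumed).
            "mnx": {"subject_id": subject_id},
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Start a thread on a ChatGPT model...
chat("Hi! I'm planning a trip to Lisbon.", "gpt-4o",
     {"x-openai-key": "sk-..."})

# ...and continue it on Claude; the context follows automatically.
chat("What was I planning again?", "claude-3-5-sonnet",
     {"x-anthropic-key": "sk-ant-..."})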

Users never notice the switch.
They just experience an AI that remembers them.

More providers coming soon. Which one should we add next?

See our docs: https://www.mnexium.com/docs

Replies

Tetiana

Am I right that I can share the same project between two LLMs?

marius ndini

@tetianai Yes, Mnexium is independent of the model providers. Whether you use ChatGPT, Claude, or any future supported model, your history, memories, agent states, etc. are all interchangeable. For example, look at the code example in this gist.

We tell ChatGPT that the user's favorite color is blue. Then later we ask Claude to make a UI theme, and since Claude can recall that the user's favorite color is blue, it advises accordingly.
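
The gist isn't reproduced here, but roughly it does something like the following sketch (same caveats as the sketch above: the endpoint URL and payload shape are assumptions; only the provider headers and the shared subject_id come from this thread). The actual demo output follows.

import time
import requests

ENDPOINT = "https://www.mnexium.com/api/chat"  # hypothetical URL
SUBJECT = "demo-user-42"  # same subject_id for both providers

def ask(message, model, headers):
    resp = requests.post(ENDPOINT, headers=headers, json={
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "mnx": {"subject_id": SUBJECT},  # shared memory key (shape assumed)
    }, timeout=30)
    resp.raise_for_status()
    return resp.json()

# 1. Tell GPT-4o the user's favorite color...
ask("My favorite color is blue.", "gpt-4o", {"x-openai-key": "sk-..."})

time.sleep(5)  # give the memory layer a moment to store the fact

# 2. ...then ask Claude; it recalls the color via the shared memory.
ask("Suggest a UI theme based on my favorite color.",
    "claude-3-5-sonnet", {"x-anthropic-key": "sk-ant-..."})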

=== Cross-Provider Memory Sharing Demo ===

1. Telling GPT-4o about my favorite color...
   GPT-4o: That's great! Blue is often associated with calmness, tranquility, and trust. Do you have a particular shade of blue you like, or any favorite blue things?

   Waiting for memory to be stored...

2. Asking Claude about my favorite color...
   Claude: Since you've shared that your favorite color is blue, I would suggest using a blue-themed UI color palette for your user interface. Blue can create a calming and trustworthy atmosphere, which often aligns well with user preferences.

Some ideas for a blue-themed UI:

- Use different shades of blue as the primary colors, such as a darker navy blue for headers and menus, and lighter sky blue accents.
- Incorporate complementary colors like white, gray, or light neutrals to balance the blue and create visual hierarchy.
- Consider using pops of a secondary color like yellow or green to add visual interest and highlight important elements.
- Make sure the blue shades have sufficient contrast against text and backgrounds to ensure readability.

The key is to choose a blue palette that is aesthetically pleasing, accessible, and aligned with your personal preferences. Let me know if you'd like me to provide any specific color hex codes or design mockups to get you started.

=== Demo Complete ===
✓ Claude knew the color was blue because the memory was shared!
  Same subject_id = same memories, regardless of which LLM you use.
Tetiana

@marius_ndini Wow, it looks like a rocket, since we're all using different LLMs for the same tasks.

marius ndini

@tetianai Appreciate the comment. We certainly think so, and we hope to fill a space that is currently neglected but highly valuable to LLM and AI builders.

marius ndini

We're also excited to announce that Gemini support is now live. @Mnexium AI now fully supports all three of the big model providers.

Haiqa Irfan

When you add new providers, will users be able to set priority routing rules, like sending certain tasks to specific LLMs automatically?

marius ndini

@haiqa_irfan

Great question! Priority routing is on our roadmap as we add more providers & features.

We're planning several smart routing options:

  • Cost-based routing: route simple queries to cheaper models (GPT-4o-mini, Claude Haiku) and complex ones to premium models (GPT-4o, Claude Opus)

  • Task-based routing: automatically send coding tasks to one model, creative writing to another

  • Fallback chains: if your primary provider fails or is rate-limited, automatically retry with a backup

  • Latency-based routing: route to the fastest available provider for time-sensitive requests

How it might work:

To set expectations: this feature is very much in early-stage ideation.

{
  "mnx": {
    "routing": {
      "primary": "gpt-4o",
      "fallback": ["claude-3-5-sonnet", "gpt-4o-mini"],
      "rules": [
        { "if": "task_type:code", "use": "claude-3-5-sonnet" },
        { "if": "token_estimate:<1000", "use": "gpt-4o-mini" }
      ]
    }
  }
}

Current state: Right now you can manually specify which model to use per request, and we support both OpenAI and Claude. The automatic routing rules are something we're actively designing.

Would love to hear what routing rules would be most valuable for your use case; that feedback helps us prioritize!