
Integrations: bringing live external data into the Mnexium runtime

Mnexium Integrations feel like one of the most important parts of the platform because they solve a different problem than memory. They also round out the platform's feature set; I don't expect additional features would add much more utility.

Memory helps an assistant remember durable user context over time. Integrations let it work with live operational data from external systems right when a response is being generated.

Introducing Cartly: An iOS Receipt Tracking App Built on Mnexium

We just published a new case study on Cartly, an iOS app that uses Mnexium to power a full receipt-tracking AI workflow. We really wanted to see what it would take to get a demo like this up and running.

In the post, we walk through how Cartly uses:

  • Memory for user preferences and continuity

  • Records for structured receipts and receipt_items storage

  • A single mnx runtime object to control identity, history, recall, and record sync

  • Request trace packets for auditability and debugging in production
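To make the "single runtime object" idea concrete, here is an in-memory mock of what such an object could own. The method names and shapes below are our own invention for illustration, not the actual mnx API:

```javascript
// Illustrative in-memory mock of one runtime object that owns identity,
// chat history, memory recall, and record sync (names are hypothetical).
function createRuntime(userId) {
  const history = [];
  const memories = [];
  const records = { receipts: [], receipt_items: [] };
  return {
    userId,
    // Append a chat turn to the resumable history.
    addMessage(role, content) { history.push({ role, content }); },
    // Store a durable memory string.
    remember(fact) { memories.push(fact); },
    // Naive substring recall; a real system would use semantic search.
    recall(query) { return memories.filter(m => m.includes(query)); },
    // Push a structured row into a named record table.
    syncRecord(table, row) { records[table].push(row); },
    getHistory() { return history.slice(); },
  };
}
```

The point of the pattern is that the app holds one handle per user instead of separately wiring a vector store, a chat log, and a database.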

Open Sourcing CORE-MNX: Durable Memory for LLMs

Today we're open-sourcing the core memory engine behind @Mnexium AI: CORE-MNX.

GitHub

Introducing the Mnexium n8n Connector

Why We Built It

Most automation workflows can call a model, but still need substantial glue code for memory, personalization, and structured data. The Mnexium connector makes those capabilities native in n8n.

npm install n8n-nodes-mnexium

Introducing Memory Policies

As our platform continues to grow and capture more of the AI workload, there will always be new features and improvements to make. This is one of them: we've long seen a need to direct and instruct our memory generation layer. That's what Memory Policies offer: the ability to guide Mnexium's memory layer.

Why Memory Policies?

Not every app wants to memorize everything. Some teams need strict extraction rules for compliance, quality, or cost. Others need per-workflow behavior, like high-signal extraction in support chats and minimal extraction in casual chats.
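For illustration, per-workflow behavior like that could be expressed as configuration along these lines. Every field name here is hypothetical, not the Memory Policies API:

```javascript
// Hypothetical per-workflow policy shapes (field names are illustrative only).
const memoryPolicies = {
  support_chat: {
    extraction: "high_signal",   // keep only facts tied to the ticket
    include: ["product", "environment", "reported_issue"],
    exclude: ["small_talk"],
  },
  casual_chat: {
    extraction: "minimal",       // store almost nothing
    include: ["explicit_preferences"],
    exclude: ["*"],
  },
};
```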

Records: Structured Data for AI Applications

Why Records?

Mnexium memories are great for capturing facts, preferences, and context from conversations. But many AI applications also need to manage structured business data: events on a calendar, deals in a pipeline, contacts in a CRM, tasks on a board, inventory items, support tickets.

Until now, you had two choices: build a separate database and API layer for your structured data, or try to shoehorn everything into unstructured memories. Neither is ideal.
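As a sketch of what "structured" buys you over free-text memories, here is a minimal schema-plus-validation example. The schema format and field names are invented for this illustration, not the Records API:

```javascript
// Hypothetical record schema: field name -> expected JavaScript type.
const receiptSchema = {
  merchant: "string",
  total: "number",
  purchased_at: "string", // ISO 8601 date string
};

// Check that every schema field exists on the row with the right type.
function validateRecord(schema, row) {
  return Object.entries(schema).every(
    ([field, type]) => typeof row[field] === type
  );
}
```

Unlike an unstructured memory, a row that fails this check can be rejected at write time instead of silently polluting recall.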

JavaScript and Python SDKs for Mnexium

The Mnexium SDKs give you a complete memory infrastructure as a service. Install the package, pass your LLM provider key, and your AI remembers.

Node (https://www.npmjs.com/package/@m...)

Python (https://pypi.org/project/mnexium/)

🆓 Mnexium Free Tier — Easy API, No Signup

Quick update: we just launched a free tier that requires zero signup.

You can now use Mnexium without creating an account.

Just make an API call with your own OpenAI or Anthropic key, and we auto-provision a trial key for you on the spot.
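Conceptually, that first call looks something like the request built below. The endpoint and header names are placeholders for the sake of illustration, not the real Mnexium API:

```javascript
// Build the first request of the zero-signup flow: you send only your own
// provider key, and a trial key comes back. All names here are hypothetical.
function buildTrialRequest(provider, providerKey) {
  return {
    url: "https://api.mnexium.example/v1/chat", // placeholder endpoint
    method: "POST",
    headers: {
      "X-Provider": provider,                    // e.g. "openai" or "anthropic"
      "Authorization": `Bearer ${providerKey}`,  // your own LLM provider key
      "Content-Type": "application/json",
    },
  };
}
```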

Video demo: How Mnexium adds persistent memory & context to AI applications

This short demo shows how Mnexium works as a memory and context layer for AI apps.

Mnexium sits between your app and the LLM to provide:

  • Persistent memory across sessions

  • Inspectable & resumable chat history

  • Structured user profiles and long-term context

  • Automatic recall and injection, with no prompt juggling

The goal is simple: AI apps that remember users, stay consistent, and feel stateful by default.

Free Drop-in “Chat with X” for your app — NPM package for your site

Hi all - I've built @Mnexium AI, and I thought the fastest way to get folks to try it was to build a chat plug-in for websites. I am providing free keys (whatever usage that may be) to anyone willing to try it.

The plug-in can be found on NPM https://www.npmjs.com/package/@m...

npm install @mnexium/chat

🚀 @mnexium/chat — Drop-In AI Chat for Any Web App

We just shipped @mnexium/chat: a single npm package that adds a polished, production-ready AI chat widget to any website. React, Next.js, Express, or plain HTML: it just works, and most importantly, it remembers.

The Problem

Adding AI chat to a product usually means:

We Built a Live AI Memory Demo — Try It Now

See AI Memory in Action

We just shipped something we're really excited about: a fully interactive demo where you can experience AI with persistent memory, no signup required.

mnexium.com/chat

Memory Decay: AI Memory That Forgets Like Humans Do

Most AI memory systems treat all memories equally. Something mentioned two years ago carries the same weight as yesterday's conversation. That's not how human memory works, and it creates awkward, irrelevant AI responses.

Today we launched Memory Decay, a feature that makes AI memory behave more like human memory. Frequently used memories stay strong. Unused ones naturally fade. The result is more relevant, contextual AI interactions.
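The post doesn't publish a formula, but one common way to model "fades unless used" is exponential decay by age with a boost for frequent use. Everything below is our own illustrative sketch, not Mnexium's actual implementation:

```javascript
// Score a memory in [0, 1]: decays exponentially with time since last use,
// while frequently used memories decay more slowly. (Illustrative only.)
function decayScore(daysSinceLastUse, useCount, halfLifeDays = 30) {
  // After each half-life without use, the base score halves.
  const age = Math.pow(0.5, daysSinceLastUse / halfLifeDays);
  // Logarithmic usage boost, capped so the score never exceeds 1.
  return Math.min(1, age * (1 + 0.25 * Math.log1p(useCount)));
}
```

At recall time, a retriever can multiply semantic relevance by this score, so stale memories lose ranking without being hard-deleted.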

AI Is Learning About You. You Should Own What It Learns

When people talk about AI memory, it's usually framed from the developer's side. How do we store it? How do we retrieve it? How do we keep context alive? This is where @Mnexium AI started as well, since that ecosystem is important.

But the initial vision and goal was very different, and it has yet to be executed on.

What if users owned their memories, not just the app owners?

🧠 Memory Graphs: Visualize How Your AI Remembers

When building AI agents with long-term memory, debugging is a challenge.

You know something was remembered but:
When was it created?
What replaced it?
Why is it being recalled now?

Why was it created as a memory in the first place?
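One way to make those four questions answerable is to attach provenance to every memory node. The node shape below is a hypothetical sketch with field names of our own choosing, not the Memory Graphs API:

```javascript
// Hypothetical memory-graph node: each memory records when it was created,
// which message caused it, what it superseded, and how often it's recalled.
function memoryNode(id, content, { sourceMessageId, replaces = null } = {}) {
  return {
    id,
    content,
    createdAt: new Date().toISOString(), // "when was it created?"
    sourceMessageId,                     // "why was it created?"
    replaces,                            // "what did it replace?"
    recallCount: 0,                      // "why is it being recalled now?"
  };
}

// An updated fact points back at the node it supersedes, forming an edge.
const v1 = memoryNode("m1", "user lives in Berlin", { sourceMessageId: "msg_42" });
const v2 = memoryNode("m2", "user lives in Munich", {
  sourceMessageId: "msg_99",
  replaces: "m1",
});
```

Walking the `replaces` edges backward then reconstructs the full history of a fact for debugging.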

🚀 New Provider: Google Gemini Support is Live!

@Mnexium AI now supports all three major AI providers!

  • OpenAI ChatGPT models

  • Anthropic Claude models

  • Google Gemini models

🧠 AI apps need memory but building it yourself is brutal

Most AI apps eventually hit the same wall: they forget users unless you build a ton of infrastructure first. This means every AI dev eventually ends up building this infra just to provide a good user experience for their agent and app.

What rolling your own really means:

  • Vector DBs + embeddings + tuning

  • Extracting memories from conversations (and resolving conflicts)

  • Designing user profile schemas and keeping them in sync

  • Managing long chat history + summarization pipelines

  • Juggling different formats across OpenAI, Claude, etc.

  • Hosting, scaling, backups, monitoring
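To make just the first bullet concrete, here is the minimal core of vector recall: cosine similarity plus a top-k scan. A real stack replaces the linear scan with an ANN index and adds all the other layers listed above; this is a sketch of the idea only:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Naive top-k recall: score every stored memory against the query vector.
function recall(queryVec, memories, topK = 2) {
  return memories
    .map(m => ({ ...m, score: cosine(queryVec, m.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```

Even this toy version hints at the hidden work: you still need an embedding model to produce `vec`, a store to persist it, and a policy for when to write and expire entries.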

Switch between ChatGPT and Claude — without losing memory or context

We just shipped multi-provider support in @Mnexium AI so you can change LLMs without resetting conversations, user context or memories.

The problem

When teams switch providers, they usually lose everything:

marius ndini

3mo ago

Mnexium — a memory layer so AI apps don’t forget anything

Most AI products eventually hit the same problem: they forget who the user is, and the experience feels generic again.