Mnexium Integrations feel like one of the most important parts of the platform because they solve a different problem than memory does. They also round out the platform's feature set; I don't think additional features would add much more utility.
Memory helps an assistant remember durable user context over time. Integrations let it work with live operational data from external systems right when a response is being generated.
We just published a new case study on Cartly, an iOS app that uses Mnexium to power a full receipt-tracking AI workflow. We really wanted to see what it would take to get a demo like this up and running.
In the post, we walk through how Cartly uses:
Memory for user preferences and continuity
Records for structured receipts and receipt_items storage
A single mnx runtime object to control identity, history, recall, and record sync
Request trace packets for auditability and debugging in production
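To make the "single runtime object" idea concrete, here is a minimal sketch of what a runtime like `mnx` could look like. This is an illustration only: the interface, the `createMnx` factory, and the method names (`appendHistory`, `recall`, `syncRecord`) are assumptions for the sake of the example, not the actual Mnexium SDK surface.

```typescript
// Hypothetical sketch -- the real Mnexium SDK surface may differ.
interface MemoryHit { text: string; score: number }

interface MnxRuntime {
  userId: string;
  appendHistory(role: "user" | "assistant", text: string): void;
  recall(query: string): MemoryHit[];
  syncRecord(table: string, row: Record<string, unknown>): void;
}

// Minimal in-memory stand-in, used only to illustrate the flow:
// one object carries identity, history, recall, and record sync.
function createMnx(userId: string): MnxRuntime {
  const history: { role: string; text: string }[] = [];
  const records: Record<string, Record<string, unknown>[]> = {};
  const memories = ["prefers itemized receipts", "shops weekly at the same store"];
  return {
    userId,
    appendHistory: (role, text) => { history.push({ role, text }); },
    recall: (query) =>
      memories
        .map((text) => ({ text, score: text.includes(query) ? 1 : 0 }))
        .filter((m) => m.score > 0),
    syncRecord: (table, row) => { (records[table] ??= []).push(row); },
  };
}

const mnx = createMnx("user_123");
mnx.appendHistory("user", "Here's my grocery receipt");
mnx.syncRecord("receipts", { total: 42.5, store: "Acme" });
```

The point of the pattern is that the app never juggles separate clients for memory, history, and structured data: everything hangs off one per-user handle.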
Most automation workflows can call a model, but still need substantial glue code for memory, personalization, and structured data. The Mnexium connector makes those capabilities native in n8n.
As our platform continues to grow and capture more of the AI workload, there will always be new features and improvements to make. This is one of them: we've long seen a need in the platform to direct and instruct our memory-generation layer. That is what Memory Policies offer: the ability to guide Mnexium's memory layer.
Why Memory Policies?
Not every app wants to memorize everything. Some teams need strict extraction rules for compliance, quality, or cost. Others need per-workflow behavior, like high-signal extraction in support chats and minimal extraction in casual chats.
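To show what per-workflow behavior might look like in practice, here is a sketch of a policy applied before any candidate memory is stored. The `MemoryPolicy` shape, the `applyPolicy` function, and the field names are all assumptions invented for this example; Mnexium's actual policy schema may look quite different.

```typescript
// Hypothetical policy shape -- not the actual Mnexium schema.
interface MemoryPolicy {
  mode: "high_signal" | "minimal"; // per-workflow behavior
  blockedTopics: string[];         // compliance: never extract these
  minConfidence: number;           // cost/quality: drop weak candidates
}

interface Candidate { text: string; topic: string; confidence: number }

// Filter candidate memories through the policy before storing anything.
function applyPolicy(policy: MemoryPolicy, candidates: Candidate[]): Candidate[] {
  // "minimal" mode raises the bar so only very strong signals survive.
  const floor = policy.mode === "minimal"
    ? Math.max(policy.minConfidence, 0.9)
    : policy.minConfidence;
  return candidates.filter(
    (c) => c.confidence >= floor && !policy.blockedTopics.includes(c.topic)
  );
}

const supportPolicy: MemoryPolicy = {
  mode: "high_signal",
  blockedTopics: ["payment_card"],
  minConfidence: 0.6,
};

const kept = applyPolicy(supportPolicy, [
  { text: "user prefers email follow-ups", topic: "preferences", confidence: 0.8 },
  { text: "card ending in 4242", topic: "payment_card", confidence: 0.99 },
]);
// kept contains only the preference memory; the blocked topic never lands in storage
```

A support workflow and a casual-chat workflow would simply carry different policy objects, with no change to the extraction code itself.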
Mnexium memories are great for capturing facts, preferences, and context from conversations. But many AI applications also need to manage structured business data: events on a calendar, deals in a pipeline, contacts in a CRM, tasks on a board, inventory items, support tickets.
Until now, you had two choices: build a separate database and API layer for your structured data, or try to shoehorn everything into unstructured memories. Neither is ideal.
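The tradeoff is easy to see in miniature. This sketch (toy types and data, invented for illustration) contrasts the same facts held as a typed record versus flattened into free-text memories:

```typescript
// Hypothetical example of the structured-vs-unstructured tradeoff.
interface Deal { id: string; stage: "lead" | "won" | "lost"; amount: number }

const deals: Deal[] = [
  { id: "d1", stage: "won", amount: 1200 },
  { id: "d2", stage: "lead", amount: 300 },
];

// Structured records support exact filters and aggregates...
const wonTotal = deals
  .filter((d) => d.stage === "won")
  .reduce((sum, d) => sum + d.amount, 0);

// ...while the same facts shoehorned into memories only support fuzzy recall,
// and a question like "what's our won revenue?" has no reliable answer.
const memories = ["deal d1 was won for $1200", "deal d2 is a lead worth $300"];
```

Neither store replaces the other, which is the gap a records layer alongside memories is meant to close.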
Hi all - I've built @Mnexium AI, and I thought the fastest way to get folks to try it was to build a chat plug-in for websites. I am providing free keys (however much usage that turns out to be) to anyone who is willing to try it.
The plug-in can be found on NPM https://www.npmjs.com/package/@m...
We just shipped @mnexium/chat: a single npm package that adds a polished, production-ready AI chat widget to any website. React, Next.js, Express, or plain HTML: it just works, and most importantly, it remembers.
Most AI memory systems treat all memories equally. Something mentioned two years ago carries the same weight as yesterday's conversation. That's not how human memory works and it creates awkward, irrelevant AI responses.
Today we launched Memory Decay, a feature that makes AI memory behave more like human memory. Frequently used memories stay strong. Unused ones naturally fade. The result is more relevant, contextual AI interactions.
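Mnexium hasn't published its decay function, but the behavior described above — frequently used memories stay strong, unused ones fade — can be sketched with a simple exponential half-life. The constant and function below are assumptions for illustration only:

```typescript
// Hypothetical decay model: an exponential half-life over days since
// the memory was last accessed. The real Mnexium formula may differ.
const HALF_LIFE_DAYS = 30; // assumed: strength halves every 30 idle days

function decayedStrength(baseStrength: number, daysSinceLastAccess: number): number {
  return baseStrength * Math.pow(0.5, daysSinceLastAccess / HALF_LIFE_DAYS);
}

// A memory touched yesterday stays near full strength...
const recent = decayedStrength(1.0, 1);

// ...while one untouched for a year has effectively faded out.
const stale = decayedStrength(1.0, 365);
```

Reinforcement falls out naturally: each access resets `daysSinceLastAccess` to zero, so memories the user keeps relying on never fade, while stale ones sink below any recall threshold on their own.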
When people talk about AI memory, it's usually framed from the developer's side: How do we store it? How do we retrieve it? How do we keep context alive? This is where @Mnexium AI started as well, since that ecosystem is important.
But the initial vision and goal was very different, and it has yet to be executed on.
What if users owned their memories, not just the app owners?
Most AI apps eventually hit the same wall: they forget users unless you build a ton of infrastructure first. That means every AI dev eventually ends up building this infra to deliver the user experience their agent and app need.
What rolling your own really means:
Vector DBs + embeddings + tuning
Extracting memories from conversations (and resolving conflicts)
Designing user profile schemas and keeping them in sync
Managing long chat history + summarization pipelines
Juggling different formats across OpenAI, Claude, etc.
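Even one slice of that list is real work. Here is a toy sketch of just the vector-recall piece — hand-written 3-d vectors stand in for a real embedding model, and the `recall` helper is invented for illustration — before you've touched extraction, conflict resolution, profiles, or summarization:

```typescript
// Toy sketch of one "roll your own" slice: similarity-ranked recall.
// Real systems would use an embedding model and a vector DB instead.
type Vec = number[];

// Cosine similarity between two vectors of equal length.
function cosine(a: Vec, b: Vec): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: Vec) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Stored memories with pretend embeddings.
const store: { text: string; vec: Vec }[] = [
  { text: "user is vegetarian", vec: [1, 0, 0] },
  { text: "user lives in Berlin", vec: [0, 1, 0] },
];

// Recall = rank stored memories by similarity to the query embedding.
function recall(queryVec: Vec, k: number) {
  return [...store]
    .sort((a, b) => cosine(b.vec, queryVec) - cosine(a.vec, queryVec))
    .slice(0, k);
}
```

And this is the easy part: the embeddings here are fake, there's no persistence, no conflict handling, and no per-model prompt formatting — exactly the glue code the list above is describing.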