Records: Structured Data for AI Applications
Why Records?
Mnexium memories are great for capturing facts, preferences, and context from conversations. But many AI applications also need to manage structured business data — events on a calendar, deals in a pipeline, contacts in a CRM, tasks on a board, inventory items, support tickets.
Until now, you had two choices: build a separate database and API layer for your structured data, or try to shoehorn everything into unstructured memories. Neither is ideal.
Records give you a schema-driven, queryable, semantically-searchable data layer that lives right alongside your memories — and your AI can create and update records automatically from conversations, no extra code required.
How It Works
Records are built on three layers: Schemas define the shape of your data, the CRUD API lets you manage records programmatically, and the AI extraction pipeline lets your AI create and update records automatically from natural language.
1. Define a Schema
A schema tells Mnexium what fields a record type has, which are required, and how to display them. Schemas are defined per-project and can be updated at any time.
// JavaScript SDK
await mnx.records.defineSchema('events', {
title: { type: 'string', required: true, description: 'Event title' },
date: { type: 'string', required: true, description: 'Date (YYYY-MM-DD)' },
time: { type: 'string', description: 'Time (HH:MM)' },
location: { type: 'string', description: 'Location or address' },
description: { type: 'string', description: 'Additional details' },
}, {
displayName: 'Events',
description: 'Calendar events and appointments',
});
Schemas support string, number, and reference types. Required fields are validated on insert.
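The insert-time validation mentioned above can be sketched as a small helper. This is illustrative only, not the actual server-side implementation; the function name and error format are hypothetical:

```javascript
// Sketch of required-field and type validation on insert
// (illustrative; the real validation happens server-side in Mnexium).
function validateRecord(schema, record) {
  const errors = [];
  for (const [field, def] of Object.entries(schema)) {
    const value = record[field];
    if (def.required && (value === undefined || value === null)) {
      errors.push(`missing required field: ${field}`);
    } else if (value !== undefined && def.type === 'string' && typeof value !== 'string') {
      errors.push(`field ${field} must be a string`);
    } else if (value !== undefined && def.type === 'number' && typeof value !== 'number') {
      errors.push(`field ${field} must be a number`);
    }
  }
  return errors;
}
```

Calling this with the events schema and a record that omits title would return one error for the missing required field.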
# Python SDK
mnx.records.define_schema('events', {
'title': { 'type': 'string', 'required': True, 'description': 'Event title' },
'date': { 'type': 'string', 'required': True, 'description': 'Date (YYYY-MM-DD)' },
'time': { 'type': 'string', 'description': 'Time (HH:MM)' },
'location': { 'type': 'string', 'description': 'Location or address' },
'description': { 'type': 'string', 'description': 'Additional details' },
}, display_name='Events', description='Calendar events and appointments')
Or via the REST API directly:
POST /api/v1/records/schemas
Authorization: Bearer mnx_...
{
"type_name": "events",
"display_name": "Events",
"description": "Calendar events and appointments",
"fields": {
"title": { "type": "string", "required": true },
"date": { "type": "string", "required": true },
"time": { "type": "string" },
"location": { "type": "string" },
"description": { "type": "string" }
}
}
2. Create and Update Records
Each record gets a unique record_id, an auto-generated embedding for semantic search, and a human-readable summary. Updates are partial merges — send only the fields you want to change.
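The partial-merge behavior can be illustrated with plain objects. This is a sketch of the semantics only, not the server implementation:

```javascript
// Updates are partial merges: fields present in the patch overwrite,
// all other fields are preserved (sketch of the semantics).
function mergeUpdate(existing, patch) {
  return { ...existing, ...patch };
}

const before = {
  title: 'Doctor appointment',
  date: '2026-03-15',
  time: '10:00',
  location: 'Philadelphia',
};
const after = mergeUpdate(before, { time: '14:00', location: 'New York' });
// after.time is '14:00'; after.title and after.date are unchanged
```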
// Create a record
const event = await mnx.records.insert('events', {
title: 'Doctor appointment',
date: '2026-03-15',
time: '10:00',
location: 'Philadelphia',
});
// event.record_id → "rec_2fe18b47-..."
await mnx.records.update('events', event.record_id, {
time: '14:00',
location: 'New York',
});
# Python
event = mnx.records.insert('events', {
'title': 'Doctor appointment',
'date': '2026-03-15',
'time': '10:00',
'location': 'Philadelphia',
})
mnx.records.update('events', event.record_id, {
'time': '14:00',
'location': 'New York',
})
3. Query and Search
Records support two retrieval modes: structured queries with JSONB filters, ordering, and pagination — and semantic search powered by pgvector embeddings. Use whichever fits your use case, or combine them.
// Structured query with JSONB filters
const upcoming = await mnx.records.query('events', {
where: { location: 'Philadelphia' },
orderBy: 'date',
limit: 10,
});
// Semantic search — natural language
const results = await mnx.records.search('events', 'medical appointments next month');
// Returns records ranked by similarity score
Queries use Postgres JSONB containment operators. Semantic search uses cosine similarity on pgvector embeddings.
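As a rough sketch of the two retrieval modes, a where-filter behaves like JSONB containment (every filter key must match the record's data), and semantic ranking is cosine similarity between embedding vectors. These in-memory stand-ins are illustrative, not the actual query path:

```javascript
// In-memory stand-in for JSONB containment: every key in the
// filter must match the corresponding value in the record's data.
function matchesWhere(data, where) {
  return Object.entries(where).every(([key, value]) => data[key] === value);
}

// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A structured query filters with `matchesWhere` and sorts by a field; semantic search ranks all candidates by `cosineSimilarity` against the query embedding.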
4. AI-Powered Record Extraction
This is where Records get powerful. When you enable record extraction on your chat endpoint, Mnexium automatically analyzes each user message and determines if a record should be created or updated — without any extra code on your side.
Here's how it works under the hood:
Schema introspection — The pipeline loads all schemas for your project and dynamically generates LLM tool definitions (create_events, update_events, search_records)
Context injection — Any previously recalled records are included in the prompt so the LLM knows what already exists and can reference record_ids for updates
Tool-calling — The LLM decides whether to create, update, search, or do nothing. It can issue multiple tool calls in a single pass
Execution — Tool calls are executed against Postgres with full validation, embedding regeneration, and activity logging
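The schema-introspection step can be sketched as a function that turns a record schema into an LLM tool definition. The shape below follows the common JSON-Schema function-calling format; the helper name and exact output are hypothetical, not the pipeline's real internals:

```javascript
// Sketch: derive a create_<type> tool definition from a record schema
// (hypothetical; the real pipeline also generates update and search tools).
function schemaToCreateTool(typeName, fields) {
  const properties = {};
  const required = [];
  for (const [name, def] of Object.entries(fields)) {
    properties[name] = {
      type: def.type === 'number' ? 'number' : 'string',
      description: def.description || '',
    };
    if (def.required) required.push(name);
  }
  return {
    name: `create_${typeName}`,
    description: `Create a new ${typeName} record`,
    parameters: { type: 'object', properties, required },
  };
}
```

Running this over the events schema would yield a create_events tool whose required parameters are title and date, which is what lets the LLM emit well-formed tool calls for any schema you define.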
User says: "I have a doctor's appointment March 15th at 10am in Philly"
→ AI automatically calls create_events with:
{ title: "Doctor's appointment", date: "2026-03-15",
time: "10:00", location: "Philadelphia" }
User says: "Actually, move that to 2pm"
→ AI automatically calls update_events with:
{ record_id: "rec_2fe18b47-...", time: "14:00" }
No code changes needed — just enable records on your chat endpoint.
The extraction pipeline runs as a fire-and-forget background task — it never blocks the chat response.
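The fire-and-forget pattern can be sketched as follows. The handler and event log below are illustrative stand-ins, not Mnexium's actual internals: the key point is that extraction is deliberately not awaited, so the chat reply returns immediately and extraction errors never surface to the user:

```javascript
const events = [];

// Simulated slow extraction work (stands in for LLM tool-calling).
async function runExtraction(message) {
  await new Promise((resolve) => setTimeout(resolve, 50));
  events.push('extraction-finished');
}

async function handleChat(message) {
  // Deliberately NOT awaited: extraction runs in the background.
  runExtraction(message).catch((err) => {
    // Log and swallow: a failed extraction must not affect the reply.
    console.error('extraction failed:', err);
  });
  events.push('reply-sent');
  return { reply: `You said: ${message}` };
}
```

Calling handleChat records 'reply-sent' before 'extraction-finished', showing that the response is never blocked by the background task.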
5. Access Control
Records have built-in ownership and visibility controls. Every record has an owner_id, a visibility setting (public or private), and an optional collaborators list.
Write access (update, delete) is restricted to the owner and collaborators. Read access respects visibility — private records are only visible to the owner and collaborators. System actors (like the AI extraction pipeline) bypass ownership checks, so the AI can always manage records on behalf of users.
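The read and write rules above can be sketched as two predicate functions. This is a simplification of the server-side checks, with hypothetical names:

```javascript
// Sketch of the access-control rules (simplified; the real checks run server-side).
function canWrite(record, subjectId, isSystemActor = false) {
  if (isSystemActor) return true; // the AI pipeline bypasses ownership checks
  return record.owner_id === subjectId ||
    (record.collaborators || []).includes(subjectId);
}

function canRead(record, subjectId, isSystemActor = false) {
  if (isSystemActor || record.visibility === 'public') return true;
  return canWrite(record, subjectId); // private: owner and collaborators only
}
```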
// Create a private record with collaborators
const deal = await mnx.records.insert('deals', {
title: 'Acme Renewal',
value: 500000,
stage: 'negotiation',
}, {
ownerId: 'user_alice',
visibility: 'private',
collaborators: ['user_bob', 'user_carol'],
});
// Only alice, bob, and carol can see or update this record
// Pass x-subject-id header to identify the caller
6. Cross-References with ref: Types
Schema fields can reference other record types using the ref: prefix. This creates typed foreign-key relationships between records that the system can follow automatically.
// Define a deals schema that references accounts
await mnx.records.defineSchema('deals', {
title: { type: 'string', required: true },
value: { type: 'number' },
stage: { type: 'string' },
account_id: { type: 'ref:accounts', description: 'Linked account' },
});
// When a deal is loaded, related account records
// can be resolved automatically
Reference fields create navigable relationships between record types — like a lightweight relational model.
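Resolving a ref: field can be sketched with an in-memory stand-in. The function name and store shape below are hypothetical; actual resolution happens inside Mnexium:

```javascript
// Sketch: follow ref:<type> fields to their target records.
// `store` maps type name -> record_id -> record data.
function resolveRefs(schema, record, store) {
  const resolved = { ...record };
  for (const [field, def] of Object.entries(schema)) {
    if (def.type && def.type.startsWith('ref:')) {
      const targetType = def.type.slice('ref:'.length);
      const refId = record[field];
      if (refId) {
        resolved[field] = (store[targetType] || {})[refId] || null;
      }
    }
  }
  return resolved;
}
```

Given a deal whose account_id holds an accounts record id, resolution replaces the id with the linked account's data while leaving all other fields untouched.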


