Launching today

Tenure
Local AI memory that knows what you chose and why
13 followers
Most LLM memory systems store facts. Tenure stores what to do with them. Every belief carries a why_it_matters field that converts observations into instructions the model acts on directly, with no additional inference required. Precision retrieval over similarity search. Fully local and encrypted at rest; nothing leaves localhost. Every belief is visible, editable, and correctable; the model isn't building a hidden profile. Retrieval claims are documented in a reproducible arXiv paper.
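The why_it_matters mechanism can be sketched as a belief record whose stored field doubles as a ready-made instruction. Field names here are illustrative assumptions, not Tenure's actual schema:

```python
# Hypothetical belief record: field names are assumptions for
# illustration, not Tenure's real schema.
belief = {
    "id": "belief-042",
    "observation": "User chose Postgres over MySQL for the billing service",
    "why_it_matters": (
        "When proposing database changes, default to Postgres-compatible "
        "solutions; do not suggest MySQL-specific features."
    ),
}

def to_instruction(belief: dict) -> str:
    """Turn a stored belief directly into a prompt instruction: the
    why_it_matters text is already actionable, so no extra inference
    step is needed at retrieval time."""
    return f"- {belief['why_it_matters']}"
```

Because the field is written as an instruction at storage time, retrieval is a lookup-and-paste, not a summarization pass.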
Products used by Tenure
Explore the tech stack and tools that power Tenure. See what products Tenure uses for development, design, marketing, analytics, and more.
Engineering & Development 1

MongoDB: The database for modern applications.
5.0 (61 reviews)
Atlas Search gave us full-text search with fuzzy matching, prefix guards, and scoring in the same database as the belief store. No separate search infrastructure, no sync lag, no operational overhead. For a local-first tool that ships as a single Docker Compose file, adding a separate Elasticsearch or Typesense container would have undermined the zero-configuration promise. MongoDB let us keep the entire stack: persistence, encryption, and search all in one place.
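An Atlas Search query with the features mentioned above (fuzzy matching with a prefix guard, scoring in the same pipeline) might look like the following aggregation. The index name and field path are assumptions, not Tenure's actual configuration:

```python
# Sketch of an Atlas Search aggregation pipeline over belief content.
# Index name "beliefs_search" and field path "content" are assumed.
def build_belief_search(query: str, limit: int = 10) -> list:
    return [
        {
            "$search": {
                "index": "beliefs_search",
                "text": {
                    "query": query,
                    "path": "content",
                    # Allow one typo, but require the first 3 characters
                    # to match exactly (the "prefix guard").
                    "fuzzy": {"maxEdits": 1, "prefixLength": 3},
                },
            }
        },
        {"$limit": limit},
        # Relevance score comes back from the same stage, so no
        # separate search service is needed.
        {"$project": {"content": 1, "score": {"$meta": "searchScore"}}},
    ]

pipeline = build_belief_search("docker compose")
```

The pipeline would be passed to `collection.aggregate(...)` against an Atlas cluster with the search index defined.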
Client-Side Field Level Encryption was the other deciding factor. Belief content is sensitive: it's a structured model of how someone thinks and works. CSFLE let us encrypt belief content at the field level before it reaches the database, which means that even if someone gets access to the raw database files, they can't read the beliefs. That's a hard requirement for a privacy-first local tool, and MongoDB was the most straightforward path to it without building custom encryption middleware.
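Automatic CSFLE is typically driven by a schema map telling the driver which fields to encrypt before they leave the client. A minimal sketch for a beliefs collection follows; the namespace, key reference, and field choices are placeholder assumptions, not Tenure's actual setup:

```python
# Sketch of a CSFLE schema map. The "tenure.beliefs" namespace and the
# key reference are placeholder assumptions.
RANDOM = "AEAD_AES_256_CBC_HMAC_SHA_512-Random"

schema_map = {
    "tenure.beliefs": {
        "bsonType": "object",
        "properties": {
            # Randomized encryption is the stronger option for free-text
            # content; the tradeoff is that the field can't be queried
            # by equality, which is fine since search runs over a
            # separately indexed representation.
            "content": {
                "encrypt": {
                    "bsonType": "string",
                    "algorithm": RANDOM,
                    "keyId": "/keyAltName",  # placeholder key reference
                }
            },
        },
    }
}
```

With a map like this passed to the driver's auto-encryption options, the raw database files only ever contain ciphertext for the marked fields.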
LLMs 1

Claude by Anthropic: A family of foundational AI models.
5.0 (788 reviews)
Structured output reliability. Belief extraction requires the model to write back a precise JSON sidecar on every turn without fail. Smaller or less capable models produce inconsistent structured output at scale; one malformed extraction corrupts the belief store. Claude's instruction-following at the structured output level was simply more reliable in testing than the alternatives we evaluated.
