Radoslav Tsvetkov

Memora - Stop AI from hallucinating your own notes

Memora indexes your Obsidian vault into a claim graph. AI tools cite claim IDs, and a validator re-reads the source spans before output ships, so hallucinated citations get rejected. It also surfaces decisions, contradictions, and stale dependencies across notes. Open source.

Radoslav Tsvetkov
Hey everyone, I built Memora because I got tired of AI note tools fabricating things from my own notes.

The problem: chunk-based RAG retrieves text and trusts the model to quote it correctly. When the model fabricates a meeting, puts words in someone's mouth, or cites a paragraph that doesn't exist, you have no defense.

Memora's approach: extract atomic claims from your markdown with byte-level pointers and blake3 fingerprints. When an LLM cites a claim ID, a validator re-reads the source span and rejects mismatches before the answer ships. The hallucination contract is enforced in Rust types, not prompts.

Beyond verification, the claim graph surfaces patterns chunk-RAG can't: decisions you made three months ago, contradictions between meeting notes, plans that depend on reversed decisions.

Single Rust binary. Works with Claude Desktop over MCP. Apache-2.0. Honest about what's not yet there (mobile, PDFs, GUI).

Would love feedback from anyone running their notes through this.
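The validation step described above can be sketched roughly like this. This is a hypothetical illustration, not Memora's actual code: the struct names, the `validate` function, and its return type are invented here, and the standard library's `DefaultHasher` stands in for blake3 so the example is self-contained.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Stand-in fingerprint; the real project uses blake3.
fn fingerprint(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// A claim points at a byte range in the source file and remembers
// the fingerprint of that span as it looked at indexing time.
struct Claim {
    span: (usize, usize), // byte offsets into the source
    fp: u64,
}

#[derive(Debug)]
enum Verdict {
    Valid,        // the cited span still matches its fingerprint
    UnknownClaim, // the LLM cited an ID that was never indexed
    SpanChanged,  // the source drifted since indexing
}

// Before an answer ships, re-read the cited span and compare
// fingerprints; any mismatch rejects the citation.
fn validate(claims: &HashMap<String, Claim>, id: &str, source: &[u8]) -> Verdict {
    let Some(claim) = claims.get(id) else {
        return Verdict::UnknownClaim;
    };
    let (start, end) = claim.span;
    match source.get(start..end) {
        Some(span) if fingerprint(span) == claim.fp => Verdict::Valid,
        _ => Verdict::SpanChanged,
    }
}

fn main() {
    let source = b"Decision: ship v2 on Friday.";
    let mut claims = HashMap::new();
    claims.insert(
        "c1".to_string(),
        Claim { span: (0, source.len()), fp: fingerprint(source) },
    );
    // A citation whose span still matches passes; a fabricated ID is rejected.
    assert!(matches!(validate(&claims, "c1", source), Verdict::Valid));
    assert!(matches!(validate(&claims, "ghost", source), Verdict::UnknownClaim));
    println!("validator ok");
}
```

The key design point is that the check happens against the bytes on disk at answer time, so a model cannot pass validation by paraphrasing or inventing a span; only an exact match with the indexed fingerprint gets through.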