LLM Memory with RAG... what's your take?
Trying to solve a problem: LLMs forget everything between conversations.
I keep re-researching the same API docs, competitor info, and company knowledge.
Tokens add up... Time wasted...
What are people actually using?
I see a few approaches:
RAG setups
Vector databases
Just saving to Notion/Obsidian manually
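For anyone unfamiliar with the first two, the core idea is the same: embed your notes once, then retrieve the closest ones to stuff into the next prompt. Here's a minimal, self-contained sketch of that retrieval loop — the toy bag-of-words embedding stands in for a real embeddings model, and the example notes are made up:

```python
# Toy sketch of the RAG / vector-store approach: store notes with an
# embedding, then rank them by cosine similarity against a query.
# A real setup would swap embed() for an embeddings API and Collection
# for an actual vector database.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words "embedding" — placeholder for a real embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Collection:
    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def save(self, text: str) -> None:
        # Embed once at save time so queries are cheap later.
        self.docs.append((text, embed(text)))

    def query(self, question: str, k: int = 2) -> list[str]:
        q = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

notes = Collection()
notes.save("The billing API rate limit is 100 requests per minute.")
notes.save("Competitor X launched a self-serve tier in March.")
notes.save("Our retention cohort data lives in the analytics warehouse.")

# Retrieve the most relevant note to prepend to the next LLM prompt.
context = notes.query("what is the API rate limit?", k=1)
```

The Notion/Obsidian option is the same loop done by hand: you do the "retrieval" yourself by searching your notes before each session.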
We built something that lets Claude save research to collections, then query them later.
Made a dedicated page for this:
How to persist your LLM's research.
But genuinely curious... what's working in production?
Are people building custom RAG pipelines? Using existing tools? Just living with the amnesia? What does your stack look like?