Agentmemory - Persistent memory for Claude Code, Codex & coding agents
You can now give Hermes, Claude Code, and Codex infinite memory.
Agentmemory is trending on GitHub with 5,000+ Stars.
CLAUDE.md dumps 22,000+ tokens into context at 240 observations.
agentmemory: 1,900 tokens for the same observations. 92% less.
At 1,000 observations, 80% of your built-in memories become invisible. agentmemory keeps 100% searchable.
benchmarked on 240 real coding sessions
→ Up to 95% fewer tokens per session
→ 200x more tool calls before hitting context limits
→ 100% open source

Replies
Agentmemory
Hey Product Hunt 👋
I built AgentMemory because coding agents still have one painful limitation: they forget between sessions.
You explain your architecture once.
You debug a production issue once.
You decide on a library or pattern once.
Then the next session starts from zero again.
AgentMemory gives AI coding agents persistent memory across sessions, so they can actually build on what they’ve already learned about your codebase. It automatically captures what your agent does, compresses it into structured memories, indexes them with hybrid search, and injects the right context back into future sessions.
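The capture → compress → index → inject loop described above can be sketched in miniature. This is an illustrative toy, not AgentMemory's actual code: the class and method names (`ToyMemory`, `capture`, `recall`) are invented, and a naive keyword match stands in for real hybrid search and summarization.

```python
# Toy sketch of persistent agent memory across sessions (NOT AgentMemory's
# implementation): capture observations, persist them, recall relevant ones.
import sqlite3

class ToyMemory:
    def __init__(self, path=":memory:"):
        # a real setup would point at a file so memory survives restarts
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories (id INTEGER PRIMARY KEY, text TEXT)"
        )

    def capture(self, observation: str):
        # the "compress" step is omitted; a real system would summarize first
        self.db.execute("INSERT INTO memories (text) VALUES (?)", (observation,))
        self.db.commit()

    def recall(self, query: str, k: int = 3):
        # naive keyword overlap standing in for hybrid BM25 + vector search
        rows = self.db.execute("SELECT text FROM memories").fetchall()
        scored = [(sum(w in t.lower() for w in query.lower().split()), t)
                  for (t,) in rows]
        return [t for s, t in sorted(scored, reverse=True)[:k] if s > 0]

mem = ToyMemory()
mem.capture("We chose Postgres over MongoDB for the billing service.")
mem.capture("The flaky test in auth.spec.ts is caused by a shared fixture.")
print(mem.recall("which database for billing"))
```

The point of the sketch: the agent never re-reads whole transcripts; it stores compact observations once and pulls back only the few that match the current task.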
It works with Claude Code, Cursor, Codex CLI, Gemini CLI, Windsurf, Kilo Code, OpenCode, Cline, Roo, Goose, Aider, Hermes, OpenClaw, and basically any MCP or REST-capable agent.
From day one, I wanted it to be:
100% open source
Free to run locally
No external database required
Works via MCP, REST, and simple hooks
Built for real coding workflows, not toy “chat history” memory
On benchmarks, AgentMemory gets 95.2% R@5 and 98.6% R@10 on the LongMemEval-S retrieval suite using BM25 + vector search, while cutting context usage by around 92%.
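For readers unfamiliar with hybrid retrieval, here is a minimal sketch of the BM25 + vector idea: score each document lexically with BM25 and semantically with a similarity measure (a bag-of-words cosine here, where a real system would use embeddings), then blend the two. All names, weights, and documents below are illustrative, not AgentMemory's implementation.

```python
# Minimal hybrid retrieval sketch: blend BM25 (lexical) with cosine
# similarity (semantic stand-in). Illustrative only.
import math
from collections import Counter

docs = [
    "use redis for the rate limiter cache",
    "postgres schema migration for billing",
    "react hooks pattern for the dashboard",
]
tokenized = [d.split() for d in docs]
N = len(docs)
avgdl = sum(len(t) for t in tokenized) / N
df = Counter(w for t in tokenized for w in set(t))  # document frequency

def bm25(query, doc, k1=1.5, b=0.75):
    tf = Counter(doc)
    score = 0.0
    for w in query.split():
        if w not in tf:
            continue
        idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
        score += idf * tf[w] * (k1 + 1) / (
            tf[w] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

def cosine(query, doc):
    # bag-of-words cosine; real systems use dense embedding vectors here
    q, d = Counter(query.split()), Counter(doc)
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def hybrid(query, alpha=0.5):
    # blend the two signals; alpha is an arbitrary illustrative weight
    scored = [(alpha * bm25(query, t) + (1 - alpha) * cosine(query, t), d)
              for t, d in zip(tokenized, docs)]
    return max(scored)[1]

print(hybrid("billing migration"))
```

The blend matters because BM25 rewards exact term matches while the vector side catches paraphrases; combining them is what lifts recall numbers like the R@5/R@10 figures quoted above.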
Quick start:
Open: http://localhost:3113
Or try the demo: npx @agentmemory/agentmemory demo
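For MCP-capable agents such as Claude Code, registration typically happens through an `.mcp.json` (or equivalent) server entry. The server command and `"mcp"` argument below are assumptions for illustration; check the repo README for the actual invocation.

```json
{
  "mcpServers": {
    "agentmemory": {
      "command": "npx",
      "args": ["@agentmemory/agentmemory", "mcp"]
    }
  }
}
```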
If you live in your coding agents every day, this is for the moment you think: “Wait, I already explained this yesterday.”
Would love feedback from builders, heavy agent users, and open‑source maintainers.
GitHub: https://github.com/rohitg00/agentmemory
@agentmemory @rohit_ghumare How do you think about secrets or sensitive data accidentally entering memory? Is there filtering/redaction built in, or do you recommend teams handle that at the hook/integration layer?
OpenHuman
Well done @rohit_ghumare! I'd love to know what business model you intend to pursue. It looks like everything is free and open source. Just wondering: will you keep this as a hobby project, build it seriously, or something else?
Agentmemory
OpenHuman
@rohit_ghumare mashallah, i wish you guys more success :D :D we're going to natively integrate this into OH
Agentmemory
Pieces for Developers
Wonderful project. I've already used it locally with Claude Code and it provides an amazing developer experience. Absolutely love the underlying architecture powered by iii: very scalable, very efficient, and hands down the best memory solution out there.
Agentmemory
Congrats on the launch.
Two questions:
Will this increase token usage, since the agent needs to look around and search newer chats?
Will the memory be persistent only in CLI agents, or also in their desktop applications like Codex, Claude, and Cursor?
Agentmemory
@rohit_ghumare that’s nice, what about #2?
Does it work only on CLIs?
Well done team! How do you detect when a stored memory contradicts current code state or is pruning still manual?
Agentmemory
Persistent memory across sessions is one of those things that sounds like a dev tool problem but actually changes how useful AI agents are in practice. Right now every session with Claude Code starts from scratch — re-explaining context, re-loading preferences. Curious how Agentmemory handles conflicts when the same context gets updated across sessions. Does it merge, overwrite, or flag it for review?
Agentmemory
The cross-session forgetting problem is real. The deeper one you'll hit at scale: when an agent makes a wrong call in week 4 because it remembered a misleading decision from week 1, where does ownership of that mistake sit? Two questions worth thinking about: 1. Can memory be exported in an open format so agents move with their user, not their runtime? 2. Is there a way to mark a memory entry as disputed or superseded? Without those, an agent's persistent memory becomes a liability dressed as a feature.
Agentmemory
Really interesting. Can it pick up past sessions, or does it start only once I integrate? On another side note, is there a way to not use an agentic DB and use, say, Postgres instead?
Great traction! I'll give it a try on my current project and see if it brings down hallucination. I like the graph view, so you can easily see what's going on.
Just wondering, how long did this take to make? The database side is very interesting, and I think it has a lot of potential for many other things. Good luck!
Agentmemory
Persistent memory for coding agents is a harder problem than it sounds. You're not just storing conversation history, you're storing codebase context, decisions made, patterns established. The benchmark claim is what I'd want to dig into. Memory that's fast to write is useless if retrieval is noisy. How does it handle context that's become stale after a refactor?
92% token reduction is huge if it holds on real codebases. Curious how agentmemory handles conflicting observations: when newer context contradicts older stored memory, does recency win automatically or is there a manual override?