
Byterover
File-based memory for agents with >92% retrieval accuracy
1.2K followers
ByteRover is a fully local, file-based memory layer for agents with market-best 92.2% retrieval accuracy, local-to-cloud portability, and built-in version control. From OpenClaw to Claude Code to Cursor to whatever's next, your memory travels with you instead of being trapped in one tool. ByteRover gives your agents stateful memory that keeps your context's timeline, facts, and meaning perfectly in place.
This is the 5th launch from Byterover.
ByteRover Memory System for OpenClaw
Launching today
Give OpenClaw agents stateful memory that keeps your context's timeline, facts, and meaning perfectly in place. ByteRover is a memory layer that earned 26k+ downloads from OpenClaw power users within one week, and it delivers a market-best 92.19% retrieval accuracy, local-to-cloud portability, and built-in version control.
Byterover
Hey Product Hunt! 👋
Andy here, founder of ByteRover.
Over the last few months, we’ve watched developers try to scale autonomous agents (like OpenClaw and local Ollama setups) and hit a massive brick wall: Agent Amnesia.
An agent solves a bug or writes a script, and then immediately forgets the context. To fix this, teams are dumping entire codebases into giant vector databases or blindly prepending massive context windows, resulting in insane API token bills and VRAM crashes.
We got tired of these manual workarounds. So we built Memory Skill for OpenClaw.
It is a deterministic, file-based memory system (.brv/context-tree) that lives directly in your local environment.
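To make that concrete, each memory node is just a Markdown file inside the tree. The path and fields below are illustrative (only the `updatedAt` frontmatter key is described elsewhere in this thread); a node at a hypothetical path like `.brv/context-tree/auth/sessions.md` might look like:

```markdown
---
updatedAt: 2026-01-15T09:30:00Z
---

# Auth: session handling

- Sessions are JWT-based; refresh tokens rotate on every use.
- Decided against server-side session storage (see design review).
```

Because it's plain text, the file diffs cleanly in Git and can be read or edited by a human without any tooling.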
How it works:
🧠 Selective Retrieval: Instead of blindly injecting everything, ByteRover actively curates decisions and feeds the agent exactly what it needs to know.
📉 Cuts Token Burn: Our users are seeing token usage drop by ~40-70% because the prompts stay noise-free.
📂 Local & Portable: Your memory is version-controlled via Git, preventing silent context drift. What Git did for code, we are doing for AI context.
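The selective-retrieval idea above can be sketched in a few lines. This is a toy model, not ByteRover's actual engine: given a context tree, the agent receives only the memories along its current domain→topic→subtopic branch instead of the whole tree.

```python
def retrieve(tree, topic_path):
    """Collect memories along one domain→topic→subtopic branch.

    `tree` is a nested dict; each node may hold a "memories" list
    plus child nodes keyed by name. Only the branch matching
    `topic_path` (e.g. "auth/sessions") is walked, so unrelated
    context never reaches the prompt.
    """
    node, collected = tree, []
    for part in topic_path.split("/"):
        node = node.get(part)
        if node is None:
            break  # branch doesn't exist; return what we have
        collected.extend(node.get("memories", []))
    return collected
```

An agent working on session logic would call `retrieve(tree, "auth/sessions")` and get the auth-level decisions plus the session-specific ones, and nothing else.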
We’ve seen 26k+ downloads from OpenClaw power users in the last week, hitting a 92.19% retrieval accuracy on the LoCoMo benchmark.
I would love the community's feedback on our architecture. Drop any questions below and I'll be here all day answering them! 👇
Agent amnesia is the most underrated bottleneck in agentic workflows — an agent that forgets what it just debugged three turns ago is essentially starting from scratch every time. The 40-70% token reduction from selective retrieval instead of blindly injecting everything is a massive cost saving at scale. How does the deterministic file-based approach handle conflicting memories when two team members' agents produce different context about the same codebase section?
Byterover
@svyat_dvoretski Hey Sviatoslav! You hit the nail on the head: amnesia is the final boss of autonomy.
To answer your question about conflicting memories: this is exactly why we chose a structured file system over a raw vector DB. When two agents produce conflicting context, our retrieval engine handles it deterministically rather than probabilistically.
Our composition logic works on a strict hierarchy:
Personal Tree > Project Tree > Team Tree
If an agent sees a conflict between a team-level architectural pattern and a personal-level override for a specific session, the system deterministically favors the closest node (Personal/Project). If there is a direct conflict at the exact same level, we default to the most recent timestamp (updatedAt in the Markdown frontmatter).
Because the memory is just Markdown files, if the conflict persists, a human developer can simply open the .brv/context-tree folder, read the two text files, and manually delete the outdated one—something that is nearly impossible to debug inside a black-box vector database!
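For readers who want the resolution rule pinned down, here is a minimal sketch (illustrative only, not ByteRover's implementation): the closest tree wins, and a tie at the same level falls back to the newest `updatedAt` timestamp.

```python
# Lower rank = closer to the user, so it wins a conflict.
SCOPE_RANK = {"personal": 0, "project": 1, "team": 2}

def resolve(candidates):
    """Pick the winning memory deterministically.

    Personal > Project > Team; within the same scope, the memory
    with the most recent `updated_at` (a numeric timestamp pulled
    from the Markdown frontmatter in this toy model) wins.
    """
    return min(
        candidates,
        key=lambda m: (SCOPE_RANK[m["scope"]], -m["updated_at"]),
    )
```

Because the rule is a plain sort key, two machines given the same files always compose the same context, which is the point of "deterministic rather than probabilistic".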
Would love to hear how you guys are handling context bloat over at Snippets!
Byterover
@svyat_dvoretski Thanks for asking.
Conflicting memories across agents is a real problem, and brv addresses it at two levels. This work is in progress and will ship very soon; at ByteRover, we release on a weekly-to-biweekly cadence.
First, branching keeps agent memories isolated by default. Human-in-the-loop enforces a human gate before conflicting writes are finalized. Neither alone is sufficient - branching without review just defers the conflict; review without branching means every write races against every other. Together they give you the same conflict resolution model teams already use for code: isolated branches, explicit integration, human judgment on high-impact changes.
Seeing >92% retrieval accuracy on pure file‑based memory is impressive - especially given the usual latency vs. persistence trade‑off. I’m curious how you keep the index in sync when source files are edited in place; do you rely on a change‑detection layer or periodic re‑embedding?
Byterover
@lliora Great question Liora! This is the exact latency vs. persistence trade-off we spent months tuning.
We do not do periodic re-embedding (that burns way too many tokens and kills local performance). Instead, we rely on a change-detection layer.
Because ByteRover runs as a local daemon, it watches the .brv/context-tree for file-system events. When a user (or an agent) edits a markdown file in place, the daemon instantly catches the diff. We then do a lightweight re-index of just that specific file and update the updatedAt metadata.
This keeps the index perfectly in sync in real-time, with almost zero latency or token overhead. The file system does all the heavy lifting!
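To illustrate the idea (the real daemon subscribes to OS file-system events; this stdlib-only sketch approximates change detection by comparing modification times between two scans), only files that actually changed get re-indexed:

```python
from pathlib import Path

def scan(root):
    """Snapshot of path -> mtime for every .md file under root."""
    return {p: p.stat().st_mtime for p in Path(root).rglob("*.md")}

def changed_files(before, after):
    """Files that are new, or were edited in place, since `before`."""
    return [p for p, mtime in after.items() if before.get(p) != mtime]
```

A re-index pass would then touch only the paths returned by `changed_files`, which is why in-place edits cost near-zero tokens: nothing else is re-processed.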
Congrats on the launch! I've been using this for the last month to solve what I call "cognitive debt." I was losing about 15 minutes every morning just re-explaining my architecture and past decisions to my coding agents. Vector similarity wasn't cutting it—it would hallucinate or pull the wrong files. Moving to a curated Context Tree (domain→topic→subtopic) completely fixed the amnesia. The fact that the memory is just markdown files makes it so easy to version control and review. It’s like my agent actually "remembers" where we left off.
Byterover
@littlecrando Thank you so much! I absolutely love the term 'cognitive debt.' That is exactly the friction we set out to eliminate.
Spending 15 minutes every morning just re-onboarding your agent to your own codebase completely kills the flow state. It's awesome to hear that the deterministic domain→topic→subtopic hierarchy is keeping the agent locked into your actual architecture instead of hallucinating based on vectors.
Thanks for being an early adopter and testing it over the last month!
Byterover
70% token savings is the real headline here. The MEMORY.md approach works until you hit ~50k tokens of context and your agent starts hallucinating its own history. Context-tree architecture is the right abstraction - hierarchical retrieval instead of dumping everything into the prompt. 26k users in a week tells you people were desperate for this.
Pieces for Developers
I have been using Byterover for a while with Claude Code for memory management with my team at Studio1, and it was a great experience. Having used OpenClaw for the last month, I can definitely say the memory experience there wasn't as good, so I am really excited to try ByteRover with OpenClaw. Huge congrats to the team!
Byterover
@shivaylamba Thanks so much Shivay! Really appreciate the support.
It's been awesome seeing the Studio1 team use the .brv tree to maintain context across Claude Code sessions. The shift to OpenClaw is exactly why we built the new CLI: we realized the memory architecture needed to be completely agnostic of the agent running on top of it.
Let me know how the deterministic retrieval feels with OpenClaw compared to the native vector setup once you get it running!
100% agree the default memory setup can get noisy fast. The win is separating short-term daily logs from curated long-term memory + good retrieval. Less token burn, better continuity, fewer hallucinated “memories”.