
Byterover
File-based memory for agents with >92% retrieval accuracy
1.1K followers
ByteRover is a fully local, file-based memory layer for agents, with a market-best 92.2% retrieval accuracy, local-to-cloud portability, and built-in version control. From OpenClaw to Claude Code to Cursor to whatever's next, your memory travels with you instead of staying trapped in one tool. ByteRover gives your agents stateful memory that keeps your context's timeline, facts, and meaning perfectly in place.
This is the 5th launch from Byterover. View more
ByteRover Memory System for OpenClaw
Launching today
Give OpenClaw agents stateful memory that keeps your context's timeline, facts, and meaning perfectly in place. ByteRover is a memory layer that earned 26k+ downloads from OpenClaw power users within one week, and it delivers a market-best 92.19% retrieval accuracy, local-to-cloud portability, and built-in version control.







Byterover
Hey Product Hunt! 👋
Andy here, founder of ByteRover.
Over the last few months, we’ve watched developers try to scale autonomous agents (like OpenClaw and local Ollama setups) and hit a massive brick wall: Agent Amnesia.
An agent solves a bug or writes a script, then immediately forgets the context. To fix this, teams are dumping entire codebases into giant vector databases or blindly stuffing massive context windows, resulting in insane API token bills and VRAM crashes.
We got tired of these manual workarounds. So we built Memory Skill for OpenClaw.
It is a deterministic, file-based memory system (.brv/context-tree) that lives directly in your local environment.
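For illustration, a context tree might look as follows (the `.brv/context-tree` root comes from the post; the subfolders and file names below are hypothetical, based on the Personal/Project/Team hierarchy described later in the thread):

```
.brv/context-tree/
├── personal/
│   └── session-overrides.md
├── project/
│   └── auth-flow-decisions.md
└── team/
    └── architecture-conventions.md
```

Because everything is plain Markdown on disk, the whole tree can be inspected, diffed, and version-controlled like any other part of the repository.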
How it works:
🧠 Selective Retrieval: Instead of blindly injecting everything, ByteRover actively curates decisions and feeds the agent exactly what it needs to know.
📉 Cuts Token Burn: Our users are seeing token usage drop by ~40-70% because the prompts stay noise-free.
📂 Local & Portable: Your memory is version-controlled via Git, preventing silent context drift. What Git did for code, we are doing for AI context.
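As a rough illustration of what selective retrieval means in practice, here is a minimal Python sketch: instead of injecting every memory file into the prompt, only the few entries most relevant to the current query are kept. The file layout, scoring, and function names are hypothetical, not ByteRover's actual retrieval engine.

```python
# Hypothetical sketch of selective retrieval over a tree of Markdown
# memory files; keyword-overlap scoring is illustrative only.
from pathlib import Path

def load_memories(root: str) -> dict[str, str]:
    """Read every Markdown memory file under the context tree."""
    return {str(p): p.read_text() for p in Path(root).rglob("*.md")}

def select_relevant(memories: dict[str, str], query: str, top_k: int = 3) -> list[str]:
    """Score each memory by keyword overlap with the query and keep only
    the top few, so the prompt stays noise-free instead of carrying
    everything the agent has ever seen."""
    terms = set(query.lower().split())
    scored = sorted(
        memories.items(),
        key=lambda kv: -len(terms & set(kv[1].lower().split())),
    )
    return [text for _, text in scored[:top_k]]
```

The token savings come directly from `top_k`: the agent sees a handful of curated entries per turn rather than the full history.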
We’ve seen 26k+ downloads from OpenClaw power users in the last week, hitting a 92.19% retrieval accuracy on the LoCoMo benchmark.
I would love the community's feedback on our architecture. Drop any questions below; I'll be here all day answering them! 👇
Agent amnesia is the most underrated bottleneck in agentic workflows — an agent that forgets what it just debugged three turns ago is essentially starting from scratch every time. The 40-70% token reduction from selective retrieval instead of blindly injecting everything is a massive cost saving at scale. How does the deterministic file-based approach handle conflicting memories when two team members' agents produce different context about the same codebase section?
Byterover
@svyat_dvoretski Hey Sviatoslav! You hit the nail on the head: amnesia is the final boss of autonomy.
To answer your question about conflicting memories: this is exactly why we chose a structured file system over a raw vector DB. When two agents produce conflicting context, our retrieval engine handles it deterministically rather than probabilistically.
Our composition logic works on a strict hierarchy:
Personal Tree > Project Tree > Team Tree
If an agent sees a conflict between a team-level architectural pattern and a personal-level override for a specific session, the system deterministically favors the closest node (Personal/Project). If there is a direct conflict at the exact same level, we default to the most recent timestamp (updatedAt in the Markdown frontmatter).
Because the memory is just Markdown files, if the conflict persists, a human developer can simply open the .brv/context-tree folder, read the two text files, and manually delete the outdated one—something that is nearly impossible to debug inside a black-box vector database!
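The two-step rule above (closest scope wins, then most recent `updatedAt`) can be sketched in a few lines of Python. This is a hypothetical illustration of the logic described in the reply, not brv's actual implementation; the entry fields are assumed from the Markdown-frontmatter description.

```python
# Hypothetical sketch of deterministic conflict resolution:
# Personal > Project > Team, with updatedAt as the same-level tiebreaker.
from datetime import datetime

SCOPE_RANK = {"personal": 0, "project": 1, "team": 2}  # lower = closer node, wins

def resolve(a: dict, b: dict) -> dict:
    """Pick one of two conflicting memory entries deterministically."""
    ra, rb = SCOPE_RANK[a["scope"]], SCOPE_RANK[b["scope"]]
    if ra != rb:
        # Different levels: the closest node (Personal/Project) wins.
        return a if ra < rb else b
    # Same level: the most recent updatedAt (ISO 8601 frontmatter) wins.
    ta = datetime.fromisoformat(a["updatedAt"])
    tb = datetime.fromisoformat(b["updatedAt"])
    return a if ta >= tb else b
```

The point is that the same two inputs always produce the same winner, which is what makes the outcome auditable by just reading the files.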
Would love to hear how you guys are handling context bloat over at Snippets!
Byterover
@svyat_dvoretski Thanks for asking.
Conflicting memories across agents are a real problem, and brv addresses them at two levels. This work is in progress and will be released very soon; at ByteRover, we ship on weekly and biweekly cycles.
First, branching keeps agent memories isolated by default. Human-in-the-loop enforces a human gate before conflicting writes are finalized. Neither alone is sufficient - branching without review just defers the conflict; review without branching means every write races against every other. Together they give you the same conflict resolution model teams already use for code: isolated branches, explicit integration, human judgment on high-impact changes.
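The branching-plus-review model described above can be sketched as a simple merge function: each agent writes to an isolated branch, and a human gate decides any key that conflicts at merge time. The function and its signature are hypothetical illustrations of the described workflow, not brv's API.

```python
# Hypothetical sketch: isolated agent branches merge into a shared memory,
# and a human approval callback gates every conflicting write.
def merge_branches(main: dict, branch: dict, approve) -> dict:
    """Merge an agent's branch into main; conflicting keys need approval.

    approve(key, current, proposed) -> bool is the human-in-the-loop gate:
    True accepts the branch's value, False keeps main's value.
    """
    merged = dict(main)
    for key, value in branch.items():
        if key in main and main[key] != value:
            # Conflict: defer to human judgment instead of racing writes.
            merged[key] = value if approve(key, main[key], value) else main[key]
        else:
            # Non-conflicting writes integrate automatically.
            merged[key] = value
    return merged
```

This mirrors the code workflow the reply invokes: isolation by default, explicit integration, and human judgment only where two branches actually disagree.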
Byterover
70% token savings is the real headline here. The MEMORY.md approach works until you hit ~50k tokens of context and your agent starts hallucinating its own history. Context-tree architecture is the right abstraction - hierarchical retrieval instead of dumping everything into the prompt. 26k users in a week tells you people were desperate for this.
Pieces for Developers
I have been using ByteRover for a while with Claude Code for memory management with my team at Studio1, and it was a great experience. Having used OpenClaw for the last month, I can definitely say the experience wasn't as good, so I am excited to try ByteRover with OpenClaw. Huge congrats to the team!
Byterover
@shivaylamba Thanks so much Shivay! Really appreciate the support.
It's been awesome seeing the Studio1 team use the .brv tree to maintain context across Claude Code sessions. The shift to OpenClaw is exactly why we built the new CLI: we realized the memory architecture needed to be completely agnostic of the agent running on top of it.
Let me know how the deterministic retrieval feels with OpenClaw compared to the native vector setup once you get it running!
Byterover
The idea of a free, local version with no friction (no accounts required) really motivates me to try out the CLI.