Grov

Shared and synchronized AI memory + reasoning across teams

Most AI coding tools are single-player. Your AI forgets everything when the session ends, forcing every dev to waste time on redundant exploration. Grov turns your AI coding agent into a collective team brain. It captures the reasoning behind every solution and syncs it to your entire team. If one dev's AI figures out your auth system, everyone's AI knows it instantly. Currently works with Claude Code; Codex and Gemini support is next, with Cursor later.

tony
Maker
📌
Hi everyone! I started building Grov because my co-founder and I were struggling to keep documentation in sync with our development speed. In an era where AI tools let you 2x your codebase in days, maintaining docs is a nightmare.

We realized the core problem: while our code was shared in Git, our AI's learnings were trapped in private chat logs. Even when you feed your agent documentation, you still waste time and tokens waiting for it to re-explore the codebase or re-learn architectural decisions it should already know.

With Grov, we want to help teams:
1. Ship faster by skipping the "exploration" phase
2. Write fewer docs (Grov captures the "why" automatically)
3. Save tokens by stopping redundant work

Looking forward to your feedback! I'd love to know: does this solve a pain point for your team? What features are we missing?
Tade Odunlami

Oh, that is cool! How do you manage the context window and actually "sync" it across players? Also, how many people max?

tony

@tade_odunlami Thank you! Appreciate it :)

Context window: Grov tracks token usage. When you hit ~90% capacity, it auto-summarizes your progress, clears the conversation, and re-injects the summary + team memories. So you never lose important context.
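
Roughly, that loop can be pictured like the sketch below (illustrative only: the ~90% threshold comes from the description above, but the Session shape and helper names are simplified stand-ins, not our actual internals):

```python
# Illustrative sketch of the auto-compaction step; names and sizes are assumptions.
from dataclasses import dataclass, field

CONTEXT_LIMIT = 200_000   # assumed context window size, in tokens
COMPACT_AT = 0.90         # compact when ~90% of the window is used

@dataclass
class Session:
    messages: list[str] = field(default_factory=list)

    def tokens_used(self) -> int:
        # crude estimate: ~4 characters per token
        return sum(len(m) for m in self.messages) // 4

def maybe_compact(session: Session, summarize, team_memories: list[str]) -> None:
    """Summarize progress, clear the chat, and re-inject summary + team memories."""
    if session.tokens_used() < COMPACT_AT * CONTEXT_LIMIT:
        return                                    # still enough headroom
    summary = summarize(session.messages)         # condense progress so far
    session.messages = [summary, *team_memories]  # fresh context, key info preserved
```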

Sync: When a task completes, Grov extracts the reasoning and syncs it to your team. When any teammate starts a new Claude session, relevant memories get auto-injected into their context - Claude just "knows" what the team already figured out.
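
And the sync side, conceptually (again a simplified sketch; the Memory fields and function names are stand-ins rather than our real schema or API):

```python
# Illustrative sketch of the sync flow; field and function names are assumptions.
from dataclasses import dataclass

@dataclass
class Memory:
    goal: str             # what the task was trying to achieve
    reasoning: str        # why the solution ended up the way it did
    decisions: list[str]  # constraints and decisions discovered along the way

TEAM_STORE: list[Memory] = []   # stand-in for the shared backend

def on_task_complete(memory: Memory) -> None:
    """Task finished: extract the reasoning and sync it to the team store."""
    TEAM_STORE.append(memory)

def on_session_start(task: str, retrieve) -> str:
    """New session: pull the most relevant team memories and format them as context."""
    memories = retrieve(TEAM_STORE, task)   # e.g. hybrid semantic + keyword search
    return "\n\n".join(
        f"Goal: {m.goal}\nWhy: {m.reasoning}\nDecisions: {'; '.join(m.decisions)}"
        for m in memories
    )
```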

As for a max: we don't currently enforce a limit on team members, but we recommend teams of 3-5 people.

Van de Vouchy
Hey Tony, "your AI's learnings trapped in private chat logs" is the line that hit me. It's like starting every session with amnesia. Was there a specific moment where that really stung?
tony
@vouchy Hey! Yes, there were multiple. For example, during a recent hackathon, my co-founder and I built a voice AI + live knowledge graph for therapists. It was really annoying having to make the coding agent read a pile of docs from my teammate, then reread files, re-explain how things work and why, and so on.

Even while building Grov, after a long planning session with Claude I have to manually push .md files to Git just so my co-founder's agent can access them. Starting a new session means re-ingesting everything. Decisions and insights get trapped in individual chats, a friction I've felt everywhere from hackathons to real projects.

The specific moments that sting most are when I've planned out a comprehensive feature or finally fixed an annoying bug, and in the next session my Claude has to re-explore everything while I re-explain the what, the why, and the how. That's honestly how I got the idea for Grov in the first place.
Marian Diaconescu

@vouchy Adding to what Tony said: maintaining these docs is also a huge pain. If I let them get too long, they eat up the context window, wasting time and tokens.

Another frustrating thing is that Claude's /compact often missed details after long planning sessions with lots of back-and-forth between ideas, so I had to do the standard workaround everybody does: manually create a .md file, start a new chat, and make it read the file again. That's at least three extra steps every time just to keep context.

Saul Fleischman

BRILLIANT concept! But going from free for 3 to $100 for 4 is steep. Will you play with the pricing?

tony

@osakasaul Honestly, we're still figuring it out based on feedback exactly like this. If you've got a team of 4, here is my email: stef@grov.dev. I'll get you set up for free.

Chilarai M

Really cool. But does it also support chunking and embedding, so the result can be provided to the next team member's agent?

tony

@chilarai Yes! When a memory is synced, we generate embeddings (OpenAI text-embedding-3-small) from the goal, reasoning trace, and decisions. When a teammate starts a session, we do hybrid search (semantic + keyword) to find the top 5 most relevant memories and inject them into their context.
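
If it helps, here's a rough sketch of what that hybrid retrieval can look like. The embedding model and the top-5 cutoff are as described above; the 0.7/0.3 weighting and the scoring details are simplified stand-ins, not our exact implementation:

```python
# Rough sketch of hybrid (semantic + keyword) retrieval over synced memories.
import math
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def keyword_overlap(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def top_memories(query: str, memories: list[dict], k: int = 5) -> list[dict]:
    """Score each synced memory by semantic + keyword similarity; return the top k."""
    q_vec = embed(query)
    def score(m: dict) -> float:
        text = f"{m['goal']} {m['reasoning']} {' '.join(m['decisions'])}"
        return 0.7 * cosine(q_vec, m["embedding"]) + 0.3 * keyword_overlap(query, text)
    return sorted(memories, key=score, reverse=True)[:k]
```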

Jay Dev

Wow, Grov looks amazing! The team brain concept for AI coding is a game changer. Curious if the synchronization accounts for differing code styles across devs?

tony

@jaydev13 Thanks so much! Great question.

Grov actually operates at a higher level than code style: it captures reasoning, architectural decisions, and constraints rather than formatting preferences.

So if Dev A discovers "we use OAuth-only, rate limit is 100/min" while debugging auth, that knowledge syncs to the whole team. But code style (tabs vs spaces, semicolons, etc.) stays with your existing tools like ESLint/Prettier configs.

This is intentional: we focus on the "why did we build it this way" knowledge that's usually trapped in one dev's head (or lost when they switch contexts).

Raju Singh

@tonyystef This directly hits a real pain point we see in dev teams. A few questions: (1) How do you handle versioning when the codebase evolves but team memories stay static? (2) What are your token-efficiency gains in practice - are you seeing a 30-40% reduction in redundant exploration?

tony

@imraju Appreciate it! Great questions.

(1) Versioning: We're actively working on this. Right now, memories persist until they're deleted - we store files_touched and linked_commit, but we don't yet auto-invalidate memories when those files change.
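
To illustrate what those fields make possible (not current behavior, since auto-invalidation is still in development): a memory's files_touched could be checked against git to flag entries that may have gone stale, roughly like this sketch:

```python
# Sketch only: flag memories whose touched files changed after linked_commit.
import subprocess

def changed_files_since(commit: str, repo: str = ".") -> set[str]:
    """Files modified between `commit` and the current HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{commit}..HEAD"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())

def is_stale(memory: dict, repo: str = ".") -> bool:
    """A memory is suspect if any file it touched changed since it was recorded."""
    changed = changed_files_since(memory["linked_commit"], repo)
    return bool(changed & set(memory["files_touched"]))
```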


(2) Efficiency: We've seen tasks go from 10+ minutes, 3+ subagents, and ~10 files read by the main model down to 1-2 minutes, zero subagents, and roughly 0-4 files read when the context is available (measured by how much re-exploration is avoided). We haven't formalized token-reduction metrics yet - would love to hear what you see if you try it.
