
Grov
Shared and synchronized AI memory + reasoning across teams
156 followers
Most AI coding tools are single-player. Your AI forgets everything when the session ends, forcing every dev to waste time on redundant exploration. Grov turns your AI coding agent into a collective team brain. It captures the reasoning behind every solution and syncs it to your entire team. If one dev's AI figures out your auth system, everyone's AI knows it instantly. Currently works with Claude Code; Codex and Gemini support is next, with Cursor to follow.

Oh that is cool! How do you manage the context window and actually "sync" across players? Also, how many people max?
Grov
@tade_odunlami Thank you! Appreciate it :)
Context window: Grov tracks token usage. When you hit ~90% capacity, it auto-summarizes your progress, clears the conversation, and re-injects the summary + team memories. So you never lose important context.
Sync: When a task completes, Grov extracts the reasoning and syncs it to your team. When any teammate starts a new Claude session, relevant memories get auto-injected into their context - Claude just "knows" what the team already figured out.
There's currently no hard cap on team members, but we recommend teams of 3-5 people.
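The compaction flow described above can be sketched in a few lines. This is purely illustrative: the function names, the 200K context limit, and the message format are assumptions, not Grov's actual API.

```python
# Hypothetical sketch of the compaction flow: track token usage, and near
# ~90% capacity summarize progress, clear the conversation, and re-inject
# the summary plus team memories. All names here are illustrative.

CONTEXT_LIMIT = 200_000       # assumed context window size (tokens)
COMPACT_THRESHOLD = 0.90      # compact at ~90% capacity, per the comment

def maybe_compact(messages, tokens_used, summarize, team_memories):
    """Return the message list, compacted if near the context limit."""
    if tokens_used < COMPACT_THRESHOLD * CONTEXT_LIMIT:
        return messages                     # plenty of room, no-op
    summary = summarize(messages)           # condense progress so far
    seed = [f"Progress summary:\n{summary}"]
    seed += [f"Team memory: {m}" for m in team_memories]
    return seed                             # fresh conversation, context kept
```

The key design point is that the re-seeded conversation carries both the session summary and the team's shared memories, so nothing important is lost at the reset.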
Grov
@vouchy Also regarding what Tony said, maintaining these docs is also a huge pain. If I let them get too long, they eat up the context window, wasting time and tokens.
Another frustrating thing is that Claude's /compact often misses details after long planning sessions with lots of back-and-forth between ideas, so I had to do the standard workaround everybody does: manually create a .md file, start a new chat, and make it read the file again. That's at least 3 extra steps every time just to keep context.
Auto-Hashtag API
BRILLIANT concept! Going from free for 3 people to $100 for 4 is steep, though. Play with pricing?
Grov
@osakasaul Honestly, we're still figuring it out based on feedback exactly like this. If you've got a team of 4, here is my email: stef@grov.dev I'll get you set up for free.
Swytchcode
Really cool. But does it also support chunking and embedding to be provided to the next agent for the team member?
Grov
@chilarai Yes! When a memory is synced, we generate embeddings (OpenAI text-embedding-3-small) from the goal, reasoning trace, and decisions. When a teammate starts a session, we do hybrid search (semantic + keyword) to find the top 5 most relevant memories and inject them into their context.
Wow, Grov looks amazing! The team brain concept for AI coding is a game changer. Curious if the synchronization accounts for differing code styles across devs?
Grov
@jaydev13 Thanks so much! Great question.
Grov actually operates at a higher level than code style, it captures reasoning, architectural decisions, and constraints rather than formatting preferences.
So if Dev A discovers "we use OAuth-only, rate limit is 100/min" while debugging auth, that knowledge syncs to the whole team. But code style (tabs vs spaces, semicolons, etc.) stays with your existing tools like ESLint/Prettier configs.
This is intentional, we focus on the "why did we build it this way" knowledge that's usually trapped in one dev's head (or lost when they switch contexts).
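To make the OAuth example above concrete, here is what a synced memory might look like as a plain record. The field names `goal`, `reasoning`, and `decisions` come from the embedding answer earlier in this thread, and `files_touched` and `linked_commit` from the versioning answer below it; the values are invented for illustration.

```python
# Sketch of a synced memory record; values are hypothetical.
memory = {
    "goal": "Fix 429 errors on the auth endpoints",
    "reasoning": [
        "Provider is OAuth-only; API-key auth is not supported",
        "Upstream rate limit is 100 requests/min per client",
    ],
    "decisions": ["Throttle client-side at 90 req/min for headroom"],
    "files_touched": ["src/auth/oauth.ts", "src/auth/throttle.ts"],
    "linked_commit": "abc1234",
}
```

Note what is absent: nothing about formatting or style, only the "why" that would otherwise stay in one dev's head.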
@tonyystef this directly hits a real pain point we see in dev teams. A few questions: (1) How do you handle versioning when the codebase evolves but team memories stay static? (2) What's your token efficiency gains in practice - are you seeing 30-40% reduction in redundant exploration?
Grov
@imraju Appreciate it! Great questions:
(1) Versioning: We're actively working on this. Right now memories persist until deleted: we store files_touched and linked_commit, but we don't yet auto-invalidate a memory when its files change.
(2) Efficiency: We've seen tasks go from 10+ min, 3+ subagents launched, and ~10 files read by the main model down to 1-2 min, 0 subagents, and 0-4 files read when context is available (the gain comes from avoiding re-exploration). We haven't formalized token-reduction metrics yet; would love to hear what you're seeing if you try it.
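Since Grov already stores files_touched and linked_commit, one possible staleness check for (1) could flag a memory when any file it touched has changed since its linked commit. This is only a sketch of an approach, not Grov's roadmap; `changed_files_since` is a hypothetical helper that would wrap something like `git diff --name-only <commit>`.

```python
# Hypothetical staleness check built on fields Grov already stores.
# changed_files_since(commit) -> list of paths changed since that commit
# (e.g. by shelling out to `git diff --name-only <commit>`).

def is_stale(memory, changed_files_since):
    """True if any file the memory touched changed after its commit."""
    changed = set(changed_files_since(memory["linked_commit"]))
    return any(path in changed for path in memory["files_touched"])
```

A stale memory need not be deleted; it could be down-ranked in retrieval or queued for re-verification on the next relevant task.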