We built Dropstone because we were tired of starting from zero every time we opened an AI coding tool.
Dropstone learns, remembers, and evolves with your projects, building a persistent understanding of your codebase, architecture, and workflow. It's designed to grow with you, not reset after every session.
We've just launched it on Product Hunt and would love your thoughts on how memory should shape the next generation of developer tools. Your feedback will help us refine what's coming next.
Dropstone
Hi Product Hunt! 👋 I’m one of the makers at Blankline (the research lab behind Dropstone).
We noticed a critical problem with tools like Cursor and Claude Code: They are single-player. You code alone. If you get stuck, you paste snippets into Slack. If you're a founder hitting the "70% wall," you're stranded.
Dropstone 3 is the first multiplayer AI workspace. With today's release, we are launching Share Chat:
🔗 One Link: Generate a URL for your local workspace.
⚡ Instant Join: A senior dev, designer, or client joins instantly.
🧠 Shared Brain: Everyone shares the same AI context and live preview.
This is not a wrapper. We are a research lab building proprietary infrastructure:
D3 Engine: Virtualizes context (50:1 compression) for infinite memory.
Horizon Mode: Background agent swarms that fix bugs asynchronously while you sleep.
Research: We publish our papers openly (check blankline.org/research).
We’re live in the comments to answer questions about our compression architecture or the "70% wall." Let us know what you think! 👇
@santosharron Wow, that sounds really good for extreme programming, like a v2.0 era. And what happens if two people start prompting in different directions? Will there be context drift?
Dropstone
@aleksanadr_lavrinenko We solve that by serializing the context. It's strictly chronological: whoever interacts first sets the state, and a simultaneous user's generation instantly includes that previous context. No drift, just a single shared timeline.
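For readers wondering what "serializing the context" looks like in practice, here is a toy sketch (not Dropstone's actual engine; all names are hypothetical): concurrent prompts are appended in strict arrival order, and each new generation is handed the full timeline built so far.

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass
class SharedTimeline:
    """Hypothetical sketch of a single shared timeline: prompts are
    serialized in arrival order, never interleaved."""
    _seq: count = field(default_factory=count)
    entries: list = field(default_factory=list)

    def submit(self, user: str, prompt: str) -> list:
        # Whoever interacts first sets the state; later prompts
        # queue up chronologically behind it.
        self.entries.append((next(self._seq), user, prompt))
        # The context handed to the model is everything so far.
        return list(self.entries)

tl = SharedTimeline()
ctx_a = tl.submit("alice", "add a login form")
ctx_b = tl.submit("bob", "style it with Tailwind")
# Bob's generation sees Alice's earlier prompt in its context.
```

Because every participant reads from the same ordered log, there is no per-user fork for drift to accumulate in.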
That Slack paste loop is brutal. Live Share fixes the editor, but it doesn't share the agent context. Dropstone 3 Share Chat feels like it closes that gap vs Cursor or Claude Code. Does the share link have permissions and secret redaction baked in? If yes, it's a real team tool.
Dropstone
@piroune_balachandran Yes, Dropstone 3 includes robust role-based permissions for Editors and Viewers, along with automated redaction to ensure secrets and sensitive keys stay out of the shared agent context.
@santosharron Horizon Mode is wild.
Lookup
How much compute would it use for spinning up 10k agents?
Dropstone
@ajaykumar1018 That would melt a local machine!
For spinning up 10k agents, we offload 100% of that compute to our Cloud infrastructure, so it doesn't touch your local resources.
Just a heads-up though: that 10k agent capacity is an Enterprise-exclusive feature. The Pro and Teams plans have lower caps since they don't typically need that level of swarm power, but for massive Enterprise deployments, 10k is our current max capacity.
DiffSense
Dropstone
@conduit_design To answer your questions directly:
1. Yes, fully. 'Share Chat' and 'Horizon' run on the Dropstone engine, not the model, so they work perfectly offline. Just keep in mind: if you host a shared session, your machine acts as the server. If a friend joins, your GPU handles the inference for both of you.
2. The Performance Reality: You're right, cloud models (like Opus 4.6) are 'One-Shot Snipers.' They have massive IQs and handle logic puzzles instantly.
Dropstone + Cloud: If you plug Opus into Dropstone, you actually get better results than standard chat because our self-learning tech adds a layer of precision that raw models lack.
Dropstone + Local: If you want near-cloud performance locally, try using Kimi 2.5. For tasks like a React-to-Next.js migration, it’s the closest we’ve seen a local model get to Opus 4.6 levels of capability.
3. The Bottom Line: Cloud wins on raw logic IQ; Dropstone wins on context. A standard LLM chat sees one file; Dropstone reads the 50 files linked to it. That's the real difference in how we compete.
Whoa, Dropstone looks incredible! 10,000 agents in one tab is mind-blowing. Super curious how Semantic Entropy Tracking handles ambiguity in edge cases. Congrats on the launch!
Share Chat is what caught my eye. Saw Aleksandr's question about context drift and the chronological serialization answer makes sense for keeping things consistent. But I'm wondering about the opposite case - in pair programming you sometimes want to explore two competing approaches before picking one. With a single shared timeline, would you need separate chat sessions for each idea and merge the winner back? Or is there a way to branch the context?
Dropstone
@kxbnb We actually handle this via Granular Checkpointing (think of it like a localized Wayback Machine).
You don't need a separate session to explore a new idea. You can simply click any previous checkpoint and start a new timeline right there. This lets you explore competing approaches from the exact same context point without losing your original trajectory.
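Conceptually, this kind of checkpoint branching can be modeled as a tree of messages, where "start a new timeline" just means parenting the next message to an older node. A minimal sketch (purely illustrative, not Dropstone's implementation):

```python
class CheckpointTree:
    """Toy model of granular checkpointing: each message is a node,
    and branching means committing on top of an older checkpoint."""
    def __init__(self):
        self.nodes = {0: (None, "root")}  # id -> (parent_id, content)
        self.next_id = 1

    def commit(self, parent: int, content: str) -> int:
        node_id = self.next_id
        self.nodes[node_id] = (parent, content)
        self.next_id += 1
        return node_id

    def context(self, node_id) -> list:
        # Walk back to the root to rebuild that branch's context.
        chain = []
        while node_id is not None:
            parent, content = self.nodes[node_id]
            chain.append(content)
            node_id = parent
        return list(reversed(chain))

tree = CheckpointTree()
base = tree.commit(0, "set up the API")
rest = tree.commit(base, "approach 1: REST")
gql = tree.commit(base, "approach 2: GraphQL")  # branch from the same checkpoint
```

Both branches share the exact same context up to the checkpoint, so competing approaches diverge only from that point, and the losing branch can simply be abandoned.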
Migma AI
The Divergent Trajectory Search concept is wild - simulating 10,000+ futures to find optimal paths is such a different approach from linear AI coding assistants. The fact that Horizon Mode can fix bugs asynchronously while I sleep is honestly game-changing. Quick question: how does Share Chat handle conflicts when multiple people are editing with different AI contexts? Does the D3 Engine's context compression help merge those different trajectories, or do you surface conflicts to let the team decide?
Dropstone
@adam_lab Dropstone uses a customized CRDT (Conflict-free Replicated Data Type) system—similar to Yjs but optimized for AST structures rather than just raw text.
Here is the simple answer for your specific question:
Conflict Handling: It broadcasts operations (e.g., "insert node at index X") rather than replacing full files. If a human and the AI edit the same line simultaneously, the engine prioritizes the Human's keystrokes as the "Truth" state to prevent the AI from overwriting your logic.
D3 Context Merging: Yes, the D3 Engine actively merges trajectories. Because it uses Logic-Regularized Compression (storing logic gates/variable definitions rather than just tokens), it creates a "Shared Brain." If your teammate’s agent fixes a bug, that "Transition Gradient" is instantly available to your agent without you needing to update the context manually.
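The "human edits win" tie-break described above can be illustrated with a toy merge function (a deliberately simplified sketch, not the actual CRDT engine; operations and line numbers here are hypothetical):

```python
def merge_ops(ops):
    """Last-writer-wins per line, except a human edit is never
    overwritten by a concurrent AI edit."""
    state = {}
    for line, text, author in ops:
        prev = state.get(line)
        if prev is not None and prev[1] == "human" and author == "ai":
            continue  # the human's keystrokes remain the 'truth' state
        state[line] = (text, author)
    return {line: text for line, (text, _) in state.items()}

ops = [
    (10, "const x = 1;", "human"),
    (10, "const x = compute();", "ai"),  # concurrent AI edit on the same line: dropped
    (11, "return x;", "ai"),             # untouched line: AI edit lands normally
]
merged = merge_ops(ops)
```

A real operation-based CRDT (e.g. Yjs-style) would broadcast structural inserts and deletes with causal metadata instead of whole-line replacements, but the priority rule is the same idea.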