Dropstone

The Recursive Swarm IDE. 10,000 Agents in one tab.

5.0 (2 reviews) · 530 followers

Dropstone v3 introduces Horizon Mode, a recursive swarm architecture that breaks the "Linearity Barrier" in AI coding. Powered by the D3 Engine, it replaces linear token prediction with Divergent Trajectory Search—simulating 10,000+ potential futures to prune errors before they happen. Features Semantic Entropy Tracking for hallucination detection, Flash-Gated Consensus, and a Neuro-Symbolic Runtime that separates reasoning from retention.
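The blurb's "Divergent Trajectory Search" isn't publicly documented, but the described idea (expand many candidate futures, score them, prune bad ones early) resembles a beam-style search. A minimal toy sketch under that assumption; the `propose` and `score` functions here are purely illustrative stand-ins:

```python
import heapq
import random

def divergent_search(seed: str, propose, score, width: int = 4, depth: int = 3):
    """Toy beam-style search: expand many candidate continuations,
    keep only the best `width` at each step (pruning weak futures early)."""
    beam = [(score(seed), seed)]
    for _ in range(depth):
        candidates = []
        for _, traj in beam:
            for nxt in propose(traj):
                candidates.append((score(nxt), nxt))
        beam = heapq.nlargest(width, candidates)  # prune low-scoring futures
    return max(beam)[1]

# Illustrative stand-ins: a real system would propose code edits and
# score them with tests or static analysis, not string length.
random.seed(0)
propose = lambda t: [t + random.choice("abc") for _ in range(3)]
score = lambda t: -abs(len(t) - 6)  # prefer trajectories near length 6
best = divergent_search("ab", propose, score)
assert len(best) == 5  # seed (2 chars) + one char per expansion step
```

A production version would differ mainly in scale (many more candidates per step) and in scoring, which is where error pruning would actually happen.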
This is the 4th launch from Dropstone.
Dropstone 3

Launching today
The first multiplayer AI code editor. Now with Share Chat.
Dropstone is the first multiplayer AI workspace. v3.0.5 adds Share Chat: send a link to code with humans & agents in real-time. Features infinite context (D3 Engine), persistent memory & background swarms. Built on original research, not a wrapper.
Free
Launch Team

Santosh Arron

Hi Product Hunt! 👋 I’m one of the makers at Blankline (the research lab behind Dropstone).

We noticed a critical problem with tools like Cursor and Claude Code: They are single-player. You code alone. If you get stuck, you paste snippets into Slack. If you're a founder hitting the "70% wall," you're stranded.

Dropstone 3 is the first multiplayer AI workspace. With today's release, we are launching Share Chat:

  • 🔗 One Link: Generate a URL for your local workspace.

  • Instant Join: A senior dev, designer, or client joins instantly.

  • 🧠 Shared Brain: Everyone shares the same AI context and live preview.

This is not a wrapper. We are a research lab building proprietary infrastructure:

  • D3 Engine: Virtualizes context (50:1 compression) for infinite memory.

  • Horizon Mode: Background agent swarms that fix bugs asynchronously while you sleep.

  • Research: We publish our papers openly (check blankline.org/research).

We’re live in the comments to answer questions about our compression architecture or the "70% wall." Let us know what you think! 👇

Aleksandr Lavrinenko

@santosharron Wow, that sounds really good for extreme programming, like XP v2.0. And what happens if two people start prompting in different directions? Will there be context drift?

Santosh Arron

@aleksanadr_lavrinenko We solve that by serializing the context. It’s strictly chronological: whoever interacts first sets the state, and the simultaneous user’s generation will include that previous context instantly. No drift, just a single shared timeline.
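The serialization described above (whoever interacts first sets the state; the next user's generation includes it) can be sketched as a lock-guarded timeline. All names here are hypothetical; Dropstone's actual implementation is not public:

```python
import threading
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """One strictly chronological timeline shared by all collaborators."""
    _lock: threading.Lock = field(default_factory=threading.Lock)
    _events: list = field(default_factory=list)

    def prompt(self, user: str, text: str) -> list:
        # Whoever acquires the lock first sets the state; a simultaneous
        # user's generation then already includes that earlier event.
        with self._lock:
            self._events.append((user, text))
            return list(self._events)  # snapshot the model would see

ctx = SharedContext()
ctx.prompt("alice", "refactor to hooks")
history = ctx.prompt("bob", "add tests")
# bob's generation context includes alice's earlier prompt
assert history == [("alice", "refactor to hooks"), ("bob", "add tests")]
```

The key property is that there is exactly one ordering of events, so both users' agents condition on the same history rather than drifting apart.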

Piroune Balachandran

That Slack paste loop is brutal. Live Share fixes the editor, but it doesn't share the agent context. Dropstone 3 Share Chat feels like it closes that gap vs Cursor or Claude Code. Does the share link have permissions and secret redaction baked in? If yes, it's a real team tool.

Santosh Arron

@piroune_balachandran Yes, Dropstone 3 includes robust role-based permissions for Editors and Viewers, along with automated redaction to ensure secrets and sensitive keys stay out of the shared agent context.

Piroune Balachandran

@santosharron Horizon Mode is wild.

Malte Prüser
💎 Pixel perfection

Wow, this looks sick! Congrats on the launch team

Santosh Arron

@maltepruser Thanks mate! Really appreciate it.

André J
💎 Pixel perfection
Local model support is interesting. How does it perform with Ollama only? Do all features work in local mode? Let's say I want to convert a mini React app to Next.js. How would it perform next to doing it in Claude or Codex with Opus 4.6? Do you have any head-to-head videos like that? There are a lot of good arguments on the landing page, but at the end of the day, how it competes is what matters.
Santosh Arron

@conduit_design To answer your questions directly:

1. Yes, fully. 'Share Chat' and 'Horizon' run on the Dropstone engine, not the model, so they work perfectly offline. Just keep in mind: if you host a shared session, your machine acts as the server. If a friend joins, your GPU handles the inference for both of you.

2. The Performance Reality: You’re right, cloud models (like Opus 4.6) are 'One-Shot Snipers.' They have massive IQs and handle logic puzzles instantly.

  • Dropstone + Cloud: If you plug Opus into Dropstone, you actually get better results than standard chat because our self-learning tech adds a layer of precision that raw models lack.

  • Dropstone + Local: If you want near-cloud performance locally, try using Kimi 2.5. For tasks like a React-to-Next.js migration, it’s the closest we’ve seen a local model get to Opus 4.6 levels of capability.

3. The Bottom Line: Cloud wins on raw logic IQ. Dropstone wins on context. A standard LLM chat sees one file; Dropstone reads the 50 files linked to it. That is the real difference in how we compete.
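The "reads the 50 files linked to it" claim amounts to walking a dependency graph instead of looking at one file. A toy sketch of that idea, assuming Python files and a naive `import` scan (real tooling would use proper module resolution):

```python
import re
import tempfile
from pathlib import Path

def linked_files(entry: str, root: str) -> set[str]:
    """Toy dependency walk: follow `import` lines to collect every
    module an edit could touch, instead of seeing only one file."""
    root_p = Path(root)
    seen, stack = set(), [entry]
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        path = root_p / f"{name}.py"
        if not path.exists():
            continue  # external/stdlib module; stop the walk here
        for m in re.finditer(r"^import (\w+)", path.read_text(), re.M):
            stack.append(m.group(1))
    return seen

# Demo on a throwaway project:
d = Path(tempfile.mkdtemp())
(d / "app.py").write_text("import utils\n")
(d / "utils.py").write_text("import config\n")
(d / "config.py").write_text("DEBUG = True\n")
assert linked_files("app", str(d)) == {"app", "utils", "config"}
```

Feeding that whole closure into the model's context is what a single-file chat window cannot do.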

Jay Dev

Whoa, Dropstone looks incredible! 10,000 agents in one tab is mind-blowing. Super curious how Semantic Entropy Tracking handles ambiguity in edge cases. Congrats on the launch!

Piroune Balachandran

Trend based research and the engage feature make a great loop. If SuperX shows the why behind each inspiration pick and timing suggestion, it'll stay reliable even when X shifts the rules. A simple decision log makes it stick.

kxbnb

Share Chat is what caught my eye. Saw Aleksandr's question about context drift and the chronological serialization answer makes sense for keeping things consistent. But I'm wondering about the opposite case - in pair programming you sometimes want to explore two competing approaches before picking one. With a single shared timeline, would you need separate chat sessions for each idea and merge the winner back? Or is there a way to branch the context?

Santosh Arron

@kxbnb We actually handle this via Granular Checkpointing (think of it like a localized Wayback Machine).

You don't need a separate session to explore a new idea. You can simply click any previous checkpoint and start a new timeline right there. This lets you explore competing approaches from the exact same context point without losing your original trajectory.
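Branching a new timeline from any checkpoint, as described above, is essentially a tree of context nodes where each branch replays the path back to the root. A minimal sketch (all names hypothetical):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Checkpoint:
    """A node in the context tree; branching = adding a child anywhere."""
    label: str
    parent: Optional["Checkpoint"] = None
    children: list = field(default_factory=list)

    def branch(self, label: str) -> "Checkpoint":
        child = Checkpoint(label, parent=self)
        self.children.append(child)
        return child

    def context(self) -> list:
        # Walk back to the root to reconstruct this branch's timeline.
        node, path = self, []
        while node:
            path.append(node.label)
            node = node.parent
        return path[::-1]

root = Checkpoint("initial prompt")
a = root.branch("approach A: CRDT merge")
b = root.branch("approach B: lock-based")   # same starting context as A
assert a.context() == ["initial prompt", "approach A: CRDT merge"]
assert b.context() == ["initial prompt", "approach B: lock-based"]
```

Both branches share the exact same context point (`root`), so competing approaches can be explored without losing the original trajectory.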

Adam Lababidi
💎 Pixel perfection

The Divergent Trajectory Search concept is wild - simulating 10,000+ futures to find optimal paths is such a different approach from linear AI coding assistants. The fact that Horizon Mode can fix bugs asynchronously while I sleep is honestly game-changing. Quick question: how does Share Chat handle conflicts when multiple people are editing with different AI contexts? Does the D3 Engine's context compression help merge those different trajectories, or do you surface conflicts to let the team decide?

Santosh Arron

@adam_lab Dropstone uses a customized CRDT (Conflict-free Replicated Data Type) system—similar to Yjs but optimized for AST structures rather than just raw text.

Here is the simple answer for your specific question:

  • Conflict Handling: It broadcasts operations (e.g., "insert node at index X") rather than replacing full files. If a human and the AI edit the same line simultaneously, the engine prioritizes the Human's keystrokes as the "Truth" state to prevent the AI from overwriting your logic.

  • D3 Context Merging: Yes, the D3 Engine actively merges trajectories. Because it uses Logic-Regularized Compression (storing logic gates/variable definitions rather than just tokens), it creates a "Shared Brain." If your teammate’s agent fixes a bug, that "Transition Gradient" is instantly available to your agent without you needing to update the context manually.
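The conflict rule described above (broadcast operations rather than full files; human keystrokes win when a human and an agent hit the same position concurrently) can be sketched as a simple merge over operations. This is a toy illustration of the stated policy, not Dropstone's actual CRDT:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    """An edit broadcast as an operation, not a full-file replace."""
    actor: str   # "human" or "agent"
    index: int   # node/position the edit targets
    text: str

def merge(concurrent: list[Op]) -> list[Op]:
    # If a human and an agent edit the same position concurrently,
    # the human's keystrokes win; agent ops elsewhere still apply.
    human_slots = {op.index for op in concurrent if op.actor == "human"}
    return [op for op in concurrent
            if op.actor == "human" or op.index not in human_slots]

ops = [Op("agent", 3, "ai_fix()"),
       Op("human", 3, "my_fix()"),
       Op("agent", 7, "add docstring")]
merged = merge(ops)
assert Op("human", 3, "my_fix()") in merged      # human edit kept as "Truth"
assert Op("agent", 3, "ai_fix()") not in merged  # conflicting agent edit dropped
assert Op("agent", 7, "add docstring") in merged # non-conflicting agent edit applied
```

A real CRDT (e.g. a Yjs-style sequence type) would handle position shifting and commutativity; the sketch only shows the human-priority tie-break.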
