Launching today

Whisper Internet Infra AI Context
Free MCP for security AI: live BGP, DNS, threat graph
118 followers
Whisper Internet Infrastructure AI Context is an MCP server that plugs into Claude, Cursor, or any LLM in 2 minutes and gives your agent real-time BGP, DNS, WHOIS, and threat-graph context. 46B data points, sub-ms queries, free tier. Founded by ex-RIPE NCC and ICANN engineers.

Hey Product Hunt, Kaveh here, one of the founders at Whisper.
For the past three years, we've been building a graph engine of the internet's infrastructure (46B data points, 39B edges, sub-millisecond queries on real-time BGP and DNS). It started as a research tool for the threat-intel community, but the most interesting consumer turned out to be AI agents.
Today we're shipping it as an MCP server. 2-minute install in Claude Desktop, Claude Code, or Cursor. Free tier, no credit card required.
Why this matters in practice: when your agent has to investigate a domain or an IP, it usually has to call multiple APIs (DNS, BGP, WHOIS, threat feeds) and reason across raw JSON. That burns context. With Whisper, the same answer comes back from one Cypher query. We're seeing meaningful agent-context savings on multi-hop investigations; we'll publish a full benchmark this week.
Three things you can try in your first 5 minutes:
1. Install: https://www.whisper.security/docs/mcp/setup
2. Ask your agent: "Who hosts this domain, who else is on the same prefix, and what changed in the last 24h?" - one round-trip.
3. Run whisper.explain() on any score: you get the full chain of evidence, not a black box.
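To make the "one round-trip" idea in step 2 concrete, here's a rough sketch of the kind of single parameterised Cypher query that prompt could compile down to. The labels and relationship names (Domain, RESOLVES_TO, Prefix, and so on) are illustrative, not our exact schema:

```python
# Illustrative sketch only: the three REST lookups (hosting, prefix
# neighbours, recent changes) folded into one Cypher round trip.
# Schema names below are made up for the example.
def investigation_query(domain, since_hours=24):
    query = """
    MATCH (d:Domain {name: $domain})-[:RESOLVES_TO]->(ip:IP)-[:ANNOUNCED_IN]->(p:Prefix)
    OPTIONAL MATCH (p)<-[:ANNOUNCED_IN]-(:IP)<-[:RESOLVES_TO]-(neighbour:Domain)
    OPTIONAL MATCH (d)-[r]->() WHERE r.first_seen > $since
    RETURN d, ip, p,
           collect(DISTINCT neighbour) AS same_prefix_domains,
           collect(DISTINCT r) AS changes_in_window
    """
    return {"query": query,
            "params": {"domain": domain, "since": f"-{since_hours}h"}}

req = investigation_query("example.com")
```

The agent sends one request and reasons over one result set instead of stitching three JSON payloads together.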
I'm here all day to answer questions. Especially curious what investigations you'd throw at it, and what you want us to add next.
Thanks for taking a look.
46B data points is huge. How do you keep query results consistent when things change in real time?
@charlotte_reed1 A few things going on under the hood.
Every read sees a consistent snapshot, so a single query never trips over a half-applied change.
All edges carry first_seen and last_seen timestamps internally. Nothing is overwritten. When something flips (domain switches MX, IP moves to a new ASN), the old edge gets its last_seen stamped and a new one is appended.
Writers (BGP, DNS, all threat feeds, etc.) are pushing continuously. Reads work against a coherent slice. We traded strict ACID for immutable history plus fast appends. The payoff: an agent can ask "what was true when the incident started" rather than only "what is true right now."
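Here's a toy Python sketch of that append-only model, just to illustrate the mechanics (this is not our engine code, and the field names are made up for the example):

```python
# Toy illustration of append-only temporal edges: nothing is
# overwritten; a change stamps last_seen on the old edge and
# appends a new one, so "as of" queries stay answerable.
class TemporalEdgeStore:
    def __init__(self):
        self.edges = []

    def observe(self, src, rel, dst, ts):
        # Close any open edge for (src, rel) that points elsewhere.
        for e in self.edges:
            if (e["src"] == src and e["rel"] == rel
                    and e["last_seen"] is None and e["dst"] != dst):
                e["last_seen"] = ts
        # Append a fresh edge unless this exact one is already open.
        if not any(e["src"] == src and e["rel"] == rel and e["dst"] == dst
                   and e["last_seen"] is None for e in self.edges):
            self.edges.append({"src": src, "rel": rel, "dst": dst,
                               "first_seen": ts, "last_seen": None})

    def as_of(self, src, rel, ts):
        """What was true at time ts: edges whose interval covers ts."""
        return [e["dst"] for e in self.edges
                if e["src"] == src and e["rel"] == rel
                and e["first_seen"] <= ts
                and (e["last_seen"] is None or ts < e["last_seen"])]

store = TemporalEdgeStore()
store.observe("example.com", "MX", "mail1.example.net", ts=100)
store.observe("example.com", "MX", "mail2.example.net", ts=200)  # MX flips
```

Asking `store.as_of("example.com", "MX", 150)` returns the old mail server, which is exactly the "what was true when the incident started" question.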
Soroush built the fantastic math behind the engine's updates. Happy to introduce you if you want to go deeper on the consistency model.
Curious what the latency looks like when agents run repeated chained queries, not just a single lookup.
@hudson_blake Single lookups sit at sub-millisecond p99 and we've load-tested at 120K queries per second.
For chains specifically, here's the thing: a lot of what looks like a chain in REST world collapses into a single Cypher traversal in our graph. "Find all domains sharing this name server, then their WHOIS owners, then any with threat-feed hits in the last 30 days" is one query for us, not three round trips. So the latency budget you'd normally spend chaining mostly disappears.
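To make that collapse visible, here's the pivot I just described as a single traversal next to its REST equivalent. Again, the labels and relationship names are illustrative, not the real schema:

```python
# The "NS -> domains -> owners -> threat hits" pivot as one Cypher
# traversal. Schema names are illustrative only.
pivot_query = """
MATCH (:NameServer {host: $ns})<-[:USES_NS]-(d:Domain)-[:REGISTERED_BY]->(o:Owner)
OPTIONAL MATCH (d)-[h:LISTED_IN]->(f:ThreatFeed)
WHERE h.first_seen > datetime() - duration('P30D')
RETURN d.name, o.name, collect(f.name) AS feed_hits
"""

def chained_rest_calls():
    # The same pivot over REST: three dependent round trips,
    # each blocking on the previous response.
    return ["GET /ns/{ns}/domains",
            "GET /whois/{domain}",
            "GET /threats/{domain}"]
```

One round trip of network latency instead of three, and the model never sees the intermediate JSON.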
Please hammer it. MCP endpoint is free for launch week and we want to see real agent traffic. If anything looks weird, ping me here or DM Soroush.
What kind of queries break first when you move from demo-style queries to real SOC workflows?
@caleb_hunter_guahip A few things show up in roughly the same order every time.
Temporal joins with multiple axes are first. "Domains using this NS on March 15, that also had abuse reports between March 1 and 30, and a registrar change in that window." Our graph was built for this. Every edge carries first_seen and last_seen internally, and the agent tooling pushes the time predicate to the front of the traversal so the planner has the smallest possible candidate set. Multi-axis time joins stay fast even on tight date ranges.
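A tiny Python stand-in for why pushing the time predicate first helps (field names illustrative): filtering edges to the window before any other joins shrinks the candidate set the expensive predicates have to scan.

```python
# Illustrative only: interval filter applied first, so later joins
# (abuse reports, registrar changes) see a tiny candidate set.
def window(edges, start, end):
    """Edges active at any point in [start, end)."""
    return [e for e in edges
            if e["first_seen"] < end
            and (e["last_seen"] is None or e["last_seen"] > start)]

ns_edges = [
    {"domain": "a.com", "ns": "ns1.bad.net", "first_seen": 10, "last_seen": 20},
    {"domain": "b.com", "ns": "ns1.bad.net", "first_seen": 5,  "last_seen": None},
    {"domain": "c.com", "ns": "ns1.bad.net", "first_seen": 40, "last_seen": None},
]
# Only domains that used this NS during [12, 30) survive the filter.
candidates = {e["domain"] for e in window(ns_edges, 12, 30)}
```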
Next is unbounded fan-out. "Show me everything connected to this IP within 3 hops" looks fine on a clean test domain. On a shared CDN edge or a popular nameserver it returns millions of edges and dies. We built the engine ourselves, so we can detect huge fan-outs and do some fairly smart handling of them in almost all cases, but I wouldn't call it solved yet. That path needs more real SOC traffic before I trust it under everything.
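One defensive strategy for this, sketched in Python (illustrative, not our engine's actual handling): a breadth-first expansion that caps per-node degree and total result size, so a hub node gets truncated instead of exploding the query.

```python
from collections import deque

# Illustrative sketch: bounded BFS expansion. Hub nodes (shared CDN
# edges, popular nameservers) are truncated at per_node_cap instead
# of fanning out into millions of edges.
def bounded_expand(graph, start, max_hops=3, per_node_cap=50, total_cap=10_000):
    """graph: dict mapping node -> list of neighbours."""
    seen = {start}
    frontier = deque([(start, 0)])
    truncated = False
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        neighbours = graph.get(node, [])
        if len(neighbours) > per_node_cap:
            neighbours = neighbours[:per_node_cap]  # cap the hub
            truncated = True
        for n in neighbours:
            if n not in seen:
                if len(seen) >= total_cap:
                    return seen, True
                seen.add(n)
                frontier.append((n, depth + 1))
    return seen, truncated

# A hub with 1000 neighbours gets cut to 50 instead of blowing up.
hub_graph = {"ip": [f"d{i}" for i in range(1000)]}
nodes, truncated = bounded_expand(hub_graph, "ip")
```

The caller gets a `truncated` flag back, so the agent knows the picture is partial rather than silently wrong.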
The one we hear about most from analysts is fusion with internal data: alerts, EDR telemetry, etc. That data isn't in our graph, and shouldn't be. A few teams and SOCs are running us now, and every one wires it up differently, which is the honest reason there's no single answer. We're not going to ingest your internal logs, so the join has to happen client-side, and that's where workflows get unpredictable.
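The shape of that client-side join, sketched in Python (everything here is illustrative, including the field names; the graph call is a stand-in for the real MCP request):

```python
# Illustrative sketch: internal alerts never leave the SOC; only the
# indicator is sent to the graph, and enrichment is merged back
# client-side.
def enrich_alerts(alerts, query_graph):
    """alerts: dicts with an 'indicator' key.
    query_graph: callable indicator -> context dict (the MCP call)."""
    enriched = []
    for alert in alerts:
        ctx = query_graph(alert["indicator"])  # only the IOC goes out
        enriched.append({**alert, "infra_context": ctx})
    return enriched

def fake_graph(ioc):
    # Stand-in for the real graph lookup.
    return {"asn": 64500, "threat_hits": 2} if ioc == "198.51.100.7" else {}

alerts = [{"id": "A1", "indicator": "198.51.100.7"},
          {"id": "A2", "indicator": "203.0.113.9"}]
result = enrich_alerts(alerts, fake_graph)
```

Every team wires that loop differently (SOAR playbook, notebook, custom agent tool), which is exactly where the variability comes from.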
How do you avoid the graph becoming outdated given how fast internet routing and infra changes?
@easton_grant That's exactly the challenge we set out to tackle, and it's our secret sauce. My co-founder Soroush has two PhDs in mathematics and has spent his adult life studying information dissipation in large networks. We ingest in real time, sure, but the harder part is that we also push each update through to every affected node the moment it lands. If an ASN gets hijacked on BGP, every domain served by it is flagged at the same instant. Not on the next crawl. Not at end-of-day. Right then.
Real-time ingestion is the easier half. Knowing which downstream nodes are now suspect because one upstream signal flipped is what's actually hard, and that's what we've built.
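The core idea, in a toy Python sketch (illustrative only, not our propagation math): when an upstream signal flips, walk the dependency edges and flag every downstream node immediately, rather than waiting for a re-crawl to notice.

```python
from collections import deque

# Illustrative sketch of push-based invalidation: a flag on one
# upstream node (e.g. a hijacked ASN) is propagated to everything
# downstream the moment the signal lands.
def propagate_flag(depends_on_me, source, flag):
    """depends_on_me: dict mapping node -> nodes that depend on it."""
    flags = {}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node in flags:
            continue  # already flagged; avoids cycles
        flags[node] = flag
        queue.extend(depends_on_me.get(node, []))
    return flags

# ASN -> announced prefix -> domains served from it.
deps = {"AS64500": ["192.0.2.0/24"],
        "192.0.2.0/24": ["example.com", "example.org"]}
flags = propagate_flag(deps, "AS64500", "bgp_hijack_suspect")
```

The real problem is doing this at 46B-node scale without cascading forever, which is where the information-dissipation math comes in.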
I wonder how much context savings you actually see in long multi-step investigations inside Cursor or Claude.
@dylan_hayes2 Honestly it's huge.
You're not wiring up 20 named REST tools, so your agent doesn't burn turns figuring out which endpoint to call next. Cypher is one endpoint and your LLM already speaks it. And you're not teaching the model what a Domain or an IP or an ASN or an MX record is, because it already knows. Context savings on both the tooling surface and the vocabulary.
In Claude (or any LLM for that matter) that means a five-hop investigation that would normally need 10+ REST calls plus a wall of intermediate JSON becomes one Cypher query and a clean result set. The model spends its context budget reasoning instead of bookkeeping.
Try it on a real investigation. You'll feel the difference within the first pivot.