Launched this week

Web Speed
Kill the 'Token Tax.' 90% cheaper agents.
126 followers
Web Speed is the logic layer for web agents: it translates any website into high-fidelity, token-efficient machine maps. Its deterministic mapping engine can save agents 70%-90% on token costs when navigating the web, while running faster and more reliably.
The 'token tax' framing is spot-on. DOM-to-JSON conversion sounds straightforward but the devil is in how you handle dynamic content, SPAs with lazy-loaded sections, and sites that actively block automated access. How does Web Speed deal with pages that render heavily client-side, where the initial DOM is basically empty? That's usually where these mapping layers fall apart.
Web Speed
@christian_knaut That would be the SDK.
Web Speed
@christian_knaut Hi there, here are a few ways we deal with the issues you described.
1. Handling Client-Side Rendering (CSR) & SPAs
Web Speed doesn't just scrape raw HTML. When you use interpret_page(js=true) or evaluate(), it spins up a full Playwright-driven browser engine.
- Hydration Wait: it executes the site's JavaScript, waits for the application to mount, and only then performs the mapping.
- State Awareness: tools like wait_for_element and wait_for_url let the agent pause until the client-side router has finished loading the specific view (see the sketch after this list).
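For concreteness, here is a minimal hydration-wait sketch in plain Playwright (Python). The URL and selector are placeholders made up for illustration; this shows the general technique, not Web Speed's internal code.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # A CSR app's initial HTML is nearly empty, so let the JS bundle run first
    page.goto("https://example.com/app", wait_until="networkidle")  # placeholder URL
    # Hydration wait: block until the client-side framework has mounted the view
    page.wait_for_selector("#app .dashboard", state="visible")  # placeholder selector
    # Only now is the DOM worth mapping
    html = page.content()
    browser.close()
```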
2. Bypassing Bot Detection
Standard scraping libraries often fail because they use "clean" environments. Web Speed lets the agent attach to your real browser (via CDP):
- Real Fingerprints: it inherits your active sessions, cookies, and hardware fingerprint.
- Human-Like Interaction: fill_field(use_keyboard=true) simulates actual keystrokes rather than just setting a .value, which bypasses many "trusted input" checks used by modern anti-bot layers (like those on X or Amazon); see the sketch after this list.
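To illustrate the CDP-attach and real-keystroke ideas, here is a hedged Playwright (Python) sketch; the debugging port and selector are assumptions for the example, and this is not Web Speed's fill_field implementation.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Attach to a real Chrome started with --remote-debugging-port=9222,
    # inheriting its live sessions, cookies, and hardware fingerprint
    browser = p.chromium.connect_over_cdp("http://localhost:9222")
    page = browser.contexts[0].pages[0]
    page.locator("input[name='q']").click()  # placeholder selector
    # Dispatch real keydown/keyup events instead of setting .value directly,
    # which is what "trusted input" checks inspect
    page.keyboard.type("token-efficient web agents", delay=80)
    page.keyboard.press("Enter")
```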
3. Lazy-Loading & Dynamic Sections
For infinite-scroll or lazy-loaded content, Web Speed uses the Agent Verification Loop (sketched below):
- The agent can use evaluate() to scroll the page or trigger custom events (dispatch('scroll')).
- It then re-calls read_page to capture the newly injected nodes, ensuring the "map" stays current with the dynamic state of the application.
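Here is a generic Python sketch of that scroll-then-re-read loop using plain Playwright; capture_lazy_content is a hypothetical helper for illustration, not part of Web Speed's SDK.

```python
from playwright.sync_api import Page

def capture_lazy_content(page: Page, max_rounds: int = 10) -> str:
    """Hypothetical helper: scroll until the page stops growing, then snapshot."""
    last_height = 0
    for _ in range(max_rounds):
        # Scroll to the bottom to trigger lazy loaders / infinite scroll
        page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
        page.wait_for_timeout(500)  # give injected content time to render
        height = page.evaluate("document.body.scrollHeight")
        if height == last_height:
            break  # no new nodes appeared; the map is stable
        last_height = height
    # Re-read the DOM so the map reflects the newly injected nodes
    return page.content()
```

The stop condition (page height no longer growing) is one simple stand-in for the verification step; a real agent would re-map the page and diff the nodes instead.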
Please let me know if you have any other questions.
@dominic_pi_dunyer, thank you for your very detailed and helpful reply. I really appreciate it.
Do you have any testing benchmarks behind the 70%-90% cost reductions?
Web Speed
@zabbar Yep, we have run many tests and the anonymized results are on our website under the 'Benchmarks' page. Hope this helps.
@dominic_pi_dunyer The 'token tax' framing is accurate — web agents today waste most of their context window just parsing messy HTML. Curious how Web Speed handles heavily client-side rendered pages where the initial DOM is almost empty. That's usually where mapping layers like this break down first.
The deterministic mapping engine is the interesting bet here; most browser agents fail because the DOM changes and they lose their place. Does Web Speed handle dynamic content like infinite scroll or modals that load after the initial page render, or is it optimized for static page structures?
Token costs are the silent killer for anyone building anything agent-based. A 90% reduction is a bold claim, but if it holds up even partially, it changes the economics completely. I want to understand what the actual tradeoff is: latency? Context length?
Interesting infrastructure direction. Most web agents spend a surprising amount of effort dealing with inconsistent page structure instead of actual reasoning. Does the extraction layer stay reliable across highly dynamic or JS-heavy websites?