SpeedPower.run

The definitive benchmark for modern AI-based web apps

Tired of one-tab browser tests? Traditional benchmarks are outdated. SpeedPower.run is the first zero-install tool that stress-tests your device's Maximum Concurrent Compute by saturating all CPU and GPU cores simultaneously. It runs seven parallel, weighted benchmarks: JavaScript, Web Worker Exchange, and five distinct AI inference models (2 from TensorFlow.js and 3 from Transformers). See if your rig is ready for the AI-driven web.

Gilbert Cabillic

Hello Product Hunters,

We just launched SpeedPower.run, a new benchmark we built specifically to address the biggest frustration with current tools: the "Single-Task Fallacy."

You know how modern web apps are a traffic jam of concurrent processes: running a local LLM, juggling a huge JSON payload, and keeping the UI smooth, all at once? Most benchmarks only test one of those things at a time, which tells you very little about a device's real-world ability to handle Task Saturation.

SpeedPower.run is engineered to find the absolute maximum potential of a browser/device by pushing the CPU and GPU to their limit. Our score is based on seven concurrent, parallel tests:

  • JavaScript: Heavy multi-core processing.

  • Exchange: Measures the critical communication bottleneck between the main thread and Web Workers.

  • Five AI Inference Models: Transformers LLM & Speech, TensorFlow's Recognition, and Classification models from both Transformers and TensorFlow, all running simultaneously to fully saturate WebGPU/compute shaders (see the sketch just below).
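
To make "concurrent" concrete, here is a rough sketch of the launch pattern. It is simplified, with placeholder worker file names and a made-up start message rather than our actual code:

```typescript
// Simplified sketch: each test lives in its own Web Worker so they all run
// at the same time. The file names below are placeholders, not real files.
const testWorkers = [
  "javascript-test.js",
  "exchange-test.js",
  "llm-inference.js",
  "speech-inference.js",
  "recognition.js",
  "classification-transformers.js",
  "classification-tensorflow.js",
];

async function runConcurrentSuite(): Promise<number[]> {
  const runs = testWorkers.map(
    (file) =>
      new Promise<number>((resolve, reject) => {
        const worker = new Worker(file);
        worker.onmessage = (e: MessageEvent<number>) => {
          resolve(e.data); // each worker posts back its sub-score when done
          worker.terminate();
        };
        worker.onerror = (err) => reject(err);
        worker.postMessage({ cmd: "start" }); // illustrative start message
      })
  );
  // Promise.all keeps every test in flight simultaneously (task saturation),
  // instead of awaiting them one after another like a sequential benchmark.
  return Promise.all(runs);
}
```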

The entire test runs locally (zero network interference after the initial load), and the scoring algorithm is now frozen for fair comparisons.

I'm really keen to see what you get! Check your device's Max Concurrent Compute at SpeedPower.run and let me know your score!

Best,

Gilbert


Long Nghiem

Hey Product Hunt! Team member here.

Working on SpeedPower.run has been an eye-opener for all of us. When we started, we realized that our own dev machines were scoring 'perfectly' on traditional benchmarks, yet they’d still stutter when we actually tried to run a local LLM alongside our data-heavy dashboards.

That disconnect is what we wanted to fix. We didn't just want another 'Engine Speed' test; we wanted to build something that measures System Orchestration.

My favorite part of the tool is the 'Exchange' score. It’s the first time I've been able to quantify exactly how much the overhead of moving data between the main thread and workers actually costs in terms of performance. It turns out, that 'silent' cost is often why an app feels laggy even when the GPU is top-tier.
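
For anyone who wants to see what that kind of measurement looks like in principle, here is a simplified sketch. The echo worker and payload size are illustrative, not our actual test:

```typescript
// Simplified sketch: time a payload's round trip between the main thread and
// a worker. "echo.worker.js" just posts back whatever it receives (illustrative).
function measureExchange(worker: Worker, payload: ArrayBuffer): Promise<number> {
  return new Promise<number>((resolve) => {
    const start = performance.now();
    // Without a transfer list the buffer is structured-cloned both ways,
    // which is exactly the "silent" copy cost described above.
    worker.onmessage = () => resolve(performance.now() - start); // round trip in ms
    worker.postMessage(payload);
  });
}

const echo = new Worker("echo.worker.js");
measureExchange(echo, new ArrayBuffer(32 * 1024 * 1024)) // 32 MB test payload
  .then((ms) => console.log(`main <-> worker round trip: ${ms.toFixed(1)} ms`));
```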

I’m hanging out in the comments all day with Gilbert to answer the nitty-gritty technical questions—whether it’s about our Transformers.js implementation, how we use the Geometric Mean for scoring, or how we saturation-test the browser scheduler.

Give it a spin and tell us: What’s the biggest bottleneck you found on your machine?

Chris Mayhew

I’ve always felt that traditional benchmarks like Speedometer or JetStream were giving me an 'idealized' version of performance that just doesn't hold up in production. Most tools show you how fast a script runs in a vacuum, but they completely ignore what happens when you actually saturate the device.

Really love the focus on the 'Exchange' score here—it’s that silent handoff between the main thread and workers that usually kills the UX, but no one seems to measure it properly. Seeing it measured alongside Transformers.js and WebGPU tasks makes this feel like a benchmark built for 2026, not 2015.

Also, huge props for keeping this as a Zero Network Interference test; it’s great to know my ISP speed isn't bloating my results.

Definitely sharing this with my frontend team. Great launch, guys!

Gilbert Cabillic

@chrismayhew Exactly! You've hit on our core mission. We believe that measuring a browser engine in a vacuum is like testing a car's top speed on a treadmill; it tells you nothing about how it handles a real-world commute.

Regarding the 'Exchange' score: I'm glad that resonated with you. We weighted it heavily because, in our experience, that 'silent' data handoff between threads is exactly where the user experience breaks down. If your data pre-processing or thread communication is slow, your AI inference will starve regardless of how powerful your GPU is.

A few things we’re particularly proud of:

  • Geometric Mean Scoring: We use this specifically so that a high score in one area can't 'hide' a failure in another. It treats the system as a chain that is only as strong as its weakest link (small illustration after this list).

  • Zero Network Interference: By pre-loading all ~350MB of AI models before the timer starts, we ensure the results are a pure reflection of your device's local compute power.

  • Saturation vs. Speed: We want to reward systems that can manage 'Task Saturation', handling heat and scheduling across every available core simultaneously.
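
Here is a tiny illustration of why the geometric mean behaves that way. The sub-scores and weights below are made up for the comparison, not our real weighting:

```typescript
// Made-up sub-scores and equal weights, purely to show the behavior.
function geometricMean(scores: number[], weights: number[]): number {
  const totalWeight = weights.reduce((a, b) => a + b, 0);
  const weightedLogSum = scores.reduce(
    (sum, score, i) => sum + weights[i] * Math.log(score),
    0
  );
  return Math.exp(weightedLogSum / totalWeight);
}

console.log(geometricMean([100, 100, 100], [1, 1, 1])); // 100: balanced system
console.log(geometricMean([180, 180, 10], [1, 1, 1]));  // ~68.7: the weak test drags the score down
// An arithmetic mean would give the second system ~123 and hide the weak link.
```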

Love that you’re sharing this with your frontend team. If they find any interesting bottlenecks on their specific hardware, we’d love to hear about it!