GPU Per Hour - Compare GPU cloud prices across 30+ providers in real-time

by Ed
The same GPU can cost up to 63x more depending on where you rent it. GPU Per Hour tracks real-time pricing across 30+ providers including RunPod, Vast.ai, Lambda Labs, TensorDock, and CoreWeave.

→ Compare prices instantly across all major GPU clouds
→ Filter by GPU type, VRAM, and hourly rate
→ See actual availability, not just listed inventory
→ Updated daily with live pricing data

Built for ML engineers, researchers, and indie hackers who want the best GPU for their budget.

Replies
Ed
Maker

Hey Product Hunt! 👋

I'm Ed, a backend engineer who got tired of manually comparing GPU prices across 20+ browser tabs. So I built the tool I needed.

GPU Per Hour aggregates real-time pricing from 30+ cloud providers (AWS, RunPod, Lambda Labs, Vast.ai, and more) so you can find the cheapest GPU for your ML workloads in seconds.

The problem: GPU cloud pricing is a mess. Providers bury their rates, use different billing models, and price identical hardware wildly differently.

Some findings that blew my mind:

  • Same V100: $0.05/hr vs $3.06/hr (63x spread)

  • H100s range from $0.80 to $5.95/hr

  • AWS charges 10-16x more than budget providers for equivalent specs

What you can do:

  • Compare 1,800+ GPU configurations side-by-side

  • Filter by GPU model, VRAM, region, and spot vs on-demand

  • Find the best price instantly instead of tab-hopping
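For the curious, the core of that comparison boils down to filtering offers by spec and sorting by hourly rate. Here's a toy Python sketch of the idea — the provider names and prices below are made-up sample data, not real quotes from the site:

```python
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str        # cloud provider name
    gpu: str             # GPU model, e.g. "H100"
    vram_gb: int         # VRAM in GB
    price_per_hour: float  # on-demand USD/hr

# Hypothetical sample data for illustration only.
offers = [
    GpuOffer("ProviderA", "H100", 80, 5.95),
    GpuOffer("ProviderB", "H100", 80, 0.80),
    GpuOffer("ProviderC", "A100", 40, 1.10),
]

def cheapest(offers, gpu=None, min_vram=0):
    """Filter by GPU model and minimum VRAM, then sort by hourly price."""
    matches = [o for o in offers
               if (gpu is None or o.gpu == gpu) and o.vram_gb >= min_vram]
    return sorted(matches, key=lambda o: o.price_per_hour)

best = cheapest(offers, gpu="H100", min_vram=80)[0]
print(f"{best.provider}: ${best.price_per_hour:.2f}/hr")  # ProviderB: $0.80/hr
```

The real site layers region, spot-vs-on-demand, and live availability on top of this, but the filter-then-sort shape is the same.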

Built for ML engineers, researchers, and anyone tired of overpaying for compute.

Would love your feedback—what features or providers should I add next? I'll be here all day answering questions.

👉 Try it: https://gpuperhour.com

Thanks for checking it out! 🙏

mostafa kh
💡 Bright idea

this is super useful, price difference for the same gpu is insane. exactly the kind of tool i'd use before spinning up any instance.

quick suggestion: a filter by price range would be really helpful

great work!

Ed
Maker

@topfuelauto Thank you for the feedback! Price range filter is a good idea, adding it to the roadmap!

Easy Tools Dev

The 63x price spread for the same V100 is absolutely wild - that kind of opacity in GPU pricing has been a pain point forever. I've definitely overpaid on AWS out of convenience when cheaper options existed. Aggregating 1,800+ configurations with real-time availability (not just listed inventory) is the key differentiator here. Quick question: does the pricing update reflect spot instance availability volatility, or is it more focused on stable on-demand pricing across providers?