Rately

Take control of your API traffic with custom rate limits.

5.0 · 1 review · 133 followers

Enterprise-grade rate limiting service built on Cloudflare. Define rate limits by user ID, API key, or any custom parameter. Drop-in integration with ~25ms latency.
Free
Launch tags: API, SaaS, Developer Tools

Hakan
Maker
Hey everyone 👋 Excited (and a bit nervous) to share Rately with you today. The idea came from a very specific pain: it was surprisingly hard to build custom rate limiting — like “limit by user ID, API key, or some custom field” — without hacking together a ton of messy code or running into performance issues. So we built Rately to make that part simple:
• Define limits on any parameter you need (user ID, API key, etc.)
• Built on Cloudflare → fast (~25ms latency)
• Easy drop-in integration, designed for SaaS APIs & services
If you’ve ever fought with rate limiting logic, you’ll know how annoying it can get. I’d love to hear how you’re solving it today and whether Rately could make your life easier. Thanks for checking it out 🙏
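For anyone wondering what the drop-in part could look like in code, here's a minimal sketch as Express middleware. To be clear, `RatelyClient`, `check()`, and the option fields are placeholder names for illustration, not the actual Rately SDK:

```ts
// Sketch of "drop-in" rate limiting in an Express API.
// RatelyClient and check() are assumed names, not the real SDK surface.
import express from "express";

class RatelyClient {
  constructor(private apiKey: string) {}

  // Stand-in for a call to the rate-limiting edge service.
  async check(opts: { key: string; limit: number; windowSeconds: number }) {
    return { allowed: true, remaining: opts.limit - 1 };
  }
}

const rately = new RatelyClient(process.env.RATELY_API_KEY ?? "");
const app = express();

app.use(async (req, res, next) => {
  // Limit by any parameter you choose; here, a per-user key.
  const userId = req.header("x-user-id") ?? req.ip ?? "anonymous";
  const { allowed } = await rately.check({
    key: `user:${userId}`,
    limit: 100,        // 100 requests...
    windowSeconds: 60, // ...per 60-second window
  });
  if (!allowed) {
    res.status(429).send("Rate limit exceeded");
    return;
  }
  next();
});

app.listen(3000);
```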
Vladimir Lugovsky

@hkan Looked at the different ways of identifying a user - looks really flexible. Good job!

Roozbeh Firoozmand

Finally! This is exactly what we dealt with recently at Clevera. Does it include analytics, or just throttling?

Hakan

@roozbehfirouz Hey! 👋 Yep, it includes both. You can use Rately for throttling (pretty flexible in how you want to do it), and it also gives you analytics: request counts and rate-limit hit counts. So you get both control and visibility in one place.
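To sketch how that control-plus-visibility might surface to an API consumer: the example below reads conventional `X-RateLimit-*` / `Retry-After` headers and backs off on 429. These header names follow a common pattern and are an assumption, not documented Rately behavior:

```ts
// Hypothetical client-side view of throttling: log quota headers, back off
// on 429. Header names are a common convention, not confirmed for Rately.
async function callApi(url: string, userId: string): Promise<Response> {
  const res = await fetch(url, { headers: { "x-user-id": userId } });

  // Visibility: how much of the quota does this caller have left?
  const limit = res.headers.get("X-RateLimit-Limit");
  const remaining = res.headers.get("X-RateLimit-Remaining");
  console.log(`status=${res.status} limit=${limit} remaining=${remaining}`);

  if (res.status === 429) {
    // Control: respect the advertised backoff window before retrying once.
    const retryAfterSec = Number(res.headers.get("Retry-After") ?? "1");
    await new Promise((resolve) => setTimeout(resolve, retryAfterSec * 1000));
    return fetch(url, { headers: { "x-user-id": userId } });
  }
  return res;
}
```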

Germán Merlo

Hey Hakan! Heheh, it's exactly what I'm dealing with, since I'm planning to launch an API and was thinking about the best way to set rate limits. I assume it's going to work for me!

Hakan

@german_merlo1 Yep, Rately should fit perfectly for that — you can set limits per endpoint or per user, test different rate configs, and see how it all behaves in real time. Makes launching an API a lot less stressful 🚀
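For a concrete picture of per-endpoint and per-user limits, here's a rough sketch of what such rules could look like. The `RateRule` shape and the matching logic are illustrative only, not Rately's actual config format:

```ts
// Illustrative shape for per-endpoint / per-tier rate rules; the real
// configuration format may look different.
type RateRule = {
  match: { endpoint?: string; userTier?: string };
  limit: number;         // max requests allowed per window
  windowSeconds: number; // window length in seconds
};

const rules: RateRule[] = [
  { match: { endpoint: "/search" }, limit: 30, windowSeconds: 60 },
  { match: { endpoint: "/export", userTier: "free" }, limit: 2, windowSeconds: 3600 },
  { match: { userTier: "pro" }, limit: 1000, windowSeconds: 60 },
];

// First-match-wins lookup (simplified matching for the sketch).
function ruleFor(endpoint: string, userTier: string): RateRule | undefined {
  return rules.find(
    (r) =>
      (r.match.endpoint ?? endpoint) === endpoint &&
      (r.match.userTier ?? userTier) === userTier
  );
}

console.log(ruleFor("/export", "free")); // -> the 2-per-hour /export rule
```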

Ahmad Bilal

Looks cool. What does the average latency look like?

Hakan

@ahmadbilaldev It's about 25ms on average. The best part is that it's distributed: the rate limit check happens at the Cloudflare edge closest to your client, before the request ever reaches your servers.

Ahmad Bilal

@hkan Is it possible for you to share your average P50 and P90? I have seen random spikes of latency with other similar distributed services.

Hakan

@ahmadbilaldev Hey! 👋 Sure — just checked for you. Right now, the averages over the last 15 minutes are: P50: 24.05 ms and P90: 32.96 ms. Happy to share more if you’re curious! 🚀

Ahmad Bilal

@hkan Impressive.

Jorge Ferreiro

super cool, congrats!