Enterprise-grade rate limiting service built on Cloudflare. Define rate limits by user ID, API key, or any custom parameter. Drop-in integration with ~25ms latency.
Hey everyone 👋
Excited (and a bit nervous) to share Rately with you today. The idea came from a very specific pain: it was surprisingly hard to build custom rate limiting — like “limit by user ID, API key, or some custom field” — without hacking together a ton of messy code or running into performance issues.
So we built Rately to make that part simple:
• Define limits on any parameter you need (user ID, API key, etc.)
• Built on Cloudflare → fast (~25ms latency)
• Easy drop-in integration, designed for SaaS APIs & services
If you’ve ever fought with rate limiting logic, you’ll know how annoying it can get. I’d love to hear how you’re solving it today and if Rately could make your life easier.
Thanks for checking it out 🙏
Rately
UI Bakery
@hkan looked at the different possibilities of identifying a user - looks really flexible. Good job!
Sellkit
Finally! This was what we dealt with recently with Clevera. Does it include analytics or just throttling?
Rately
@roozbehfirouz Hey! 👋 Yep, it includes both. You can use Rately for throttling (it's pretty flexible on how you want to do it), and it also shows analytics, like request counts and rate-limit hits. So you get both control and visibility in one place.
Hey Hakan! Heheh, it's exactly what I'm dealing with, because I'm about to launch an API and was wondering about the best way to set rates. I assume it's going to work for me!
Rately
@german_merlo1 Yep, Rately should fit perfectly for that — you can set limits per endpoint or per user, test different rate configs, and see how it all behaves in real time. Makes launching an API a lot less stressful 🚀
LangUI
Looks cool. What does the average latency look like?
Rately
@ahmadbilaldev It's about 20ms on average. The best part is that it's distributed: the rate limit is enforced at the Cloudflare edge closest to your client, before the request ever reaches your servers.
LangUI
@hkan Is it possible for you to share your average P50 and P90? I have seen random spikes of latency with other similar distributed services.
Rately
@ahmadbilaldev Hey! 👋 Sure — just checked for you. Right now, the averages over the last 15 minutes are: P50: 24.05 ms and P90: 32.96 ms. Happy to share more if you’re curious! 🚀
LangUI
@hkan Impressive.
SMASHSEND
super cool, congrats!