The legitimate users of my site occasionally hammer the server with API requests that cause undesirable results. I want to institute a limit of, say, no more than one API call every 5 seconds or n calls per minute (I haven't figured out the exact limit yet). I could obviously log every API call in a DB and do the calculation on every request to see if they're over the limit, but all this extra overhead on EVERY request would be defeating the purpose. What are other, less resource-intensive methods I could use to institute a limit? I'm using PHP/Apache/Linux, for what it's worth.

6 Answers

OK, there's no way to do what I asked without any writes to the server, but I can at least eliminate logging every single request. One way is the "leaky bucket" throttling method, which keeps track of only the last request ($last_api_request) and a ratio of requests made to the limit for the time frame ($minute_throttle). The leaky bucket never resets its counter (unlike the Twitter API's throttle, which resets every hour); instead, if the bucket becomes full (the user has reached the limit), they must wait n seconds for the bucket to drain a little before they can make another request. In other words, it's a rolling limit: previous requests within the time frame slowly leak out of the bucket, and it only restricts you if you fill the bucket.

The snippet below calculates a new $minute_throttle value on every request. I specified the minute in $minute_throttle because you can add throttles for any time period, such as hourly or daily, although more than one will quickly get confusing for your users.
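A minimal sketch of that calculation, assuming hypothetical get_*/save_* helpers backed by whatever persistence you already have (a DB row, a file, or a cache), and a $user_id identifying the caller:

    <?php
    // Leaky-bucket throttle: persists just two values per user instead of
    // a log of every request. The get_/save_ helpers are hypothetical
    // placeholders for your own storage (DB row, file, or cache).
    $minute       = 60;   // size of the time frame in seconds
    $minute_limit = 100;  // max requests allowed per minute

    $last_api_request = get_last_api_request($user_id);  // unix timestamp of last call
    $minute_throttle  = get_last_minute_throttle($user_id);

    if ($minute_throttle === null) {
        $new_minute_throttle = 0; // first request: empty bucket
    } else {
        $last_api_diff = time() - $last_api_request;
        // Leak: the time elapsed since the last call drains the bucket...
        $new_minute_throttle = max(0, $minute_throttle - $last_api_diff);
        // ...and this request pours its share back in.
        $new_minute_throttle += $minute / $minute_limit;
    }

    if ($new_minute_throttle > $minute) {
        // Bucket is full: reject and tell the client how long to wait.
        $wait = ceil($new_minute_throttle - $minute);
        header('Retry-After: ' . $wait);
        http_response_code(429);
        exit("Minute limit of {$minute_limit} exceeded; wait {$wait} seconds.");
    }

    // Request allowed: two small writes and we're done.
    save_last_api_request($user_id, time());
    save_last_minute_throttle($user_id, $new_minute_throttle);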

You can control the rate with the token bucket algorithm, which is comparable to the leaky bucket algorithm. Note that you will have to share the state of the bucket (i.e. the number of tokens) across processes (or whatever scope you want to control), so you might want to think about locking to avoid race conditions.
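As a rough sketch, a file-backed token bucket that uses flock() to serialize concurrent PHP processes; the path, rate, and capacity here are illustrative:

    <?php
    // Token bucket: tokens refill at $rate per second up to $capacity and
    // each request spends one. flock() guards the shared state against
    // concurrent PHP processes.
    function consume_token($path, $rate = 0.2, $capacity = 5.0)
    {
        $fp = fopen($path, 'c+'); // create if missing, don't truncate
        if ($fp === false || !flock($fp, LOCK_EX)) {
            return false;         // fail closed if we can't lock
        }

        $raw   = stream_get_contents($fp);
        $state = $raw ? json_decode($raw, true) : null;
        $now   = microtime(true);

        // Refill for the time elapsed, capped at the bucket's capacity.
        $tokens = $state === null
            ? $capacity
            : min($capacity, $state['tokens'] + ($now - $state['ts']) * $rate);

        $allowed = $tokens >= 1.0;
        if ($allowed) {
            $tokens -= 1.0; // spend one token on this request
        }

        ftruncate($fp, 0);
        rewind($fp);
        fwrite($fp, json_encode(['tokens' => $tokens, 'ts' => $now]));
        flock($fp, LOCK_UN);
        fclose($fp);

        return $allowed;
    }

    // Usage: 0.2 tokens/sec is roughly one call every 5 seconds,
    // with a burst allowance of 5.
    if (!consume_token("/tmp/bucket_{$user_id}.json")) {
        http_response_code(429);
        exit('Too many requests.');
    }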

I don't know if this thread is still alive or not, but I would suggest keeping these statistics in a memory cache like memcached. This will reduce the overhead of logging requests to the DB but still serve the purpose.
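For instance, a fixed-window counter with the pecl Memcached extension (the key scheme and limit are illustrative); increment() is atomic, so concurrent requests can't under-count:

    <?php
    // One counter key per user per minute; memcached expires it for us.
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);

    $limit = 100; // requests per minute
    $key   = 'rate:' . $user_id . ':' . floor(time() / 60);

    $count = $mc->increment($key);
    if ($count === false) {
        // Key doesn't exist yet; create it, or increment the one another
        // process just created if we lose that race.
        $count = $mc->add($key, 1, 120) ? 1 : $mc->increment($key);
    }

    if ($count > $limit) {
        http_response_code(429);
        exit('Rate limit exceeded; try again next minute.');
    }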

I agree completely, and we implement it this way as well, since it's also atomic. You could use something like AWS ElastiCache to store the counters and then have a cron job log the aggregated results into a database afterward. We actually have a small memcached instance per server to do the incrementing and then flush/increment to ElastiCache once a minute (sketched below); that way you don't move the bottleneck to ElastiCache either.
– Ross Apr 22 '13 at 19:08
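A rough sketch of that flush step (the cluster endpoint and the get_tracked_keys() registry are hypothetical):

    <?php
    // Hypothetical cron job, run once a minute: drain this server's local
    // counters into the shared ElastiCache node, keeping only the
    // unflushed remainder locally.
    $local = new Memcached();
    $local->addServer('127.0.0.1', 11211);
    $shared = new Memcached();
    $shared->addServer('my-cluster.cache.amazonaws.com', 11211); // illustrative

    foreach (get_tracked_keys() as $key) { // hypothetical key registry
        $delta = (int) $local->get($key);
        if ($delta > 0) {
            if ($shared->increment($key, $delta) === false) {
                $shared->add($key, $delta, 3600);
            }
            $local->decrement($key, $delta);
        }
    }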

@Kedar you can still log all the calls to a file for different kinds of analysis, which would not bother your DB; just queue the writes in the disk buffer.
– kommradHomer Feb 21 '14 at 16:10

Would Redis be a better solution? It's in RAM but also non-volatile.
– BeardedGeek Apr 19 '15 at 9:29

You say that "all this extra overhead on EVERY request would be defeating the purpose," but I'm not sure that's correct. Isn't the purpose to prevent the hammering of your server? This is probably the way I would implement it, as it really only requires a quick read/write. You could even farm out the API checks to a different DB/disk if you were worried about performance.
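That quick read/write can be as small as one row per user, something like this (the table and column names are made up, and $pdo is assumed to be an existing PDO connection):

    <?php
    // Enforce "no more than one call every N seconds" with a single
    // SELECT and a single UPDATE against one row per user.
    $min_interval = 5; // seconds required between calls

    $stmt = $pdo->prepare('SELECT last_request FROM api_users WHERE user_id = ?');
    $stmt->execute([$user_id]);
    $last = (int) $stmt->fetchColumn();

    if (time() - $last < $min_interval) {
        http_response_code(429);
        exit('Please wait before making another request.');
    }

    $pdo->prepare('UPDATE api_users SET last_request = ? WHERE user_id = ?')
        ->execute([time(), $user_id]);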

However, if you want alternatives, you should check out mod_cband, a third-party Apache module designed to assist with bandwidth throttling. Although it's primarily for bandwidth limiting, it can also throttle based on requests per second. I've never used it, so I'm not sure what results you'd get. There was another module called mod-throttle as well, but that project appears to be dead now and was never released for anything above the Apache 1.3 series.

Yeah, I'll probably have to save something to disk... preferably not log every single request, though. I could just save the time of the last successful API request and make sure the next one comes at least n seconds later.
– scotts Sep 4 '09 at 21:17