Sizing up against Amazon S3

Amazon just posted an interesting article about their object growth — upwards of 762 billion objects.

What I found more interesting is that they called out their peak QPS at about 500,000. At AppNexus we don’t store a similar set of objects, but we do handle some massive QPS numbers. I decided to size ourselves up, and here are a few things worth calling out:

Every request we process is for *dynamic* content, not *static*. Every time someone calls us we make a dynamic decision about what bid to place and what ad to serve; we also read/write a cookie, and generally read from or write to an internal key-value store. So each of our queries is a lot more expensive.

Every request we process also results in logs that are pushed out for financial billing. I’m guessing Amazon does something similar, but downstream from these requests we have an insane amount of log data to crunch and aggregate.

So here goes:

In the past year or so we went from seeing about 300k QPS to 475k QPS. Holy crap. At our current rates we’ll generate about 6.6 trillion log records this year (and since we’re growing, probably more).
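As a quick sanity check on that 6.6 trillion figure, here’s a back-of-envelope sketch (the one-log-record-per-request assumption is mine; the gap between the implied average and the 475k peak is consistent with normal diurnal traffic swings):

```python
# Back-of-envelope: what sustained rate does 6.6 trillion
# log records per year imply?
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

log_records_per_year = 6.6e12  # 6.6 trillion
peak_qps = 475_000

# Implied average rate, assuming one log record per request.
avg_rate = log_records_per_year / SECONDS_PER_YEAR
print(f"average: {avg_rate:,.0f} records/sec")   # ~209,000
print(f"average/peak: {avg_rate / peak_qps:.0%}")  # ~44% of peak
```

Roughly 209k records/sec on average against a 475k peak, which is about the peak-to-average ratio you’d expect for ad traffic.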