In this post I’d like to test the limits of Python aiohttp and check its performance in terms of requests per minute. Everyone knows that asynchronous code performs better when applied to network operations, but it’s still interesting to check this assumption and understand how exactly it is better and why. I’m going to check it by trying to make 1 million requests with the aiohttp client. How many requests per minute will aiohttp make? What kinds of exceptions and crashes can you expect when you try to make such a volume of requests with very primitive scripts? What are the main gotchas that you need to think about when trying to make such a volume of requests?

Hello asyncio/aiohttp

Async programming is not easy. It’s not easy because using callbacks and thinking in terms of events and event handlers requires more effort than usual synchronous programming. But it is also difficult because asyncio is still relatively new and there are few blog posts and tutorials about it. The official docs are very terse and contain only basic examples. There are some Stack Overflow questions, but not that many: only 410 as of the time of writing (compare with 2 585 questions tagged “twisted”). There are a couple of nice blog posts and articles about asyncio out there, such as this, that, that, or perhaps even this or this.

To make it easier let’s start with the basics: a simple HTTP hello world, just making a GET request and fetching one single HTTP response.
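Something along these lines (a sketch, not necessarily the exact original; the throwaway local server and the handle() helper are only there to keep the snippet self-contained and runnable without an external host):

```python
import asyncio
from aiohttp import ClientSession, web

async def hello(url):
    async with ClientSession() as session:
        response = await session.get(url)   # first await: perform the request
        body = await response.read()        # second await: read the response body
        response.close()
        print(body.decode())
        return body.decode()

async def handle(request):
    # throwaway local endpoint so the example needs no external host
    return web.Response(text="Hello, world")

async def main():
    app = web.Application()
    app.router.add_get("/", handle)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 0)  # port 0: pick any free port
    await site.start()
    host, port = runner.addresses[0]
    result = await hello("http://{}:{}/".format(host, port))
    await runner.cleanup()
    return result

asyncio.run(main())
```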

Hmm, looks like I had to write lots of code for such a basic task… There is “async def” and “async with” and two “awaits” here. It seems really confusing at first sight, so let’s try to explain it.

You make your function asynchronous by using the async keyword before the function definition and by using the await keyword. There are actually two asynchronous operations that our hello() function performs. First it fetches the response asynchronously, then it reads the response body in an asynchronous manner.

Aiohttp recommends using ClientSession as the primary interface for making requests. ClientSession allows you to store cookies between requests and keeps objects that are common for all requests (event loop, connection and other things). A session needs to be closed after using it, and closing a session is another asynchronous operation; this is why you need async with every time you deal with sessions.

After you open a client session you can use it to make requests. This is where another asynchronous operation starts: downloading the response. Just as in the case of client sessions, responses must be closed explicitly, and the context manager’s with statement ensures they will be closed properly in all circumstances.

To start your program you need to run it in an event loop, so you need to get an instance of the asyncio loop and put the task into this loop.

It all does sound a bit difficult, but it’s not that complex and looks logical if you spend some time trying to understand it.

Fetch multiple urls

Now let’s try to do something more interesting: fetching multiple urls one after another. With synchronous code you would just do:

for url in urls:
    print(requests.get(url).text)

This is really quick and easy. Async will not be that easy, so you should always consider whether something more complex is actually necessary for your needs. If your app works fine with synchronous code, maybe there is no need to bother with async code? If you do need to bother with async code, here’s how you do it. Our hello() async function stays the same, but we need to wrap it in an asyncio Future object and pass a whole list of Future objects as tasks to be executed in the loop.

loop = asyncio.get_event_loop()

tasks = []
# I'm using test server localhost, but you can use any url
url = "http://localhost:8080/{}"
for i in range(5):
    task = asyncio.ensure_future(hello(url.format(i)))
    tasks.append(task)

loop.run_until_complete(asyncio.wait(tasks))

Now let’s say we want to collect all responses in one list and do some postprocessing on them. At the moment we’re not keeping the response body anywhere; we just print it. Let’s return the response, keep it in a list, and print all responses at the end.

To collect a bunch of responses you probably need to write something along the lines of:

#!/usr/local/bin/python3.5
import asyncio
from aiohttp import ClientSession

async def fetch(url):
    async with ClientSession() as session:
        async with session.get(url) as response:
            return await response.read()

async def run(loop, r):
    url = "http://localhost:8080/{}"
    tasks = []
    for i in range(r):
        task = asyncio.ensure_future(fetch(url.format(i)))
        tasks.append(task)
    responses = await asyncio.gather(*tasks)
    # you now have all response bodies in this variable
    print(responses)

def print_responses(result):
    print(result)

loop = asyncio.get_event_loop()
future = asyncio.ensure_future(run(loop, 4))
loop.run_until_complete(future)

Notice the usage of asyncio.gather(): it collects a bunch of Future objects in one place and waits for all of them to finish.

Common gotchas

Now let’s simulate the real process of learning: let’s make a mistake in the above script and try to debug it. This should be really helpful for demonstration purposes.

This code is broken, but it’s not that easy to figure out why if you don’t know much about asyncio. Even if you know Python well, if you don’t know asyncio or aiohttp well you’ll have trouble figuring out what happens.

What happens here? You expected to get response objects after all processing is done, but here you actually get a bunch of generators. Why is that?

It happens because, as I mentioned earlier, response.read() is an async operation. This means that it does not return the result immediately; it just returns a generator (a coroutine object in Python 3.5). This generator still needs to be called and executed, and this does not happen by default; yield from in Python 3.4 and await in Python 3.5 were added exactly for this purpose: to actually iterate over the generator function. The fix for the above error is just adding await before response.read().
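The mistake is easy to reproduce without any networking. In this sketch, read_body() stands in for response.read(), and fetch_broken()/fetch_fixed() are illustrative names showing the bug and the fix side by side:

```python
import asyncio

async def read_body():
    # stands in for the real response.read(); the real thing would do I/O
    await asyncio.sleep(0)
    return b"body"

async def fetch_broken():
    return read_body()          # BUG: missing await, returns a coroutine object

async def fetch_fixed():
    return await read_body()    # correct: actually runs the coroutine

broken = asyncio.run(fetch_broken())
fixed = asyncio.run(fetch_fixed())
print(type(broken).__name__)    # not bytes!
broken.close()                  # silence the "never awaited" warning
print(fixed)
```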

What happens here? If you examine your localhost logs you may see that requests are not reaching your server at all; clearly no requests are performed. The print statement shows that the responses variable contains a <_GatheringFuture pending> object, and later it alerts that pending tasks were destroyed. Why is this happening? Again, you forgot about await.

The faulty line is this:

responses = asyncio.gather(*tasks)

It should be:

responses = await asyncio.gather(*tasks)
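This difference can also be demonstrated with a self-contained toy (work, run_broken and run_fixed are illustrative stand-ins; no network involved):

```python
import asyncio

async def work(i):
    await asyncio.sleep(0)
    return i

async def run_broken():
    tasks = [asyncio.ensure_future(work(i)) for i in range(3)]
    responses = asyncio.gather(*tasks)   # BUG: missing await
    print(responses)                     # still a pending _GatheringFuture
    return responses

async def run_fixed():
    tasks = [asyncio.ensure_future(work(i)) for i in range(3)]
    responses = await asyncio.gather(*tasks)
    print(responses)                     # actual results
    return responses

broken = asyncio.run(run_broken())
fixed = asyncio.run(run_fixed())
```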

I guess the main lesson from these mistakes is: always remember to use “await” if you’re actually awaiting something.

Sync vs Async

Finally, time for some fun. Let’s check if async is really worth the hassle. What’s the difference in efficiency between an asynchronous client and a blocking client? How many requests per minute can I send with my async client?

With these questions in mind I set up a simple (async) aiohttp server. My server is going to serve the full html text of Frankenstein by Mary Shelley. It will add random delays between responses: some responses will have zero delay, and some will have a maximum of 3 seconds of delay. This should resemble real applications, as few apps respond to all requests with the same latency; usually latency differs from response to response.

Server code looks like this:

#!/usr/local/bin/python3.5
import asyncio
from datetime import datetime
from aiohttp import web
import random

# set seed to ensure async and sync client get same distribution of delay values
# and tests are fair
random.seed(1)

async def hello(request):
    name = request.match_info.get("name", "foo")
    n = datetime.now().isoformat()
    delay = random.randint(0, 3)
    await asyncio.sleep(delay)
    headers = {"content_type": "text/html", "delay": str(delay)}
    with open("frank.html", "rb") as html_body:
        print("{}: {} delay: {}".format(n, request.path, delay))
        response = web.Response(body=html_body.read(), headers=headers)
        return response

app = web.Application()
app.router.add_route("GET", "/{name}", hello)
web.run_app(app)

My async code looks just like the above code samples; you can see it in full here. How long will the async client take?

On my machine it took 0:03.48 seconds.

It is interesting that it took almost exactly as long as the longest delay from my server. If you look into the messages printed by the client script you can see how great the async HTTP client is. Some responses had 0 delay but others got a 3 second delay. With a synchronous client they would be blocking and waiting; your machine would simply stay idle for this time. The async client does not waste time: when something is delayed it simply does something else, issues other requests or processes all other responses. You can see this clearly in the logs: first there are responses with 0 delay, then after they arrived you can see responses with 1 second delay, and so on until the most delayed responses arrive.
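This effect is easy to reproduce in miniature with plain asyncio: gathering several delayed tasks takes about as long as the longest delay, not the sum of all delays (the delays here are scaled down from the article's 0–3 seconds to 0–0.3 seconds, and delayed() is an illustrative stand-in for a delayed server response):

```python
import asyncio
import time

async def delayed(d):
    # stands in for a server response that arrives after d seconds
    await asyncio.sleep(d)
    return d

async def main():
    start = time.monotonic()
    # four concurrent "responses": total time tracks the 0.3s maximum,
    # not the 0.6s sum of all delays
    await asyncio.gather(*(delayed(d) for d in (0.0, 0.1, 0.2, 0.3)))
    return time.monotonic() - start

elapsed = asyncio.run(main())
print("elapsed: ~{:.2f}s".format(elapsed))
```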

Testing the limits

Now that we know our async client is better, let’s try to test its limits and try to crash our localhost. I’m going to start by sending 1k async requests. I’m curious how many requests my client can handle.

So 1k requests take 7 seconds, pretty nice! How about 10k? Trying to make 10k requests unfortunately fails…

responses are <_GatheringFuture finished exception=ClientOSError(24, 'Cannot connect to host localhost:8080 ssl:False [Can not connect to localhost:8080 [Too many open files]]')>
Traceback (most recent call last):
  File "/home/pawel/.local/lib/python3.5/site-packages/aiohttp/connector.py", line 581, in _create_connection
  File "/usr/local/lib/python3.5/asyncio/base_events.py", line 651, in create_connection
  File "/usr/local/lib/python3.5/asyncio/base_events.py", line 618, in create_connection
  File "/usr/local/lib/python3.5/socket.py", line 134, in __init__
OSError: [Errno 24] Too many open files

It says “too many open files”, and probably refers to the number of open sockets. Why does it call them files? Sockets are just file descriptors, and operating systems limit the number of open sockets allowed. How many files are too many? I checked with the python resource module (resource.getrlimit(resource.RLIMIT_NOFILE)) and it seems like it’s around 1024. How can we bypass this? The primitive way is just increasing the limit of open files, but this is probably not a good way to go. A much better way is adding some synchronization to your client, limiting the number of concurrent requests it can process. I’m going to do this by adding an asyncio.Semaphore() with a maximum of 1000 tasks.
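The semaphore pattern can be sketched without any networking at all. Here the request is replaced by a no-op sleep, and a counter tracks how many tasks are inside the guarded block at once (bounded_fetch and the stats dict are illustrative names; the real version would call session.get() inside the semaphore block):

```python
import asyncio

LIMIT = 1000  # cap on concurrent requests, as in the article

async def bounded_fetch(sem, stats):
    async with sem:              # at most LIMIT tasks get past this point at once
        stats["active"] += 1
        stats["peak"] = max(stats["peak"], stats["active"])
        await asyncio.sleep(0)   # stands in for the actual network I/O
        stats["active"] -= 1

async def run(n):
    sem = asyncio.Semaphore(LIMIT)
    stats = {"active": 0, "peak": 0}
    tasks = [asyncio.ensure_future(bounded_fetch(sem, stats)) for _ in range(n)]
    await asyncio.gather(*tasks)
    return stats["peak"]

# 10k "requests", but never more than LIMIT in flight simultaneously
peak = asyncio.run(run(10000))
print("peak concurrency:", peak)
```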

At this point I can process 10k urls. It takes 23 seconds, pretty nice!

How about 100 000? This really makes my computer work hard, but surprisingly it works ok. The server turns out to be surprisingly stable, although you can see that RAM usage gets pretty high at this point and CPU usage is around 100% all the time. What I find interesting is that my server takes significantly less CPU than the client. Here’s a snapshot of linux ps output.

Overall it took around 5 minutes before it crashed for some reason. It generated around 100k lines of output, so it’s not that easy to pinpoint the traceback. It seems like some responses are not closed; is it because of some error from my server, or something in the client?

After scrolling for a couple of seconds I found this exception in the client logs.

So it means my client made around 19230 requests per minute. Not bad, isn’t it? Note that the capabilities of my client were limited by the server responding with delays of 0 and 1 in this scenario; it seems like my test server also crashed silently a couple of times.

Epilogue

You can see that asynchronous HTTP clients can be pretty powerful. Performing 1 million requests from an async client is not difficult, and the client performs really well in comparison to synchronous code.

I wonder how it compares to other languages and async frameworks. Perhaps in some future post I could compare Twisted Treq with aiohttp. There is also the question of how many concurrent requests can be issued by async libraries in other languages. For example, what would be the results of benchmarks for some Java async frameworks? Or C++ frameworks? Or some Rust HTTP clients?