Python's going to have a bit of an awkward time with two completely different ecosystems for the threaded vs. asyncio approaches, but it's necessary progress.

One thing I'd be really keen to see is asyncio frameworks starting to consider adopting ASGI as a common interface. Quart, Sanic, and aiohttp each currently have their own gunicorn worker classes and their own HTTP parsing, and none of them share a common interface for handling the request/response exchange between server and application.

It's a really high barrier for new asyncio frameworks, and it means we can't get shared middleware, such as WSGI's WhiteNoise or the Werkzeug debugger, or the increased robustness that shared server implementations tend to bring.

> Python's going to have a bit of an awkward time with two completely different ecosystems for the threaded vs. asyncio approaches, but it's necessary progress.

It's threaded vs. async/await (and even then, projects like https://github.com/dabeaz/curio bridge that gap really well), rather than threaded vs. asyncio. asyncio is just one (really, really poor) implementation of async/await coroutines in Python.

> One thing I'd be really keen to see is asyncio frameworks starting to consider adopting ASGI as a common interface.

From what I've seen of the ASGI spec, it makes it incredibly easy (like most asyncio stuff) to DoS yourself through lack of backpressure. Your callback gets called with data, and as with all callback systems, you can't exactly choose not to be called.

You'll get new calls into the application on new requests, yes. Request bodies are pulled, though.

You can perfectly well ensure that server implementations handle flow control properly and, if necessary, enforce a configurable maximum number of concurrent requests.

Either way, those sorts of concerns are far better addressed by writing server implementations against a common interface than by framework authors having to handle (and continually re-implement) the nitty-gritty details of high/low watermarks and pausing/resuming transports.
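To make the "request bodies are pulled" point concrete, here's a minimal ASGI echo application, sketched from my reading of the spec. Because the application pulls each body chunk by awaiting `receive()`, a server can apply backpressure simply by not resolving that call until it is ready to read more from the socket. The in-memory `receive`/`send` stand-ins in `main()` are just for illustration, not part of any real server:

```python
import asyncio

async def app(scope, receive, send):
    """A minimal ASGI application that echoes the request body."""
    assert scope["type"] == "http"
    body = b""
    while True:
        message = await receive()          # pull-based: the backpressure point
        body += message.get("body", b"")
        if not message.get("more_body", False):
            break
    await send({"type": "http.response.start",
                "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": body})

async def main():
    # In-memory stand-ins for a server's receive/send callables.
    inbox = [{"type": "http.request", "body": b"hello", "more_body": False}]
    sent = []

    async def receive():
        return inbox.pop(0)

    async def send(message):
        sent.append(message)

    await app({"type": "http"}, receive, send)
    return sent

print(asyncio.run(main()))
```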

Though you'd also want to be able to return promises (I suppose you could allow `return async with`), and you'd also need to be able to catch them. I'm not sure whether this makes sense to look at further, but I don't think we're getting arrow functions, and the JS syntax without them is pretty clunky.

Me too. I grokked JS promises, but async/await threw me. Why is single-threaded async (e.g. Node and Python/asyncio) such a big deal when we have languages like Elixir, Go, and Clojure, which have real concurrency and no trade-offs?

I'm interested in exploring ASGI as a common interface for Datasette, but I also want the convenience of a neat request and response object. Is there an ASGI equivalent of something like https://webob.org/ yet?

+1, been using Sanic for a couple of years; it's an awesome, easily grokkable micro-framework. Sanic is Flask-like in a lot of places, whereas a quick perusal of the Quart code makes it look like they go the extra mile on the Flask scale, but maybe take on some complexity in the process. It's up to the consumer which trade-off they pick. I tend to think that since there is no WSGI equivalent for asyncio (something people are thinking about: https://github.com/channelcat/sanic/issues/761), you want a slightly different model than Flask anyway.

One question, in the asyncio docs (really nice BTW, I hadn't tried that out yet and instantly grasped your example) you mention the common pitfall of `await awaitable.attribute` with missing brackets. In the Migration from Flask docs you give some examples where you need await like `await request.data` and `await request.get_json()` - do these need brackets in the same way or is `request` special? Same deal with `test_client` straight after that.

BTW, do all routes have to be async here - even your quickstart that just returns 'hello'?

One other thing: since you require Python 3.6 anyway, it'd probably make sense to recommend `pipenv` instead of `venv` for installation; that would also simplify your docs.

For the common-pitfall question: when you write `await expression`, Python first evaluates `expression` and then awaits the result. So `await request.json()` calls `request.json()` and then awaits the coroutine it returns (which eventually resolves to the parsed JSON).

The same applies to `await request.data`. You can easily avoid the pitfall by writing this:

    data = await request.json()
    print(data['attribute'])

instead of this:

    print((await request.json())['attribute'])
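As for why `await request.data` takes no brackets while `await request.get_json()` does: a toy request object (hypothetical, not Quart's actual implementation) can illustrate the difference. Here `data` is a property whose access already returns an awaitable, while `get_json` is a coroutine method you have to call first:

```python
import asyncio
import json

class Request:
    """Toy stand-in for a framework request object (hypothetical)."""

    @property
    def data(self):
        # Property access itself returns a coroutine object,
        # so callers write `await request.data` with no brackets.
        return self._read_body()

    async def _read_body(self):
        await asyncio.sleep(0)             # pretend to read from the socket
        return b'{"attribute": 1}'

    async def get_json(self):
        # A plain coroutine method: callers write `await request.get_json()`.
        return json.loads(await self.data)

async def main():
    request = Request()
    body = await request.data              # no brackets: property
    parsed = await request.get_json()      # brackets: coroutine method
    return body, parsed

print(asyncio.run(main()))
```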

TBH, I find that section of the docs more confusing than helpful; it's not an issue you'd generally run into, IMO.

I think this is all well and good, especially the compatibility with Flask. However, the biggest issue is the data-access layer. Most Flask apps use an ORM like SQLAlchemy (SA) for their data-access layer.

SA, unfortunately, does not have an asynchronous version (it's quite complex as it is). Therefore, I think it would take quite a lot of work to get a standard Flask app working with Quart.

However, if you've built your data access layer directly on psycopg, then I think you're good to go.
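One common stopgap when the data-access layer is blocking (a synchronous ORM, say) is to push the blocking calls onto a thread-pool executor so the event loop stays responsive. A minimal sketch, using stdlib `sqlite3` as a stand-in for a synchronous ORM query; `fetch_users` is a hypothetical helper, not part of any framework:

```python
import asyncio
import sqlite3

def fetch_users():
    # Blocking data-access call (stand-in for, e.g., a SQLAlchemy query).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")
    rows = conn.execute("SELECT name FROM users").fetchall()
    conn.close()
    return rows

async def handler():
    loop = asyncio.get_running_loop()
    # None -> run in the loop's default ThreadPoolExecutor, keeping the
    # event loop free to serve other requests while the query runs.
    return await loop.run_in_executor(None, fetch_users)

print(asyncio.run(handler()))  # [('alice',)]
```

This doesn't make the queries concurrent with each other beyond the thread pool's limits, but it does stop one slow query from stalling every other request.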

One thing I've never been able to reconcile is Mike Bayer's article there, set against Yury Selivanov's reported results for `asyncpg`, which, at least as presented, do indicate a significant difference in throughput.

I started dabbling with Quart a few months ago, and it looks really promising, but I haven't developed any apps with it yet. My plan is to try building an app with it instead of Flask; I like Flask, but I think Quart has some benefits in supporting current features, including websockets.

I don't think that was running through the project author's mind as a factor.

It is interesting just to know the reasoning. Nobody is bashing the competition. It's just that GitHub has more mindshare, and as such, contributors and forks are more likely, and there are more third-party integrations.

Again, competition is good for everyone, but let's not bash someone who is simply curious why a project author would choose GitLab.