There's one thing that's definitely bad in our results, though: because the server interleaves everything, the high priority resources end up sharing bandwidth with low priority ones. Some of that is happening in Pat's "good" example as well, though, with the first hidden image competing with the high priority resources.

The test results are OK, with the high priority items getting loaded early. They compete a little with some low priority images that the browser discovered sooner, but that's not the server's fault; I imagine it's the browser's pre-parser kicking in. The high priority font is triggered by CSS, so it needs the CSS parsed first, which is why it starts slightly later. Interestingly, the browser seems smart enough not to ask for more low priority images until the high priority stuff is done.

I think the high amount of multiplexing in the test is due to all the test resources being fairly large (100 KB images), whereas a page like the Barack Obama article has lots of tiny images. If there's anything that could be improved, it's that behaviour, which presumably comes from nginx. When the high priority font is encountered, for example, it should be the only thing pushed through the wire. And yet the server keeps slicing in portions of the low priority images the pre-parser requested earlier, alternating the contents of these different streams.

We could attempt to reduce the number of concurrent HTTP/2 streams via the http2_max_concurrent_streams directive: https://nginx.org/en/docs/http/ngx_http_v2_module.html. But in the case of that test, it's unclear whether nginx would then just push the whole low priority images it encountered first before moving on to serving the high priority font, in which case limiting stream concurrency would be very counter-productive.
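For reference, capping stream concurrency would be a one-line change along these lines (the directive is documented on the page above with a default of 128; the value of 8 here is just an illustrative guess for experimentation, not a recommendation):

```nginx
server {
    listen 443 ssl http2;

    # Default is 128. A much lower value forces nginx to serve fewer
    # streams at once, reducing interleaving, but risks head-of-line
    # blocking if a large low-priority response grabs a slot first.
    http2_max_concurrent_streams 8;
}
```

Measuring the same test page with and without this cap would tell us whether it helps the font or just makes the low priority images block it outright.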

Other than that, the http2 module doesn't give us any meaningful configuration options related to prioritization.