On Tuesday, July 5, 2011, Roy T. Fielding <fielding@gbiv.com> wrote:
> Yes, but the normal request profile is to request one page and then
> parse it for a bunch of embedded requests (images, stylesheets, etc.).
> In other words, when accessing a server for the first time, the client
> is usually going to wait for the first response anyway. After that
> first time, the client can remember how the server responded and
> make a reasonable estimate of what it can do with pipelining for
> later pages.
Ah, it looks like I'm working from a different base assumption than
you are. In my normal request profile, all of the images,
stylesheets, etc. are fetched from a different server than the initial
page request (well, different scheme:host:port, anyway, and it usually
resolves to a different IP address). With the widespread adoption of
CDNs and cookie-free domains, it's rare these days that I'll encounter
a nontrivial website that uses the same base URL for static content as
for the initial page. Thus any insight that the client gains from the
first server's handling of the initial page request can't be applied
to the separate connection(s) needed for the subsequent requests.
> Even if the resources are partitioned across multiple
> servers, there is very little gained by pushing multiple requests
> down the pipe right away because the connection is going to be stuck
> on slow start anyway.
At present, with implementations commonly defaulting to an initial
congestion window of 1 MSS, it's still possible to send multiple
requests right away, as long as their total size fits in a single
packet. And on the server side, the conventional wisdom these days
seems to point toward the use of a much larger initial congestion
window anyway.
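To illustrate, here's a rough sketch of the "multiple requests in one
packet" point: two typical static-asset GETs, serialized back-to-back
the way a pipelining client would write them, come in well under a
common 1460-byte MSS. (The hostname, paths, and header set here are
illustrative assumptions, not taken from any real site.)

```python
# Two pipelined GET requests, concatenated as a client would write them
# onto a freshly opened connection. Host and paths are hypothetical.
requests = b"".join(
    (
        "GET {} HTTP/1.1\r\n"
        "Host: static.example.com\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"
    ).format(path).encode("ascii")
    for path in ("/css/site.css", "/js/app.js")
)

MSS = 1460  # typical MSS on an Ethernet path; actual value varies
print(len(requests), len(requests) <= MSS)
```

So even with an initial window of a single segment, both requests can
go out in the first data packet, and the server can begin responding
to them in order without waiting for another round trip.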
Thus I think it would be beneficial for a client to begin pipelining
requests immediately upon opening a connection. The big challenge is
that, as you rightly noted,
> The worst case is that both ends get confused and the entire sequence
> has to be started over after a fairly long timeout.
-Brian