Jeff said:
] When I gave my talk at SIGCOMM last month, John Wroclawski of MIT
] insisted that if the HTTP protocol allows pipelining, it really needs
] to provide a means for aborting requests in progress in case the user
] hits the "stop" button. Otherwise, the user would have to wait for the
] upload of (potentially) huge files.
It also needs a way to interleave responses from the pipelined
requests, or else long responses will delay shorter ones. Think about
the concurrent "progressive rendering" of multiple GIF and JPEG files
that is currently done using multiple connections, and what it would
take to do it with pipelining.
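Interleaving of this kind amounts to framing: every chunk of response
data is tagged with the request it answers, and the receiver
demultiplexes. A minimal sketch in Python (the frame layout and the
`frames`/`demux` helpers are hypothetical, not taken from any HTTP
draft):

```python
import struct
from collections import defaultdict

def frames(chunks):
    """Encode (request_id, payload) pairs as tagged, length-prefixed frames."""
    out = b""
    for req_id, payload in chunks:
        out += struct.pack("!HI", req_id, len(payload)) + payload
    return out

def demux(wire):
    """Reassemble each request's entity-body from the interleaved stream."""
    bodies = defaultdict(bytes)
    offset = 0
    while offset < len(wire):
        req_id, length = struct.unpack_from("!HI", wire, offset)
        offset += 6  # 2-byte request id + 4-byte length
        bodies[req_id] += wire[offset:offset + length]
        offset += length
    return dict(bodies)

# A long HTML response and a short image interleave on one connection;
# neither blocks the other, and both bodies reassemble intact.
wire = frames([(1, b"<html>"), (2, b"GIF8"), (1, b"</html>"), (2, b"9a")])
result = demux(wire)
assert result == {1: b"<html></html>", 2: b"GIF89a"}
```

The cost is a per-frame header and a demultiplexer on the client, which
is roughly what multiple connections buy you for free today.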
[...]
] In a persistent-connection HTTP (P-HTTP), I could close the connection,
] but it would then mean paying an extra round trip (to exchange SYNs)
] before I could do anything else.
]
] That may not be excessive (after all, a round-trip is likely to be
] quite a bit shorter than the time it takes a user to decide to abort a
] request in progress), but perhaps the HTTP 1.x (x >= 1) protocol could
] be changed to support an asynchronous abort mechanism. It would
] probably have to use the TCP Urgent Pointer mechanism, as is done in
] the Telnet protocol to handle similar events.
]
] Since the protocol currently does not have a way for the server to
] abort the transmission of a response once it has started it, presumably
] we would also need to use the Urgent Pointer mechanism to do that.
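For context, the Urgent Pointer mechanism Jeff mentions is exposed by
the sockets API as out-of-band data (MSG_OOB). A minimal loopback
sketch of signalling an abort that way (the one-byte abort code and the
`send_abort` helper are my own invention, and note that TCP in practice
delivers only a single urgent byte):

```python
import select
import socket

ABORT = b"\x01"  # hypothetical one-byte abort code

def send_abort(sock):
    """Send the abort signal out of band, as Telnet does for interrupts."""
    sock.send(ABORT, socket.MSG_OOB)

# Loopback demonstration: the urgent byte is readable independently of
# the in-band data queued ahead of it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.sendall(b"in-band request data")  # ordinary pipelined traffic
send_abort(cli)                       # asynchronous "stop"

# select() reports pending urgent data as an exceptional condition.
_, _, exceptional = select.select([], [], [conn], 5.0)
oob = conn.recv(1, socket.MSG_OOB) if exceptional else b""

inband = b""
while len(inband) < len(b"in-band request data"):
    inband += conn.recv(1024)

for s in (cli, conn, srv):
    s.close()

assert oob == ABORT
assert inband == b"in-band request data"
```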
Well, if you can interleave responses from multiple outstanding
requests, particularly during the transmission of entity-bodies, then
you can discriminate which request a particular blob of incoming
response data belongs to, and it would seem easy to reserve one
discriminant for "aborted".
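Concretely, the discriminant could be a type field carried alongside
the request id in each frame, with one type reserved for "aborted". A
hypothetical sketch (the frame layout is invented for illustration):

```python
import struct

DATA, ABORTED = 0, 1  # frame types; ABORTED is the reserved discriminant

def encode(frame_list):
    """Pack (type, request_id, payload) triples into one byte stream."""
    out = b""
    for ftype, req_id, payload in frame_list:
        out += struct.pack("!BHI", ftype, req_id, len(payload)) + payload
    return out

def demux(wire):
    """Sort incoming data by request; drop bodies the server aborted."""
    bodies, aborted = {}, set()
    offset = 0
    while offset < len(wire):
        ftype, req_id, length = struct.unpack_from("!BHI", wire, offset)
        offset += 7  # 1-byte type + 2-byte id + 4-byte length
        payload = wire[offset:offset + length]
        offset += length
        if ftype == ABORTED:
            aborted.add(req_id)
            bodies.pop(req_id, None)  # discard the partial entity-body
        elif req_id not in aborted:
            bodies[req_id] = bodies.get(req_id, b"") + payload
    return bodies, aborted

# Request 1 is aborted mid-transfer; request 2 completes normally.
wire = encode([(DATA, 1, b"partial"), (DATA, 2, b"ok"), (ABORTED, 1, b"")])
bodies, aborted = demux(wire)
assert bodies == {2: b"ok"}
assert aborted == {1}
```

With a scheme like this, neither direction needs the Urgent Pointer:
abort becomes just another frame in the ordinary stream.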
Paul