On Feb 12, 2013, at 1:59 AM, Nico Williams <nico@cryptonector.com> wrote:
> On Mon, Feb 11, 2013 at 4:46 PM, William Chan (陈智昌)
> <willchan@chromium.org> wrote:
>> Theoretically possible is one thing. But the moment we get into the game of
>> trying to carve up portions of BDP via per-stream flow control windows for
>> prioritization purposes in normal operation (as opposed to just trying to
>> make reasonable progress under excessive load), I think we're in trouble,
>> and likely to get into performance issues due to poor implementations. As
>> I've stated before, I hope most implementations (and believe we should add
>> recommendations for this behavior) only use flow control (if they use it at
>> all, which hopefully they don't because it's hard) for maintaining
>> reasonable buffer sizes.
>
> Right. Don't duplicate the SSHv2 handbrake (Peter Gutmann's term) in HTTP/2.0.
>
> Use percentages of BDP on the sender side. Have the receiver send
> control frames indicating the rate at which it's receiving to help
> estimate BDP, or ask TCP. But do not flow control.
>
> Another possibility is to have the sender (or a proxy) use
> per-priority TCP connections.
I don't think that one solves the problem. A server has to treat priority as relative to the TCP connection: high-priority requests should trump low-priority requests within the same connection, but not low-priority requests on another connection. Otherwise we have a fairness issue even without proxies.
So with per-priority connections you're effectively creating several connections, each carrying only requests of the same priority. The server will then try to be fair to all connections, effectively giving the same throughput to high-priority and low-priority requests.
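A toy model makes the point concrete (this is my own sketch, with made-up names and numbers, not anything from a draft): a server that honors priority only within a connection, but is fair across connections, stops distinguishing priorities the moment each priority gets its own connection.

```python
# Toy model: the server gives one unit of service per tick to every
# connection that still has pending work; within a connection, the unit
# goes to the highest-priority (lowest number) request. All identifiers
# here are illustrative, not from any spec.

def run(connections):
    """connections: {conn_id: {req_name: [priority, remaining_units]}}.
    Returns {req_name: tick at which the request finished}."""
    finish, tick = {}, 0
    while any(connections.values()):
        tick += 1
        for reqs in connections.values():
            if not reqs:
                continue
            # Priority is honored only *within* this connection.
            name = min(reqs, key=lambda n: reqs[n][0])
            reqs[name][1] -= 1
            if reqs[name][1] == 0:
                finish[name] = tick
                del reqs[name]
    return finish

# Both requests need 100 units of service.
# Case 1: both on one connection -- priority works as intended.
one = run({"c1": {"high": [0, 100], "low": [1, 100]}})
# Case 2: per-priority connections -- per-connection fairness
# erases the distinction.
two = run({"c1": {"high": [0, 100]}, "c2": {"low": [1, 100]}})
print(one)  # high finishes at tick 100, low at tick 200
print(two)  # both finish at tick 100
```

In case 1 the high-priority request completes in half the time of the low-priority one; in case 2 they complete simultaneously, which is exactly the fairness problem described above.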
Yoav