Just a random question: will 2.0 support secure sockets the same way as 1.1, and will there be any cause for concern over how SSL is to be implemented for 2.0?
Thanks,
Jose
-----Original Message-----
From: Amos Jeffries <squid3@treenet.co.nz>
Date: Fri, 27 Jan 2012 15:01:06
To: <ietf-http-wg@w3.org>
Subject: Re: Rechartering HTTPbis
On 27/01/2012 12:29 p.m., Adrien de Croy wrote:
>
>
> On 27/01/2012 12:11 p.m., Willy Tarreau wrote:
>> On Fri, Jan 27, 2012 at 11:45:56AM +1300, Adrien de Croy wrote:
>>> I wouldn't rely on support for Upgrade. Since there are basically no
>>> deployed users of it (I've never seen an Upgrade header in 17 years) I
>>> would expect it to break on many intermediaries, and so the test for
>>> HTTP/2.0 support wouldn't be reliable.
>> It's currently being used for WebSocket. Granted, it breaks on a number
>> of intermediaries, but what was observed is a fast and clean failure,
>> which means that the client can fall back to the good old protocol.
>> Adding support for Upgrade to existing components is quite easy, so
>> most products will evolve to correctly support WebSocket and will
>> provide you with Upgrade for free.
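The Upgrade-with-fallback dance described above can be sketched as follows (a minimal illustration in Python; the helper names are invented here, not taken from any implementation):

```python
def build_upgrade_request(host, protocol):
    """HTTP/1.1 GET offering to switch to `protocol` (e.g. "websocket")."""
    return (
        f"GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: Upgrade\r\n"
        f"Upgrade: {protocol}\r\n"
        f"\r\n"
    )

def negotiated_protocol(response_head, requested, fallback="HTTP/1.1"):
    """Pick the protocol to continue with, from the response status line.

    Only a clean 101 Switching Protocols means the upgrade took; any
    other answer (including an intermediary failing fast with an error)
    means: keep talking the good old protocol.
    """
    parts = response_head.split("\r\n", 1)[0].split(" ", 2)
    if len(parts) >= 2 and parts[1] == "101":
        return requested
    return fallback
```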
>>
>>> Therefore it's likely everyone will instead choose CONNECT to tunnel,
>>> since it's reliable.
>> It's not designed for origin server usage, and is not reliable across
>> intercepting proxies, because it's a proxy-only method which is detected
>> by some proxies. I've experimented with that at one mobile phone
>> operator too. Sending a CONNECT on port 80 (which was intercepted by a
>> modified Squid) would simply result in a hang, probably because the
>> outgoing connection did not look like HTTP and was not sent to the
>> proper components.
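For comparison, the CONNECT exchange under debate looks roughly like this (illustrative helpers, Python). Note the authority-form target, which is part of why it only makes sense sent to a proxy the client was configured with:

```python
def build_connect_request(host, port):
    """CONNECT asks a configured proxy to open a raw TCP tunnel.
    The request target is authority-form (host:port), not a URL."""
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        f"\r\n"
    )

def tunnel_established(response_head):
    """Any 2xx status line means the proxy opened the tunnel;
    everything sent after it is opaque bytes (e.g. TLS) to the proxy."""
    parts = response_head.split("\r\n", 1)[0].split(" ", 2)
    return len(parts) >= 2 and parts[1].startswith("2")
```

The opacity of the tunnelled bytes is exactly the visibility problem raised further down the thread.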
>
> CONNECT should only EVER be sent to a proxy the client knows about,
> so in general it should not be intercepted.
> Anything that intercepts CONNECT should pass it through to the
> original destination.
>
> WinGate supports interception of CONNECT, but this only happens if the
> client is configured to use a proxy outside the firewall.
>
> That's generally undesired behaviour in corporate environments.
>
>>> In fact, due to the number of sites moving to https, the incidence of
>>> traffic bypassing everything with a tunnel through the proxy is
>>> becoming a bigger and bigger problem.
>> Which is why you suggested the "GET https://" scheme which I agree with.
>>
>>> Hence my previous statement about deprecating CONNECT, and making
>>> sure a corporate proxy has access to the payload and HTTP protocol.
>> +1.
>>
>>> It would be trivially simple for a client to issue GET https://etc
>>> instead of tunnelling... in fact it would simplify client code a fair
>>> bit.... if the client could rely on the proxy supporting it.
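As a sketch of what "GET https://..." through a proxy amounts to (Python; the proxy support it assumes is exactly what is being proposed, so this is illustrative only):

```python
from urllib.parse import urlsplit

def build_proxied_get(url):
    """Absolute-form request line: the proxy sees the full URI,
    scheme included, instead of an opaque CONNECT tunnel."""
    netloc = urlsplit(url).netloc
    return (
        f"GET {url} HTTP/1.1\r\n"
        f"Host: {netloc}\r\n"
        f"\r\n"
    )
```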
>>>
>>> So... maybe we need an OPTIONS command specifically for proxies, to
>>> enable a proxy to advertise supported protocol features.
>> It could be a good idea, though it would require one additional round
>> trip, and would still not indicate whether intermediaries correctly
>> support the same features.
>
> Actually it wouldn't require an additional round trip. The client
> could make the request once, each time it starts, and only to the
> proxy it's configured to use.
>
> Maybe it should be called PROXYCAPA or something. And it needs to
> return information in a machine-readable format unlike OPTIONS.
POCO (Point Of Contact Options)? It's shorter, too.
>
> The proxy could advertise:
>
> * support for https URIs
> * support for tunnelling (absence means no support, don't send)
> * support for upgrade, and to which protocols
> * support for progress notifications
> * etc
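To make the idea concrete: if such a capability reply carried, say, a JSON body (purely an illustration; neither the method nor the field names exist anywhere), the client-side check becomes a single lookup rather than free-form header parsing:

```python
import json

# Hypothetical capability body; every field name here is invented
# for illustration, mirroring the bullet list above.
EXAMPLE_CAPA_BODY = json.dumps({
    "https-uris": True,        # proxy accepts "GET https://..." requests
    "tunnel": False,           # absence/false: don't bother with CONNECT
    "upgrade": ["websocket"],  # protocols reachable via Upgrade
    "progress": True,          # progress notifications supported
})

def proxy_supports(capa_body, feature):
    """Machine-readable check, in contrast to a free-form OPTIONS reply."""
    return bool(json.loads(capa_body).get(feature))
```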
>
>>
>>> POP3 has it, SMTP has it, maybe it's HTTP's turn now :)
>> POP, SMTP etc. are hop-by-hop. HTTP is end-to-end with invisible hops
>> everywhere, each with its own incomplete implementation. *This* is the
>> problem.
>> Regarding this, PHK is right that if we want correct implementations of
>> the next spec, it should be short and easy to understand.
>
> +1, complexity has always been a big problem.
>
> However I don't see that going away, since entity headers are
> intermingled in the transport protocol.
Re-read the first few messages of the thought experiment between Willy,
PHK and myself. One of the early adjustments was to split the headers
into two distinct groups along that boundary, with fixing that problem
in mind.
>
> Even the name implies it's just a transfer protocol. SMTP, FTP etc.
> all restrict their activity purely to transferring things.
>
> HTTP blurred the boundaries, and cares about what it is transporting.
> Personally I think it should not, and entity headers should be deemed
> part of the payload.
>
> But content-negotiation made that difficult, and we can't really back
> out of it.
Yes we *can*. With the step to HTTP/2 it could be perfectly reasonable
to start with one-liner hop-by-hop transport negotiation round trips
like FTP and SMTP use.
Slow, but possible.
Give it some thought and make a proposal for how to do it fast :). The
next time we get a chance to fix that problem it will be HTTP/3.
>
> I also wonder whether some of the subtleties of content negotiation
> have largely made no splash at all... like the convoluted q values.
> I'd like to see them go.
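For anyone who hasn't had to implement them, the convolution is real: even a minimal, correct-enough q-value sort takes this much (a sketch in Python; a real parser must additionally handle wildcards and specificity tie-breaking):

```python
def parse_q_values(header):
    """Parse an Accept-style header into (media-type, q) pairs,
    highest preference first; q defaults to 1.0 when omitted."""
    prefs = []
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        media, q = parts[0], 1.0
        for param in parts[1:]:
            if param.startswith("q="):
                try:
                    q = float(param[2:])
                except ValueError:
                    q = 0.0  # unparseable q: treat as "not acceptable"
        prefs.append((media, q))
    prefs.sort(key=lambda mq: mq[1], reverse=True)
    return prefs
```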
>
Same here. But we can simply declare such negotiation and entity
details as outside of the transport protocol, in an HTTP/1.x internal
wrapper blob of some type, and leave the details unchanged. SPDY did
this, but wrapped the whole lot.
> I'd also like to see the Accept header go. It's just mindless bloat.
>
> Expect can go as well IMO
Maybe; Expect or Prefer might be of some use on the initial OPTIONS
request, if that ends up happening. Later requests of course don't really
need it, since they can assume some minimum level of feature support.
AYJ