Over the last few years*, we've seen considerable pressure from some parts of industry to compromise encryption. For example, in HTTPbis we had several discussions of the role of a proxy in HTTP, including the now-infamous "trusted proxy" proposal <http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01>, which has since been discarded.
For me, this was one of the motivations for the Securing the Web finding; we need to very clearly say that such "interception MitM" by transit operators is not acceptable on the open Web.
At the same time, we see various networks deploying CA certificates into users' trust stores so that they can man-in-the-middle connections to affect policy (filtering, virus scanning, data loss prevention, etc.). This has become fairly common practice in places like large businesses, schools, prisons, etc.; e.g., see the results of <https://duckduckgo.com/?q=install+our+certificate+site%3A.edu>.
In discussions I've had with various browser security people, I'd say the general feeling (and I'm happy to be corrected!) is that while doing so isn't liked and shouldn't be encouraged, there's a wary acknowledgement that it can't really be stopped; if browsers try to prevent this from happening, those interested will just use another browser, or even roll their own (which is becoming increasingly easy to do <http://www.quirksmode.org/blog/archives/2015/02/chrome_continue.html>). "When you own the client, you own its behaviour" often comes up.
Notably along those lines, we're seeing rapidly increasing deployment of so-called "split browsers" -- i.e., browsers that are written to use a "cloud" service for some portion of their operation -- anything from vanilla TCP connection handling all the way up to doing pagination and JS in the "cloud" and just sending an image of the current page back to the browser.
Unlike CA certs, split browsers are often used for better performance and/or less bandwidth usage; the users *want* to use this modified version of the Web. They willingly install them on their personal devices.
I've been collecting some stats about these split browsers at <https://github.com/httpwg/wiki/wiki/SplitBrowserSurvey>. Notably, UCBrowser has cracked a 30% market share of *all* browsers (desktop and mobile, even though AFAICT it's mobile-only) in India.
Needless to say, the security properties of split browsers are largely unknown, and from what I've seen, what they do usually isn't even documented. Some definitely do turn HTTPS into a three-party protocol. These new-breed browser vendors do own the client, and are turning it to their advantage.
Taking that even further, we continue to see a significant (and often chaotic) market in "extensions" to Web browsers. Again, they have the ability to turn HTTPS into a three-party protocol, due to the nature of their access to the browser's capabilities. Again, users install them because they want some capability, without understanding what they may be trading in return. Considering what a debacle so many of the "browser toolbars" have been for privacy and security, I'm surprised that this hasn't been discussed as an architectural issue before (to my knowledge - pointers?).
We can't (and I think shouldn't) disallow any of these things by fiat, but the current strategy seems to be to ignore them and hope that they won't be misused and won't supplant the "default" unmodified Web. I'm not sure that's realistic; AIUI key pinning and Certificate Transparency are unlikely to ever disallow override by locally-installed CA certs, because users would simply move away from browsers that enforced that; split browsers are the Wild West by definition, and extensions aren't going away any time soon.
Disappointingly, I've heard some people say that users ought to know better when they install one of these things; I don't think that attitude is helpful for the scale of audience we're talking about.
I'm also not suggesting that we "give in" to these techniques by enshrining them in the standards. What I do think we should do is talk about them and their effect on the Web overall. While we can't disallow such techniques (neither standards bodies nor browser vendors have that much power), we can shape their use so that the user is at least informed, and perhaps has some choices. Things like better education and better user experience** can help.
I think this is important because on our current path, all three of these techniques are only going to grow in prevalence, and people will be seeing a "modified/monitored Web" without understanding what's happening to them; it will become normal. While all of these techniques do have legitimate use, they also have misuse, and when deployed against a country's citizens (for example), that misuse can become catastrophic.
As the TAG, do we think these are issues that are a) in-scope, b) important for us to consider, and c) something W3C can improve?
I have some ideas about (c), but wanted to get a sense of what people thought first.
Cheers,
* Really, it's been much longer than that; this is only the latest (and strongest) example.
** Yes, I realise I just made one of the classic blunders in Web Architecture -- "Never talk about security user experience!"
--
Mark Nottingham https://www.mnot.net/