VULNERABILITY DETAILS
DNS rebinding attacks can be mounted against non-HTTP services to steal their responses cross-protocol. A non-HTTP service's response can be read via XHR because Chrome interprets its response stream as `HTTP/0.9`.
This matters because, HFPA attacks[0] aside, many services that bind to localhost don't require authentication and have never considered a browser being able to read their responses as part of their threat model.
For example: MySQL *cannot* be attacked via a standard HFPA attack, as it closes the connection due to the handshake failing while the browser is still writing the HTTP headers; however, being able to read the server's half of the handshake tells us a lot about the server itself[1].
Memcached *can* have its contents manipulated via a standard HFPA attack, but this behaviour allows actually leaking the database as well.
Any service not bound to a blacklisted port is likely vulnerable if it has a "fault tolerant" parser, or if it leaks sensitive data before or during the handshake.
VERSION
Chrome Version: 49.0.2623.110 (64-bit) stable
Operating System: Ubuntu 15.10 as well as OS X / Windows 10
REPRODUCTION CASE
* To simulate a non-HTTP service bound to the loopback run `python -c 'print("\x00\x01I am not an HTTP response\r\n\r\nfoo\x00")' | nc -l 127.0.0.1 11212` on the same machine as your browser
* Start up a webserver on another host on port `11212` and have it serve the following document:
<pre id="output"></pre>
<script>
var outputElem = document.getElementById("output");
outputElem.textContent = "Waiting 120s to fetch";
setTimeout(function() {
    var xhr = new XMLHttpRequest();
    // Cache-buster so the request actually hits the network after rebinding
    xhr.open("GET", "http://" + window.location.host + "/?a=" + (new Date()).getTime());
    xhr.onload = function() {
        outputElem.textContent = xhr.responseText;
    };
    xhr.send();
}, 120000);
</script>
* Create an entry for "rebinding.example.com" in your `/etc/hosts` file that points to the host running your webserver
* Load "http://rebinding.example.com:11212/" in your browser
* To simulate DNS rebinding, edit the "rebinding.example.com" entry in your `/etc/hosts` file and make it point to `127.0.0.1`
* After 120 seconds the `<pre>` should contain `\x00\x01I am not an HTTP response\r\n\r\nfoo\x00`.
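The reason the raw bytes come back at all is Chrome's HTTP/0.9 fallback. A rough sketch of the heuristic (illustrative only; Chrome's actual parser is more lenient about malformed status lines):

```python
def looks_like_http09(raw_response: bytes) -> bool:
    """Sketch: a response with no HTTP/1.x status line is treated as
    HTTP/0.9, so the entire raw stream becomes the response body."""
    # HTTP/1.x responses begin with a status line such as b"HTTP/1.1 200 OK".
    return not raw_response.lstrip(b" \r\n").startswith(b"HTTP/")

print(looks_like_http09(b"HTTP/1.1 200 OK\r\n\r\nhello"))       # False
print(looks_like_http09(b"\x00\x01I am not an HTTP response"))  # True
```

This is why the memcached/MySQL bytes above end up in `xhr.responseText` verbatim.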
A hosted version of this PoC that uses a custom DNS server[2] to swap the records after the first resolution is at <http://(unique_string).bondage.computersareca.ca:11212/tests/simplest_rebinding.html>.
I've also made PoCs that abuse this behaviour to dump local Memcached[3] and Redis[4] databases, as well as the MySQL[5] server handshake. Screenshots for all of these are attached.
Note that this rebinding method requires waiting longer than 60 seconds so the entry in Chrome's resolver cache gets evicted, but a more efficient method may exist. I didn't spend much time on that.
REMEDIATION
Mitigations for DNS rebinding attacks on HTTP services are well understood: having the server force the use of HTTPS or verify the `Host` header[6] of the request mitigates the worst issues. What's less clear is how non-HTTP services bound to non-blacklisted[7] ports should protect themselves.
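As a concrete sketch of the Host-header check (a hypothetical helper; the hostnames and port are made up for illustration):

```python
# A localhost service rejects requests whose Host header isn't one it
# expects. This defeats rebinding because the attacker's hostname is still
# what the browser sends in Host after the DNS record flips to 127.0.0.1.
ALLOWED_HOSTS = {"localhost:8080", "127.0.0.1:8080"}  # example values

def host_header_ok(headers: dict) -> bool:
    return headers.get("Host") in ALLOWED_HOSTS

print(host_header_ok({"Host": "localhost:8080"}))              # True
print(host_header_ok({"Host": "rebinding.example.com:8080"}))  # False
```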
Previous attacks on non-HTTP services via DNS rebinding relied on now-fixed vulnerabilities in plugins and their TCP socket APIs[8]. The attack remains possible today because browsers interpret anything that doesn't explicitly declare itself as HTTP/1.x to be `HTTP/0.9`.
I think the best thing would be for DNS servers to filter responses containing "private" addresses, including loopback addresses; however, these filters are not commonly enabled, and are generally insufficient. Many that I've audited do not block loopback addresses, ignore IPv6 or improperly handle its edge cases, etc.
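For comparison, a resolver-side filter along these lines could be sketched with Python's `ipaddress` module, which handles the IPv6 edge cases (such as IPv4-mapped addresses) that ad-hoc filters often miss:

```python
import ipaddress

def is_rebind_target(addr: str) -> bool:
    """True if a resolved address points somewhere a public name never should."""
    ip = ipaddress.ip_address(addr)
    # Unwrap IPv4-mapped IPv6 addresses like ::ffff:127.0.0.1.
    if ip.version == 6 and ip.ipv4_mapped:
        ip = ip.ipv4_mapped
    return ip.is_loopback or ip.is_private or ip.is_link_local

print(is_rebind_target("127.0.0.1"))         # True
print(is_rebind_target("::ffff:127.0.0.1"))  # True
print(is_rebind_target("93.184.216.34"))     # False
```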
With that in mind, I think the *most reasonable* place to mitigate DNS rebinding attacks on non-HTTP services is in the browser. Since updating the port blacklist for every new service that comes out isn't feasible, we should instead restrict where `HTTP/0.9` is allowed.
The easiest fix would be to disallow HTTP/0.9 on "unusual" ports. If that was too restrictive, we could disallow HTTP/0.9 only if the request was for a subresource or subdocument over an "unusual" port. Loading `HTTP/0.9` at the top level should still be safe.
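Sketched as a policy function (the "usual" port list here is an assumption for illustration, not Chrome's actual behaviour):

```python
# Proposed policy: only accept an HTTP/0.9 response if the request went to
# a well-known HTTP port, or was a top-level navigation.
USUAL_HTTP_PORTS = {80, 443, 8080}  # assumed list for illustration

def allow_http09(port: int, is_top_level: bool) -> bool:
    return port in USUAL_HTTP_PORTS or is_top_level

print(allow_http09(80, False))     # True  (normal web traffic unaffected)
print(allow_http09(11212, False))  # False (XHR to the memcached port blocked)
print(allow_http09(11212, True))   # True  (top-level load still permitted)
```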
I'd be interested to hear your thoughts on how this should be mitigated, or whether mitigation is the browser's responsibility at all, as this behaviour is present in multiple browsers and I couldn't find any previous mentions of DNS rebinding + HTTP/0.9 attacks.
[0]: https://www.jochentopf.com/hfpa/hfpa.pdf
[1]: https://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::Handshake
[2]: https://github.com/JordanMilne/FakeDns
[3]: http://computersareca.ca/tests/rebinding_frame.html?memcached
[4]: http://computersareca.ca/tests/rebinding_frame.html?redis
[5]: http://computersareca.ca/tests/rebinding_frame.html?mysql
[6]: https://bugs.chromium.org/p/chromium/issues/detail?id=98357#c2
[7]: https://code.google.com/p/chromium/codesearch#chromium/src/net/base/port_util.cc&q=remotefs&sq=package:chromium&type=cs&l=71
[8]: http://www.adambarth.com/papers/2009/jackson-barth-bortz-shao-boneh-tweb.pdf

I just updated my tests and confirmed that <http://computersareca.ca/tests/rebinding_frame.html?memcached> and friends also repro in Firefox, OS X Safari, and MS Edge. I haven't figured out how to perform rebinding attacks on IE<=11 yet, as its pinning behaviour is even harder to understand than Edge's, but it appears that this issue is pervasive.

This is now being tracked on Moz's Bugzilla as well, #1262128. I'm waiting until I have more stable repros for Edge and Safari before reporting to them. Since this is a cross-browser issue we should probably coordinate disclosure.

RE #6: Do we have any telemetry on this?
Edge was hesitant to remove HTTP/0.9 support because Chrome had it. Chrome appears to have inherited it from the original Firefox stack, and Firefox supported 0.9 due to buggy servers of the era (e.g. see https://bugzilla.mozilla.org/show_bug.cgi?id=193921).
The only *common* occurrence of HTTP/0.9 I'd ever seen was some buggy/misconfigured CDN servers used by Yahoo circa 2006 that would occasionally send images without headers. These worked fine with IE but blew up Fiddler, so I had to write code to accommodate such responses. Those servers have long since been fixed, and I haven't seen an error like this on the public internet for a long time. The last time I saw a site with this issue was 2014:
http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=0&p=1&f=S&l=50&Query=IN%2FEric+Lawrence%0D%0A&d=PTXT
Based on all of the browsing I do these days (virtually all of which is through Fiddler) I don’t believe that there’s any public-facing webserver still relying on this support. It’s possible that there’s some legacy device somewhere that was “getting lucky” but it seems low-risk to disable this by default for the browser. But it's entirely possible that there's 0.9 coming from some IoT or other device that's not globally exposed...

#9: I asked mmenke if we had numbers on this - it sounds like there was UMA from before that let us guess at how much HTTP/0.9 exists, but it's not in place any more. I'd be interested in what it would take to get it back in, as it's possible we could get by with just removing HTTP/0.9.

The histogram is still around: Net.HttpHeaderParserEvent. ~0.01% of HTTP 0.9/1.x (i.e. not including H2/QUIC) responses are HTTP/0.9 on stable (Closer to 0.02% on canary), and 0.025% of those 0.01% are over HTTPS. Worth noting that that's how often we interpret the response as HTTP/0.9, not how often it actually is HTTP/0.9. Responses from broken servers can be interpreted as HTTP/0.9 as long as they don't seem to have an HTTP/1.x status line.
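Putting those figures together (simple arithmetic on the numbers quoted above; treat it as an order-of-magnitude estimate):

```python
# Combine the stable-channel histogram figures from Net.HttpHeaderParserEvent.
http09_fraction = 0.01 / 100   # ~0.01% of HTTP 0.9/1.x responses parse as 0.9
https_share = 0.025 / 100      # 0.025% of those are served over HTTPS
https_http09_fraction = http09_fraction * https_share
print(f"{https_http09_fraction:.1e}")  # 2.5e-08, i.e. ~1 in 40 million responses
```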

I'd support removing HTTP/0.9. Its continued existence has bugged me since before I started working for Google; I'd love to see it go. My only concern is about breaking stuff that depends on it. I guess we can't really know how bad the fallout will be without just trying it.

The only thing that concerns me about killing off HTTP/0.9 is that about 1% of users seem to hit this over the course of a week. On a per-request basis it's below a reasonable threshold. It does seem hard to tell whether these are actual HTTP/0.9 responses or just garbage being interpreted as HTTP/0.9.

I remain convinced that disabling HTTP/0.9 globally is low-risk.
Unfortunately, DNS rebinding means that a lot of other mitigations we could take (e.g. only allow for main frame requests, disallow for cross-site requests) probably wouldn't help much.
The proposal in the original issue ('disallow HTTP/0.9 on "unusual" ports') does indeed seem likely to be the easiest mitigation if we wanted to maintain support for HTTP/0.9.

Hi folks, just wanted to give you a heads-up that I intend to publicly disclose this as part of some related inter-protocol exploitation research in 3 months (August 31st).
If you wish to coordinate with the other vendors, the updated bug references are: Mozilla Bugzilla #1262128, Apple ProdSec Followup #639367943, MSRC Case 33254.

therealmarv: It also seems like if a NAT router is using a default login/password, and doesn't have any defense against this sort of thing (forcing access via a certain hostname, or something), an attacker may also be able to reconfigure your router by similar means, unless I'm missing something.

mmenke: Yes, that's the classical DNS rebinding attack. Using HTTPS is the best defense (due to enforced hostname verification) but there are other defenses being explored; see e.g. https://mikewest.github.io/cors-rfc1918/