When I'm visiting a website on the public internet, the website can cause my browser to send requests to a local IP address (such as 10.0.0.1). This can be used to attack internal web sites, e.g., through
CSRF attacks.

Why do browsers allow this? If browsers prohibited this, would this increase security? If browsers prohibited this, would it break sites, and how large would the impact be? Can we quantify the negative impact, or the security benefit?

Reference: see Jeremiah Grossman's comments in Browser Security Case Study: Appearances Can Be Deceiving, ACM Queue vol 10 no 10, November 20, 2012. He raised this question, and comments on two possible reasons, though the article doesn't quantify how many sites would be negatively affected by such a modification (to be fair, that was probably beyond the scope of the article).

There are a LOT of questions in this one question. In one specific case it would break things that depend on the ability to communicate across sites via the browser, e.g. passive federation scenarios.
– Steve, Apr 7 '13 at 7:59

@SteveS, thanks. I'm not sure if I'm following your specific case -- do you know of an example? Do companies use federation between a public website and a private (intranet-only) site? That seems like a rather odd thing to do, but maybe I'm just unfamiliar with the breadth of current practice.
– D.W., Apr 7 '13 at 8:06

It's an increasing trend. Host the IdP in the cloud and federate it to internal apps that haven't moved. Though the opposite is the more usual case.
– Steve, Apr 7 '13 at 8:23

2 Answers

I think it would break SAML (and other similar federation/SSO systems). SAML necessarily has a flow in which the Relying Party (likely on a public network) sends the user to an IdP (e.g. an ADFS box in the Windows domain on the private network) and back out again to log in.
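To make the breakage concrete, here is a minimal sketch of the SAML HTTP-Redirect binding the answer describes: the request is raw-DEFLATE-compressed, base64-encoded, and sent as the `SAMLRequest` query parameter. The IdP hostname and AuthnRequest XML below are placeholders, not taken from any real deployment.

```python
import base64
import urllib.parse
import zlib

def saml_redirect_url(idp_sso_url: str, authn_request_xml: str) -> str:
    """Build a SAML HTTP-Redirect binding URL: raw-DEFLATE the request,
    base64-encode it, and pass it as the SAMLRequest query parameter."""
    comp = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 => raw DEFLATE, no zlib header
    deflated = comp.compress(authn_request_xml.encode()) + comp.flush()
    token = base64.b64encode(deflated).decode()
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": token})

# A public RP redirecting the browser to an intranet-hosted IdP (hypothetical host):
url = saml_redirect_url("https://adfs.corp.internal/adfs/ls",
                        "<samlp:AuthnRequest ID='_example'/>")
```

If `adfs.corp.internal` resolves to a private address, a blanket public-to-private block would stop exactly this redirect, and the login flow would fail.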

Also, in general, private IP ranges and private networks are not the same thing; it's not uncommon for an enterprise to have part of its private network on publicly-routable IPs, plus of course semi-private extranets. These might need to reference resources on the private-IP part of the network, and they wouldn't receive protection from a control based only on private ranges.

So it couldn't be a blanket ban. You'd have to introduce a whole new set of whitelisting controls for what sites are allowed to reference what networks. Is the benefit worth the extra complexity managing this would bring? Maybe, but it's not a clear win.
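As a rough illustration of what such a policy check might look like, here is a hypothetical sketch using Python's stdlib `ipaddress` module. The function name and allowlist mechanism are invented for illustration; note that a rule keyed only on RFC 1918 ranges would, as argued above, miss internal hosts that sit on publicly-routable addresses.

```python
import ipaddress

def request_allowed(origin_ip: str, target_ip: str, allowlist=()) -> bool:
    """Hypothetical browser-side policy: may a page served from origin_ip
    trigger a subresource request to target_ip?"""
    origin = ipaddress.ip_address(origin_ip)
    target = ipaddress.ip_address(target_ip)
    # Public -> public, and anything originating on a private address, stay allowed.
    if not target.is_private or origin.is_private:
        return True
    # A public origin reaching into private space needs an explicit exception.
    return any(target in ipaddress.ip_network(net) for net in allowlist)
```

The `allowlist` parameter stands in for the "whole new set of whitelisting controls" mentioned above; every enterprise federation or extranet scenario would need an entry here, which is where the management complexity comes from.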

The problem is not limited to private IPs; it poses just as big a problem for public services. If you were to force a same-domain policy on all site assets, you would encounter plenty of problems with sites that have multiple domains (Google, Yahoo, Facebook) and with anyone who uses a CDN service.

Obviously this wouldn't be the case when talking strictly about private IP space, but I do not think that is the answer to the problem. It may prevent sites from automatically attacking private resources, but it's still trivial to get an unknowing user to click a link pointed at the same thing.

In the end, the responsibility lies in the hands of the target server. The server should be taking steps to mitigate CSRF attacks, and only parse GET parameters for public/non-sensitive data.
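The server-side mitigation this answer recommends is commonly implemented as a synchronizer token. Here is a minimal stdlib-only sketch; the function names and the form-identifier scheme are illustrative assumptions, not a prescribed API. The server derives a token from a per-session secret, embeds it in each form, and rejects state-changing requests whose token does not match; a forged cross-site request cannot read the token, so it fails even though the browser sends it.

```python
import hashlib
import hmac
import secrets

def issue_token(session_secret: bytes, form_id: str) -> str:
    # Derive a per-form token from the session secret (hypothetical scheme).
    return hmac.new(session_secret, form_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_secret: bytes, form_id: str, submitted: str) -> bool:
    expected = issue_token(session_secret, form_id)
    # Constant-time compare avoids leaking the token byte by byte.
    return hmac.compare_digest(expected, submitted)

# Usage sketch: generate a secret at login, embed the token in the form.
session_secret = secrets.token_bytes(32)
token = issue_token(session_secret, "transfer-funds")
```

A request carrying the embedded token verifies; one carrying a guessed token, or a token minted under a different session, does not.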

Thanks for the thoughts. "If you were to force a same-domain policy on all site assets" -- Well, it's not what I asked about. I asked about the narrow case of blocking requests to private IPs from being triggered by public sites. "it's still trivial to get an unknowing user to click a link pointed at [a private address]" -- good point, and maybe the best response to my question so far. Thank you!
– D.W., Apr 7 '13 at 8:17