Security theater on the web

Perhaps the most important security concept within modern browsers is the idea of the same-origin policy. The principal intent for this mechanism is to make it possible for largely unrestrained scripting and other interactions between pages served as a part of the same site (understood as having a particular DNS host name, or part thereof), whilst almost completely preventing any interference between unrelated sites.

That’s Michal Zalewski of Google, in his Browser Security Handbook (now also available in expanded form as a book, The Tangled Web). I had thought I understood the same-origin policy, both how it works and what it’s for. Turns out I was totally wrong about the how-it-works part—about how the policy is enforced by the browser. Now that I’ve been straightened out on that point, I’m more confused than ever about the purpose of the policy.

This uncomfortable episode in my education began with my post about knowls, the little drawers full of knowledge that I look upon as a step toward footnotes on the web. In that article I pointed out a problem with the concept of a “knowlpedia,” a public repository of knowls: the same-origin policy won’t allow a web page loaded from one server to incorporate HTML content from another server. Thus when you are reading a web page hosted at bit-player.org, code within that page can freely access knowls that are also stored at bit-player.org, but it cannot retrieve knowls from aimath.org.

Harald Schilly, the author of the knowl code, wrote back that he had a one-line fix for the same-origin problem. The one line is “Access-Control-Allow-Origin: *”; it’s an HTTP header to be returned by the server of the “foreign” content. I was skeptical of this solution. In fact, I was absolutely certain it could not work. Let me explain why.
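For concreteness, here is roughly what that one-line fix amounts to on the wire. This is only a sketch in Python; `build_response` is a made-up helper for illustration, not anything from the actual knowl code or server configuration.

```python
def build_response(body, allow_any_origin=False):
    """Assemble a minimal HTTP/1.1 response for a fragment of HTML,
    optionally including Schilly's one-line fix."""
    headers = [
        b"HTTP/1.1 200 OK",
        b"Content-Type: text/html; charset=utf-8",
        b"Content-Length: " + str(len(body)).encode(),
    ]
    if allow_any_origin:
        # The one-line fix: tell browsers that any origin may read this response.
        headers.append(b"Access-Control-Allow-Origin: *")
    return b"\r\n".join(headers) + b"\r\n\r\n" + body
```

In a real deployment the header would be added in the web server’s configuration (a single directive in Apache or nginx), but the effect on the response is just this one extra line.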

If a web browser is going to prevent illicit cross-site communication, how can it do so? Easy! Your browser knows the origin of the page you’re looking at right now: It came from http://bit-player.org:80 (where “http” designates the Hypertext Transfer Protocol, “bit-player.org” is the host name, and “80” is the port number). If code within this page tries to access foreign HTML—say by requesting a knowl at http://aimath.org:80—the browser detects the mismatched host names and refuses to allow the request. I had always assumed that in a case like this the requesting packets are never sent from your computer to the foreign server. That’s why I was so sure that no amount of fiddling with server configurations could have any effect on the same-origin policy, because the request would be blocked long before it reached the server.
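The comparison itself is simple enough to sketch in a few lines of Python. This is a simplification—real browsers also contend with `document.domain`, subdomains, and other complications—but the core of the check is an exact match on the (scheme, host, port) triple:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Reduce a URL to its (scheme, host, port) origin triple."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    """Two URLs share an origin only if all three components match."""
    return origin(a) == origin(b)

# The knowl case from the text: same host (port 80 is implied), allowed...
same_origin("http://bit-player.org/page", "http://bit-player.org:80/knowl")  # True
# ...different host, blocked:
same_origin("http://bit-player.org/page", "http://aimath.org/knowl")  # False
```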

This was my mental model of same-origin enforcement, and it still strikes me as the most efficient, sensible and even obvious solution. However, the model is totally wrong. Browsers do not block a request that violates the cross-origin rules. The browser merrily sends the request to the foreign server, awaits the response, and then dumps the content of the response without inserting it into the displayed document or otherwise showing it to the user. This behavior seems so pointless and wasteful, and possibly risky, that I had to confirm for myself that it really happens. It’s not hard to do so. The debugging tools built into modern browsers will show you the headers of each request and response. Here’s what Firefox reports when I try to get a knowl from aimath.org:

Note that the response headers indicate a content length of 489 bytes. None of that content is actually loaded into the page or displayed to the reader, but the server is sending it. I confirmed this with a packet sniffer (Wireshark) that intercepts data moving over the network connection. The full content of the requested knowl is sent back to the browser, but then it’s deep-sixed before anybody sees it.
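As far as I can tell, the behavior amounts to something like the following. In this Python sketch, `send` is a stand-in for the actual network round trip, and the origin check is reduced to a comparison of host names:

```python
from urllib.parse import urlsplit

def cross_origin_fetch(page_url, request_url, send):
    """Model of the enforcement observed above: the request goes out and
    the response comes back no matter what, but the body is withheld from
    the page when the host names differ.  `send` is a stand-in for the
    real network round trip (it takes a URL and returns the body)."""
    body = send(request_url)               # the packets are sent either way
    if urlsplit(page_url).hostname != urlsplit(request_url).hostname:
        return None                        # response received, then deep-sixed
    return body
```

Note that the request reaches the server and the full response comes back over the wire in both cases; the only difference is whether the page ever gets to see it.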

What I don’t get is why browsers implement the same-origin policy in this roundabout way. What’s the point of sending the request if you know you’re going to ignore the response? I suppose web-site redirection is one scenario where sending the message might not be futile: If aimath.org redirects the request back to bit-player.org, then the transaction can be allowed to proceed. But how common is that?

Very likely there’s some other good reason for doing it this way. (Having been persuaded that my first hypothesis was totally bogus, I’m willing to entertain the possibility that I still don’t understand clearly.) But none of the tutorials and reference documents I’ve consulted (see list below) have explained it to me.

The “Access-Control-Allow-Origin: *” header that Schilly mentioned is part of a recent W3C draft standard called Cross-Origin Resource Sharing, or CORS, that lifts some of the strictures imposed by the same-origin policy. For simple requests, the browser will accept and display results from a foreign site if the appropriate header is included in the response. Thus the third-party site is given a measure of control over whether or not cross-origin requests are allowed. The CORS proposal goes back to 2005, but browsers began supporting it only in 2009 or 2010. (Opera hasn’t caught up yet; Internet Explorer does it a little differently.)
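For these simple requests, the browser-side decision boils down to a header comparison, something like the sketch below. (This is much simplified; the real specification also covers preflight requests, credentialed requests, and other machinery.)

```python
def cors_allows(response_headers, requesting_origin):
    """Simplified form of the browser's CORS check for a "simple" request:
    expose the response to the page only if the server's
    Access-Control-Allow-Origin header names the requesting origin
    or the wildcard.  Real CORS adds preflights, credential rules,
    and more; this is only the core comparison."""
    allowed = response_headers.get("Access-Control-Allow-Origin")
    return allowed == "*" or allowed == requesting_origin

# A server that sets the wildcard header opts in for everyone:
cors_allows({"Access-Control-Allow-Origin": "*"}, "http://bit-player.org")  # True
# No header, no access -- the pre-CORS status quo:
cors_allows({}, "http://bit-player.org")  # False
```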

I don’t pretend to understand all the implications of this change in the way the web works. Presumably, the scenario the designers have in mind is something like this:

Naive User visits sneakthief.com, a web site that plays amusing videos of kittens while running a JavaScript program that requests cross-origin access to fortknox.com, sending along the cookies that authenticate Naive User as an account holder at Fort Knox. If fortknox.com responds with the keys to the vault, they will be transmitted back to sneakthief.com. The protection against this outcome is our faith that fortknox.com will not carelessly set the Access-Control-Allow-Origin header. I would have felt a little safer if the response from fortknox.com were blocked unconditionally, regardless of header flags. And safer still if the web worked my way, and the request were blocked before it could even be sent.

Zalewski comments that the main rationale for introducing CORS is that there are so many other ways of undermining or circumventing the same-origin policy (iframes, server-side proxies, JSONP, hidden forms, Flash, Java) that we might as well build a well-structured and well-documented facility for doing what everybody is doing anyway. In other words, leave the doors unlocked so nobody will smash a window while breaking in.

This may well be the wisest policy. Zalewski offers this meditation in the epilogue to his book:

I am haunted by the uncomfortable observation that in real life, modern societies are built on remarkably shaky ground. Every day, each of us depends on the sanity, moral standards, and restraint of thousands of random strangers—from cab drivers, to food vendors, to elevator repair techs…. In this sense, our world is little more than an incredibly elaborate honor system that most of us voluntarily agree to participate in. And that’s probably okay….

It’s difficult to understand, then, why we treat our online existence in such a dramatically different way…. The only explanation I can see is that humankind has had thousands of years to work out the rules of social engagement in the physical realm…. Unfortunately for us, we have difficulty transposing these rules to the online ecosystem, and this world is so young, it hasn’t had the chance to develop its own, separate code of conduct yet.

At least you’ve clarified what Access-Control-Allow-Origin is trying to do - stop a rogue site using Fred’s cookies to log onto Fred’s bank and steal his money. I was kinda mystified as to that - it obviously doesn’t help a web service avoid access by things that aren’t *their* application.