Cross-Site XMLHttpRequest

I’ve just finished writing up some docs on the new Cross-Site XMLHttpRequest feature in Firefox 3. I was a little worried at first, but it definitely appears to be both easy to implement and easy to use. Specifically, it’s an implementation of the W3C Access Control working draft (which is respected by Firefox’s XMLHttpRequest).

In a nutshell, there are two techniques you can use to achieve cross-site requests: specifying a special Access-Control header for your content, or including an access-control processing instruction in your XML.

More information can be found in the documentation but here’s a quick peek at what your code might look like:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Change this to allow="yourdomain.com" to make it accessible to your site, or allow="*" for ANYONE to be able to access it. -->
<?access-control allow="ejohn.org"?>
<simple><name>John Resig</name></simple>
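To make the mechanism concrete, here’s a rough sketch of the kind of decision the browser makes against that allow value. This is a hypothetical, pure-logic approximation for illustration only, not Firefox’s actual implementation:

```javascript
// Hypothetical sketch of the browser-side check: given the allow value
// from the processing instruction and the domain of the requesting page,
// decide whether the cross-site read is permitted.
function isAllowed(allowValue, requestingDomain) {
  // allow="*" opens the document to ANY site.
  if (allowValue === "*") return true;
  // Otherwise only the listed domain may read it.
  return allowValue === requestingDomain;
}
```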

This is the same-old pure-blood JavaScript/DOM/XMLHttpRequest we’re used to. For some limited applications, I think this functionality is already going to be terribly useful – and once wider adoption starts to trickle in we’ll certainly see a whole range of applications, especially in the area of client-side applications and mashups.

Maybe I’m a real killjoy, but I worry about careless web authors implementing things like allow as above, and then letting data leak via ‘legitimate’ XSS.

Any programmer who /understands/ these concepts should set their code to carefully allow only certain sites access, and/or have generic levels of access to public sites…but there are a /lot/ of PHP-‘users’ who don’t know half of what they entered into an editor.

I would certainly /hope/ a bank wouldn’t do something stupid like implement this carelessly, but if they did, or some up-and-coming Facebook-like site did it, some people could have a very bad day. I’m sure these factors were considered already, but I still find it troubling to be breaking down the walls of security present in current browsers for the sake of Web 2.0.

What exactly is the reason we need this? Has anybody here really understood why XMLHttp is currently limited to one host and cannot communicate cross-domain? I really do not understand it. If XMLHttp cannot do this by default, why is it still possible to load scripts and images from other servers? Why can I do exactly the same type of cross-domain communication using Flash, and maybe using Silverlight in the future? What is the original reason for this limitation? Is it documented anywhere?

If, as mentioned in the spec, HTTP DELETE is problematic because it may delete data, why can’t we filter such actions when detecting cross-domain communication? GET and POST are already possible simply by submitting a simple form. It is even possible to generate these form elements dynamically, and this also works cross-domain. At least these two HTTP methods should be enabled by default for cross-domain communication. The open web, as often mentioned by Alex Russell, really needs features comparable with closed-source software such as Flash or Silverlight.
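The dynamic-form trick described above can be sketched as follows. This is a hypothetical helper that builds the markup for an auto-submitting cross-domain POST form; it builds a string so the sketch is self-contained, though in a real page you would create the same elements via the DOM (and escape the values, which this sketch does not do):

```javascript
// Hypothetical sketch: cross-domain POST has always been possible with a
// plain HTML form. This builds the markup for an auto-submitting one.
// NOTE: values are not HTML-escaped here; a real helper must escape them.
function buildCrossDomainForm(action, fields) {
  var inputs = [];
  for (var name in fields) {
    inputs.push('<input type="hidden" name="' + name +
                '" value="' + fields[name] + '">');
  }
  return '<form method="POST" action="' + action + '">' +
         inputs.join("") +
         '</form><script>document.forms[0].submit()<\/script>';
}
```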

I’m still under the impression – and correct me if I’m wrong – that all these measures are tailored to protect the server and its documents. But I thought the issue was to protect the client! Server protection has been around for ages: if I don’t want a certain domain to retrieve my document, I simply add an access restriction to my Apache config, and I’m done.

The real challenge is to protect the client, and it should be solved on the client, IMHO. Like with any other piece of application software, the user should be able to decide what he wants to allow *per web application*. The browser should prompt the user “This web page wants to connect to … Do you allow this?” as soon as it hits a cross-site request. And the user decides whether he trusts the web app or not. Put the control back into the hands of the users. Any malicious content lies on some server, and intruders will make all of it freely available.

I agree with Thomas. I never understood the NEED to modify the client security model to allow for this. If this is something the software needs to do, then the developer can implement a proxy on the server side. At least that way the developer has sole discretion over the connections. Just more to go wrong if you ask me.
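The proxy approach described above might gate requests like this – a hypothetical whitelist check run on the server before forwarding (the host list and function names are made up for illustration; the actual forwarding logic is omitted):

```javascript
// Hypothetical server-side check for a cross-domain proxy: only forward
// requests whose target host appears on an explicit whitelist, so the
// developer keeps sole discretion over which connections are made.
var ALLOWED_HOSTS = ["api.example.com", "feeds.example.org"]; // assumed list

function mayProxy(targetUrl) {
  // Extract the host from an absolute HTTP(S) URL.
  var match = /^https?:\/\/([^\/:]+)/.exec(targetUrl);
  if (!match) return false; // not an absolute HTTP(S) URL: refuse
  var host = match[1].toLowerCase();
  for (var i = 0; i < ALLOWED_HOSTS.length; i++) {
    if (ALLOWED_HOSTS[i] === host) return true;
  }
  return false;
}
```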

Can anyone point me to an explanation of why this particular mechanism is ‘secure’? If you can get a script to load from an arbitrary URL, then surely you can make it point to a site that you control?

The only thing this would seem to achieve is that web site owners will get tons of requests from people to add this header or processing instruction, because they don’t want to set up a proxy. I can see it happening that eventually this will be set de facto on all documents/servers.

Seems to me that you might as well just let people do cross-site requests without any strange requirements like this for the remote content. That way things like W3C Tabulator can work without having to allow special privileges to the page in your browser.

Ok, never mind, I should read the spec before commenting :). Still, I fear that web site owners will be bothered by a lot of requests for this from users, and that they will set this without properly considering the security implications.

Is it really desirable that every web service has to include the processing instruction (or the header) on their pages?

I also hope that non-private HTTP headers such as Content-Type, Content-Encoding and Content-Language will still be accessible in cross-site requests.

I agree with those saying that this spec is misguided. But bothering users too much is also not good. How are they to know in every case what things mean? Further, even communication with the current remote server is already dangerous. We complain when desktop apps report on our behavior but use web sites all the time that do the same. Without a much better security model, I think it’s just a matter of being careful where/how you surf. Not completely unlike being cautious in real life.

The web just plain isn’t secure, and it doesn’t seem to be getting better.

User dialogs tend to be a less than ideal solution. For example, it was one of the major failings in the ActiveX security model. A much more consistent strategy is for the site to provide a policy. This is necessary because the site developers have the best vantage point to determine how their site can be accessed safely. However, the enforcement needs to occur at the client because only the client has complete context of the relationships between multiple sites.

This feature is inevitable. There is a need for it and it’s going to be implemented. Most people kid themselves about the “new” security issues it’s going to introduce. Most attacks can be performed today: if you want cross-site internet/intranet DDoS, use the image tag. There is a business demand for cross-domain stuff, there is money to be made. Get over it, people. Just start training your programmers on how to use it.

@Ashish: I would prefer you protect your business interests outside the realms of browser technology like Javascript. Those who subvert will continue to do so, which means that this kind of opt-out server-side change will only hinder everyone else.

@Nathan: You do realize that this is opt-in, right? You don’t have to change a single thing in order to maintain the current security model. If you want your documents to be accessible in a cross-domain manner, then you opt-in to the Access-Control scheme – and even then, only for the domains that you specify.
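For illustration, the per-domain opt-in could be emitted with a tiny helper that formats a header value in the angle-bracket style shown in the examples above. This is hypothetical: the exact draft syntax for multiple domains is an assumption here, modeled on the single-domain form from the post:

```javascript
// Hypothetical helper: format an Access-Control header value for a list
// of domains, following the "allow <domain>" style from the examples.
function accessControlHeader(domains) {
  var parts = [];
  for (var i = 0; i < domains.length; i++) {
    parts.push("<" + domains[i] + ">");
  }
  return "allow " + parts.join(" ");
}
```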

Not a big issue; I’d expect anybody using the example to figure it out by themselves, but I prefer a working copy-and-paste solution.
The HTML comment in front of the PHP might force Apache to send out the HTTP header plus the comment as content, making it impossible for PHP to add an HTTP header. You should wrap the comment in PHP like this:
<?php
// Change this to allow <yourdomain.com> to make it accessible to your site, or allow <*> for ANYONE to be able to access it.
header('Access-Control: allow <ejohn.org>');

?>
<b>John Resig</b>

I would like to see Adobe allow Flash to also adopt this scheme, so we don’t have to keep synchronising our access configuration between HTTP and HTTP accessed via Flash.

When Mosaic and IE were surging ahead a few of us (Well, lots of us really, but a slight portion of all hands.) pulled out the stops to make sure that folk surfing the very new WWW using Lynx didn’t get marginalized and shut out.

This very lovely development leaves Win98SE users sucking fumes. (I can hear the snickers … and none of them are intelligent or rational.) FF3 raises the bar to Win2K … how many Win98 boxes are out there right now?

/In effect/ … all rationalizations and prevarication aside, in effect FF3 will cut some users loose.
We aren’t just talking about choice of browsers here: we’re telling folk “Since you can’t upgrade your OS, you can’t use these services”.

Finally, that’s a good thing I wanted to see. But since most browsers won’t work with this, it’s merely an interesting, but still useless, feature; “proxies” will stay in scope for quite a long time :]

Were you ever able to get this working with POST requests? I get a [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsIXMLHttpRequest.setRequestHeader]" nsresult: "0x80004005 (NS_ERROR_FAILURE)" ..] exception without FF3b4 even checking the site for permission.

Um, neither of these examples works in the latest Firefox beta (Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9b5pre)).
Is the feature currently broken, or what? I get this message: Permission denied to call method XMLHttpRequest.open
[Break on this error] xhr.open("GET", "http://dev.jquery.com/~john/xdomain/test.xml", true);
I’d love to get this working.

I don’t suppose there is any way to get my hands on a beta copy that has this feature enabled? Just working on some internal testing (access VLAN from DMZ) and having the ability to use XmlHttpRequest would certainly make my life easier.