When successful, the attacker can decrypt any value sent to the client encrypted with the ASP.NET key. Examples of these are authentication cookies and encrypted View State (the latter is not necessarily encrypted to begin with, depending on the configuration).

Having the ability to decrypt any value also gives the attacker the ability to re-encrypt a modified version of the data. It's not without restrictions: the mechanism that makes it possible forces a block of garbage data the attacker can't control to end up somewhere in the message. The attacker does control where in the message the garbage appears, so depending on the message it can be placed in a location that is irrelevant or has a side effect the attacker doesn't care about. The size of the garbage matches the cipher's block size, so we are talking 16 bytes for a 128-bit block.
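To make the garbage-block restriction concrete, here is a sketch of the forging technique (often called CBC-R) in Python. The 16-byte Feistel construction is a toy stand-in for the real block cipher, and the direct `decrypt_block` call marked below stands in for the byte-at-a-time intermediate-state recovery the padding oracle provides; the attacker never holds the key.

```python
import hashlib

BLOCK = 16

def _f(key: bytes, half: bytes) -> bytes:
    return hashlib.sha256(key + half).digest()[:8]

def encrypt_block(key: bytes, block: bytes) -> bytes:
    # 4-round Feistel network: a toy, invertible 16-byte "block cipher".
    l, r = block[:8], block[8:]
    for i in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _f(key + bytes([i]), r)))
    return l + r

def decrypt_block(key: bytes, block: bytes) -> bytes:
    l, r = block[:8], block[8:]
    for i in reversed(range(4)):
        l, r = bytes(a ^ b for a, b in zip(r, _f(key + bytes([i]), l))), l
    return l + r

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_decrypt(key: bytes, iv: bytes, ct: bytes) -> bytes:
    prev, pt = iv, b""
    for i in range(0, len(ct), BLOCK):
        pt += xor(decrypt_block(key, ct[i:i + BLOCK]), prev)
        prev = ct[i:i + BLOCK]
    return pt

def cbcr_forge(key: bytes, wanted: bytes):
    # CBC-R: build the ciphertext right to left.  In the real attack the
    # intermediate value D(C) is recovered byte by byte with the padding
    # oracle; calling decrypt_block here stands in for that step -- the
    # attacker never holds `key` directly.
    blocks = [wanted[i:i + BLOCK] for i in range(0, len(wanted), BLOCK)]
    ct = [bytes(BLOCK)]                     # last block: arbitrary
    for pt in reversed(blocks):
        inter = decrypt_block(key, ct[0])   # <- the oracle stands in here
        ct.insert(0, xor(inter, pt))
    iv = bytes(BLOCK)                       # fixed IV, not attacker-chosen
    return iv, b"".join(ct)
```

Decrypting the forged ciphertext yields one uncontrolled 16-byte block followed by the chosen plaintext, which is exactly the restriction described above.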

It's important to note that at this stage the attacker doesn't hold the actual keys. That matters, because it means the attacker can't get past signed values. Even if the re-encryption ability were applied to signed values, the attacker would still end up with garbage somewhere in the message, making it very hard to produce data that passes signature validation.

So far, the attacker can read any sensitive information stored in the encrypted values sent to the client. Let's take a look at some scenarios:

Ability to view all view state data.

By default the View State isn't encrypted at all, so being able to read that data is usually nothing new.

When following best practices there isn't anything in there that matters, and the app shouldn't rely on it being tamper-proof anyway.

That said, if the application is configured to validate the View State with an HMAC, the attacker is prevented at this stage from posting modified View State.

While there may not be sensitive information in the View State, it could still be the padding oracle the attacker needs.
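The protection an HMAC gives is easy to sketch. This is not the actual View State MAC format, just the general shape of appending and verifying a keyed hash over the payload:

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # Append an HMAC-SHA256 tag -- similar in spirit to View State
    # validation, not the real ASP.NET wire format.
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, blob: bytes) -> bool:
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)  # constant-time compare
```

Without the key, the attacker's re-encrypted message (garbage block included) cannot carry a tag that verifies, so tampering is rejected.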

Ability to read session cookies.

This is totally unrelated to the attack, since these cookies contain an encoded value generated by a cryptographically secure random provider (so it can't be predicted). In other words, these cookies aren't encrypted at all.

For extra protection, you can store a value you can relate to the authentication in the session and check the authentication against the session to make sure the user owns that session. Of course that won't work if the feature works for anonymous users, although you could issue a special ticket for those users if protecting the session is important.
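A minimal sketch of that session-ownership check, with hypothetical names (`login` and `check_owner` are not ASP.NET APIs, just the idea):

```python
def login(session: dict, username: str) -> None:
    # At authentication time, record who the session belongs to.
    session["owner"] = username

def check_owner(session: dict, authenticated_user: str) -> bool:
    # On each request, the authenticated identity must match the identity
    # stored in the session, or the session is rejected.
    return session.get("owner") == authenticated_user
```

A session with no recorded owner, or one recorded for a different user, fails the check even if its id is known.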

Ability to read the authentication cookie.

This just allows the attacker to see the username and how the rest of the ticket structure is built.

As the authentication ticket in the cookie is signed, the attacker can't forge other users' cookies with the padding oracle attack alone.

If the attacker sniffed an authentication cookie/ticket, then the account was already compromised, regardless of this attack.

As authentication cookies are sent with every request, the only way to really protect them is to send all requests that carry the cookie over HTTPS. See the "Cookie hijacking" section in http://en.wikipedia.org/wiki/HTTP_cookie
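Marking the cookie as HTTPS-only (and script-inaccessible) is a one-flag change. Here is what it looks like with Python's standard `http.cookies` module as a stand-in for whatever framework actually emits the header:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["auth"] = "ticket-value"        # placeholder ticket
cookie["auth"]["secure"] = True        # only sent over HTTPS
cookie["auth"]["httponly"] = True      # not readable from client script
header = cookie.output()               # rendered Set-Cookie header line
```

With the `Secure` attribute set, the browser never sends the cookie over plain HTTP, which is what defeats the hijacking scenario above.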

Any custom use of encryption using machine key in the site.

After a successful attack, that data can be decrypted as well.

It could also be attacked directly if no measures that consider padding oracle exposure are taken. Even with the Microsoft workaround mentioned in the advisory, the code might still suffer from the issue by not treating invalid data the same way as a failed decryption.
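One way to avoid creating your own padding oracle is to collapse every failure mode into one indistinguishable error. A sketch, with `decrypt` and `verify_mac` as hypothetical callables supplied by the application:

```python
class DecryptionError(Exception):
    """Single error type for every kind of failure."""

def safe_open(decrypt, verify_mac, blob: bytes) -> bytes:
    # Collapse all failure modes (bad padding, bad MAC, malformed input)
    # into one generic error, so the caller's observable behaviour can't
    # be used to distinguish padding errors from anything else.
    try:
        payload = decrypt(blob)        # may raise on invalid padding
        if not verify_mac(payload):
            raise DecryptionError()
        return payload
    except Exception:
        raise DecryptionError("invalid data")
```

The caller sees the same exception, with the same message, whether the padding, the MAC, or anything else was wrong.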

WebResource.axd – here it gets uglier:

This feature is used by ASP.NET to allow serving embedded resources. An example of these would be embedded CSS files that belong to a custom control and are embedded for ease of distribution.

This special handler only uses decryption to determine which embedded resource to serve. This means the attacker can forge valid requests, putting the garbage in an irrelevant part of the message.

The request includes which assembly the resource lives in, so the attacker gains the ability to access any embedded resource in any assembly accessible to the application.

ScriptResource.axd – the real deal:

This feature is related to combining multiple script (js) files, which might belong to multiple controls, into one response, but it's not restricted to that. That's what it's supposed to be for, but I can't confirm whether it has other intended uses.

It works just like WebResource.axd: encryption only, no signing.

Its scope goes beyond serving embedded resources: it also serves files.

The files served aren't restricted to JavaScript files, so it can serve any file.

The above means the attacker has gained the ability to read any file accessible to the application. The handler isn't restricted to .js files and takes no special action to block sensitive files. One such file is web.config.

ASP.NET normally uses the handler configuration to determine which files are not accessible, so web.config is mapped to an access-forbidden handler. ScriptResource.axd doesn't go through that mechanism.

The workaround that prevents the attack from reaching this stage is the one linked at the beginning of this post.

It's obvious that gaining access to web.config is very serious in ASP.NET (without dismissing all the other access gained so far). Many consider the file something specially protected by ASP.NET that should never be exposed by the application. Sure, that's a wrong assumption, but it's a common one. Not relying on that assumption means extra complexity, like using encrypted values and keeping the key elsewhere on the server, with the corresponding configuration and permissions, and then dealing with deploying all that in web farm scenarios.

While sensitive access information might be exposed in web.config, there could be other measures in place that prevent doing anything with it: the database not being reachable from outside, other pieces sitting behind a firewall, extra measures to get in, etc. All that said, it makes the attacker's job of preparing an attack at those levels a lot easier.

Something that sometimes happens with web.config, especially in web farm or hosted scenarios, is that keys like the machineKey end up being set in it. Clearly, with the current vulnerability in the wild, the consequences are catastrophic. Once the attacker gains those keys, the ability to sign is gained, which means any request can be forged, signed or not; the attacker can now forge an authentication cookie, at least with the default forms authentication implementation. There is a question about that here: is-there-a-different-way-to-do-asp-net-forms-authentication-thats-already-built.
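The escalation is easy to see in a sketch: with the leaked key, the attacker signs whatever he wants, so the signature check that blocked the earlier stages no longer protects anything. The ticket layout below is invented for illustration, not the real forms authentication format:

```python
import hashlib
import hmac

def forge_signed_ticket(machine_key: bytes, username: bytes) -> bytes:
    # With the leaked key the attacker computes a valid tag himself.
    # "user=...;role=admin" is a made-up layout, not the real ticket.
    ticket = b"user=" + username + b";role=admin"
    return ticket + hmac.new(machine_key, ticket, hashlib.sha256).digest()
```

Any server that validates with the same leaked key will accept the forged ticket as genuine.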

DotNetNuke is the ASP.NET application the researchers picked to demonstrate the size of the issue. The default install generates the machineKey at the site level, so by using the padding oracle attack they were able to forge an authentication cookie for an admin user. Worse, they combined the access the admin account gave them with a separate vulnerability/attack and managed to escalate all the way to SYSTEM privileges on the target server.

If you haven't already after seeing the links at the top, go apply the workaround to your applications now. If you want to know what a padding oracle attack is, check out: Understanding Padding Oracle Attacks. If you are on ASP.NET MVC, SharePoint, or anything else hosted in ASP.NET: yes, you are vulnerable if you don't have the workaround (or a solid equivalent) in place.

7 comments:

Can you develop on this part :# Its scope is beyond just serving embedded resources, it also serves files.# The files served aren’t restricted to be javascript files, so it can serve any file.# The above means the attacker has gained the ability to access any file accessible to the application. Its not restricted to only js files, and doesn’t take any special action to block special files. One of such files is the web.config.

So far, I haven't found any way to serve files that are not embedded resources (from tests + reading code from Reflector)

That's right, you can see confirmation of it here: http://blogs.technet.com/b/srd/archive/2010/09/17/understanding-the-asp-net-vulnerability.aspx

"If the ASP.Net application is using ASP.Net 3.5 SP1 or above, the attacker could use this encryption vulnerability to request the contents of an arbitrary file within the ASP.Net application. The public disclosure demonstrated using this technique to retrieve the contents of web.config. Any file in the ASP.Net application which the worker process has access to will be returned to the attacker."

Thanks for explaining this Freddy, the pre / post SP1 situation is obviously a bit of a double-edged sword. I'd love to see more info on what you've found in the ScriptResourceHandler if you're looking for more material.

Your posting is one of the few posts out there that comes very close to describing the issues. Contrary to what others have posted, the padding oracle exploit does not try to find the encryption key -- it only tries to find the decrypted form of the encrypted data (unless the encryption key was somehow copied over into that encrypted data). It also facilitates a forgery of an encrypted request (again not necessarily knowing the encryption key). It's only if the forged request was successful that results in the download of an arbitrary file.