30/06/2016

After covering a TrustZone kernel vulnerability and exploit in the previous blog post, I thought this time it might be interesting to explore some of the implications of code-execution within the TrustZone kernel. In this blog post, I'll demonstrate how TrustZone kernel code-execution can be used to effectively break Android's Full Disk Encryption (FDE) scheme. We'll also see some of the inherent issues stemming from the design of Android's FDE scheme, even without any TrustZone vulnerability.

I've been in contact with Qualcomm regarding the issue prior to the release of this post, and have let them review the blog post. As always, they've been very helpful and fast to respond. Unfortunately, it seems as though fixing the issue is not simple, and might require hardware changes.

If you aren't interested in the technical details and just want to read the conclusions - feel free to jump right to the "Conclusions" section. In the same vein, if you're only interested in the code, jump directly to the "Code" section.

[UPDATE: I've made a factual mistake in the original blog post, and have corrected it in the post below. Apparently Qualcomm are not able to sign firmware images, only OEMs can do so. As such, they cannot be coerced to create a custom TrustZone image. I apologise for the mistake.]

And now without further ado, let's get to it!

Setting the Stage

A couple of months ago the highly-publicised case of Apple vs. FBI brought attention to the topic of privacy - especially in the context of mobile devices. Following the 2015 San Bernardino terrorist attack, the FBI seized a mobile phone belonging to the shooter, Syed Farook, with the intent to search it for any additional evidence or leads related to the ongoing investigation. However, despite being in possession of the device, the FBI were unable to unlock the phone and access its contents.

This may sound puzzling at first. "Surely if the FBI has access to the phone, could they not extract the user data stored on it using forensic tools?". Well, the answer is not that simple. You see, the device in question was an iPhone 5c, running iOS 9.

As you may well know, starting with iOS 8, Apple has automatically enabled Full Disk Encryption (FDE) using an encryption key which is derived from the user's password. In order to access the data on the device, the FBI would have to crack that encryption. Barring any errors in cryptographic design, this would most probably be achieved by cracking the user's password.

"So why not just brute-force the password?". That sounds like a completely valid approach - especially since most users are notoriously bad at choosing strong passwords, even more so when it comes to mobile devices.

However, the engineers at Apple were not oblivious to this concern when designing their FDE scheme. In order to try and mitigate this kind of attack, they've designed the encryption scheme so that the generated encryption key is bound to the hardware of the device.

In short, each device has an immutable 256-bit unique key called the UID, which is randomly generated and fused into the device's hardware at the factory. The key is stored in a way which completely prevents access to it using software or firmware (it can only be set as a key for the AES Engine), meaning that even Apple cannot extract it from the device once it's been set. This device-specific key is then used in combination with the provided user's password in order to generate the resulting encryption key used to protect the data on the device. This effectively 'tangles' the password and the UID key.
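Conceptually, the tangling can be modelled as a key derivation that mixes the user's password with the device-bound UID key. The snippet below is an illustrative model only - Apple's actual tangling function is proprietary, and the real UID key can never be read out by software:

```python
import hashlib

def derive_fde_key(password: bytes, device_uid: bytes, iterations: int) -> bytes:
    """Illustrative model only: mix the user's password with a device-unique
    key. On a real device the UID never leaves the AES engine, so this
    derivation can only ever be computed on the device itself."""
    return hashlib.pbkdf2_hmac("sha256", password, device_uid, iterations)

# Without the correct 256-bit UID, off-device guessing is hopeless:
key = derive_fde_key(b"hunter2", b"\x42" * 32, 100_000)
print(key.hex())
```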

Apple's FDE KDF

Binding the encryption key to the device's hardware allows Apple to make the job much harder for would-be attackers. It essentially forces attackers to use the device for each cracking attempt. This, in turn, allows Apple to introduce a whole array of defences that would make cracking attempts on the device unattractive.

For starters, the key-derivation function shown above is engineered in such a way so that it would take a substantial amount of time to compute on the device. Specifically, Apple chose the function's parameters so that a single key derivation would take approximately 80 milliseconds. This delay would make cracking short alphanumeric passwords slow (~2 weeks for a 4-character alphanumeric password), and cracking longer passwords completely infeasible.
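The arithmetic behind that estimate is easy to verify. Here's a quick back-of-the-envelope calculation, assuming a 62-character alphanumeric alphabet:

```python
# Back-of-the-envelope check of Apple's ~80ms-per-guess rate limiting.
GUESS_TIME = 0.08          # seconds per key derivation, per Apple's docs
ALPHABET = 26 + 26 + 10    # lowercase + uppercase + digits

def exhaustive_search_days(length):
    """Worst-case time (in days) to try every password of a given length."""
    return (ALPHABET ** length) * GUESS_TIME / 86400

print(f"4 chars: {exhaustive_search_days(4):.1f} days")   # ~2 weeks
print(f"6 chars: {exhaustive_search_days(6):.0f} days")   # completely infeasible
```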

In order to further mitigate brute-force attacks on the device itself, Apple has also introduced an incrementally increasing delay between subsequent password guesses. On the iPhone 5c, this delay was enforced entirely in software. Lastly, Apple has allowed for an option to completely erase all of the information stored on the device after 10 failed password attempts. This configuration, coupled with the software-induced delays, made cracking the password on the device itself rather infeasible as well.
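The combined effect of the delay schedule and the wipe option can be modelled in a few lines (the delay values below are illustrative, not Apple's exact schedule):

```python
# Toy model of the on-device rate limiting described above. The delay
# schedule here is illustrative, not Apple's published values.
ESCALATING_DELAY = {5: 60, 6: 300, 7: 900, 8: 900, 9: 3600}  # seconds
WIPE_AFTER = 10  # the optional "erase data" setting

def try_password(attempt_number: int) -> str:
    """Returns the consequence of the Nth failed password attempt."""
    if attempt_number >= WIPE_AFTER:
        return "device wiped"
    delay = ESCALATING_DELAY.get(attempt_number, 0)
    return f"wait {delay}s before next attempt"

print(try_password(5))   # the delays kick in after a few free attempts
print(try_password(10))  # ...and the data is gone after the tenth
```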

With this in mind, it's a lot more reasonable that the FBI were unable to crack the device's encryption.

Had they been able to extract the UID key, they could have used as much (specialized) hardware as needed in order to rapidly test candidate passwords, most likely recovering the correct password eventually. However, seeing as the UID key cannot be extracted by means of software or firmware, that option is ruled out.

As for cracking the password on the device, the software-induced delays between password attempts and the possibility of obliterating all the data on the device made that option rather unattractive. That is, unless they could bypass the software protections... However, this is where the story gets rather irrelevant to this blog post, so we'll keep it at that.

Going back to the issue at hand - we can see that Apple has cleverly designed their FDE scheme in order to make it very difficult to crack. Android, being the mature operating system that it is, was not one to lag behind. In fact, Android has also offered full disk encryption, which has been enabled by default since Android 5.0.

So how does Android's FDE scheme fare? Let's find out.

Android Full Disk Encryption

Starting with Android 5.0, Android devices automatically protect all of the user's information by enabling full disk encryption.

Android FDE is based on a Linux kernel subsystem called dm-crypt, which is widely deployed and well researched. Off the bat, this is good news - dm-crypt has withstood the test of time, and as such seems like a great candidate for an FDE implementation. However, while the encryption scheme itself may be robust, the system is only as strong as the key used to encrypt the information. Moreover, users tend to choose even weaker passwords on mobile devices than elsewhere, which makes the key derivation function hugely important in this setting.

So how is the encryption key generated?

This process is described in great detail in the official documentation of Android FDE, and in even greater detail in Nikolay Elenkov's blog, "Android Explorations". In short, the device generates a randomly-chosen 128-bit master key (which we'll refer to as the Device Encryption Key - DEK) and a 128-bit randomly-chosen salt. The DEK is then protected using an elaborate key derivation scheme, which uses the user's provided unlock credentials (PIN/Password/Pattern) in order to derive a key which will ultimately encrypt the DEK. The encrypted DEK is then stored on the device, inside a special unencrypted structure called the "crypto footer".

The encrypted disk can then be decrypted by simply taking the user's provided credentials, passing them through the key derivation function, and using the resulting key to decrypt the stored DEK. Once the DEK is decrypted, it can be used to decrypt the user's information.

However, this is where it gets interesting! Just like Apple's FDE scheme, Android FDE seeks to prevent brute-force cracking attacks; both on the device and especially off of it.

Naturally, in order to prevent on-device cracking attacks, Android introduced delays between decryption attempts and an option to wipe the user's information after a few subsequent failed decryption attempts (just like iOS). But what about preventing off-device brute-force attacks? Well, this is achieved by introducing a step in the key derivation scheme which binds the key to the device's hardware. This binding is performed using Android's Hardware-Backed Keystore - KeyMaster.

KeyMaster

The KeyMaster module is intended to assure the protection of cryptographic keys generated by applications. In order to guarantee that this protection cannot be tampered with, the KeyMaster module runs in a Trusted Execution Environment (TEE), which is completely separate from the Android operating system. In keeping with the TrustZone terminology, we'll refer to the Android operating system as the "Non-Secure World", and to the TEE as the "Secure World".

Put simply, the KeyMaster module can be used to generate encryption keys, and to perform cryptographic operations on them, without ever revealing the keys to the Non-Secure World.

Once the keys are generated in the KeyMaster module, they are encrypted using a hardware-backed encryption key, and returned to the Non-Secure World. Whenever the Non-Secure World wishes to perform an operation using the generated keys, it must supply the encrypted "key blob" to the KeyMaster module. The KeyMaster module can then decrypt the stored key, use it to perform the wanted cryptographic operation, and finally return the result to the Non-Secure World.

Since this is all done without ever revealing the cryptographic keys used to protect the key blobs to the Non-Secure World, this means that all cryptographic operations performed using key blobs must be handled by the KeyMaster module, directly on the device itself.
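This trust model can be sketched as a toy implementation. The sketch below is illustrative only - the class and method names are invented, the "encryption" is a simple XOR, and a keyed hash stands in for the real RSA operation; what it captures is that the two internal keys never leave the module, and tampered blobs are rejected:

```python
import hashlib
import hmac
import os

class ToyKeyMaster:
    """Toy model of the key blob protocol: keys leave the 'Secure World' only
    in encrypted, authenticated form, and every cryptographic operation must
    round-trip through this module."""

    def __init__(self):
        self._enc_key = os.urandom(32)   # stand-ins for the derived keys
        self._hmac_key = os.urandom(32)  # that never leave TrustZone

    def generate_key(self) -> bytes:
        """Generate a key and return it as an encrypted, HMAC-tagged blob."""
        secret = os.urandom(32)
        body = bytes(a ^ b for a, b in zip(secret, self._enc_key))  # "encrypt"
        tag = hmac.new(self._hmac_key, body, hashlib.sha256).digest()
        return body + tag

    def sign(self, blob: bytes, data: bytes) -> bytes:
        """Authenticate the blob, recover the key, and sign the data."""
        body, tag = blob[:-32], blob[-32:]
        expected = hmac.new(self._hmac_key, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("tampered key blob")
        secret = bytes(a ^ b for a, b in zip(body, self._enc_key))
        return hmac.new(secret, data, hashlib.sha256).digest()  # stands in for RSA
```

The Non-Secure World only ever holds the opaque blob; it can request signatures, but it can never learn the key itself.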

With this in mind, let's see exactly how KeyMaster is used in Android's FDE scheme. We'll do so by taking a closer look at the hardware-bound key derivation function used in Android's FDE scheme. Here's a short schematic detailing the KDF (based on a similar schematic created by Nikolay Elenkov):

Android FDE's KDF

As you can see, in order to bind the KDF to the hardware of the device, an additional field is stored in the crypto footer - a KeyMaster-generated key blob. This key blob contains a KeyMaster-encrypted RSA-2048 private key, which is used to sign the encryption key in an intermediate step in the KDF - thus requiring the use of the KeyMaster module in order to produce the intermediate key used to decrypt the DEK in each decryption attempt.

Moreover, the crypto footer also contains an additional field that doesn't serve any direct purpose in the decryption process; the value returned from running scrypt on the final intermediate key (IK3). This value is referred to as the "scrypted_intermediate_key" (Scrypted IK in the diagram above). It is used to verify the validity of the supplied FDE password in case of errors during the decryption process. This is important since it allows Android to know when a given encryption key is valid but the disk itself is faulty. However, knowing this value still shouldn't help the attacker "reverse" it to retrieve the IK3, so it still can't be used to help attackers aiming to guess the password off the device.
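Putting the diagram into code, the chain looks roughly like this. This is a sketch: the RSA step is replaced by a stand-in function, and the scrypt parameters (which really come from the crypto footer) are illustrative:

```python
import hashlib

def rsa_sign_stub(data: bytes) -> bytes:
    """Stand-in for the intermediate KeyMaster RSA-2048 signature. On a real
    device this step can only be performed by the KeyMaster trustlet, which
    is exactly what binds the KDF to the device."""
    return hashlib.sha256(b"keymaster-rsa|" + data).digest() * 8  # 2048 bits

def android_fde_kdf(password: bytes, salt: bytes):
    """Sketch of the KDF chain in the diagram above. The real scrypt
    parameters are read from the crypto footer; these are illustrative."""
    n, r, p = 16384, 8, 1
    ik1 = hashlib.scrypt(password, salt=salt, n=n, r=r, p=p, dklen=32)
    ik2 = rsa_sign_stub(ik1)                       # the hardware-bound step
    ik3 = hashlib.scrypt(ik2, salt=salt, n=n, r=r, p=p, dklen=32)
    kek, iv = ik3[:16], ik3[16:]                   # decrypts the stored DEK
    scrypted_ik = hashlib.scrypt(ik3, salt=salt, n=n, r=r, p=p, dklen=32)
    return kek, iv, scrypted_ik
```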

As we've seen, Android FDE's KDF is "bound" to the hardware of the device by the intermediate KeyMaster signature. But how secure is the KeyMaster module? How are the key blobs protected? Unfortunately, this is hard to say. The implementation of the KeyMaster module is provided by the SoC OEMs and, as such, is completely undocumented (essentially a black-box). We could try and rely on the official Android documentation, which states that the KeyMaster module "...offers an opportunity for Android devices to provide hardware-backed, strong security services...". But surely that's not enough.

So... Are you pondering what I'm pondering?

Reversing Qualcomm's KeyMaster

As we've seen in the previous blog posts, Qualcomm provides a Trusted Execution Environment called QSEE (Qualcomm Secure Execution Environment). The QSEE environment allows small applications, called "Trustlets", to execute on a dedicated secured processor within the "Secure World" of TrustZone. One such QSEE trustlet running in the "Secure World" is the KeyMaster application. As we've already seen how to reverse-engineer QSEE trustlets, we can simply apply the same techniques in order to reverse engineer the KeyMaster module and gain some insight into its inner workings.

First, let's take a look at the Android source code which is used to interact with the KeyMaster application. Doing so reveals that the trustlet only supports four different commands - generating a key pair, importing a key pair, signing data, and verifying data:

As we're interested in the protections guarding the generated key blobs, let's take a look at the KEYMASTER_SIGN_DATA command. This command receives a previously encrypted key blob and somehow performs an operation using the encapsulated cryptographic key. Ergo, by reverse-engineering this function, we should be able to deduce how the encrypted key blobs are decapsulated by the KeyMaster module.

The command's signature is exactly as you'd imagine - the user provides an encrypted key blob, the signature parameters, and the address and length of the data to be signed. The trustlet then decapsulates the key, calculates the signature, and writes it into the shared result buffer.

As luck would have it, the key blob's structure is actually defined in the supplied header files. Here's what it looks like:

Okay! This is pretty interesting.

First, we can see that the key blob contains the unencrypted modulus and public exponent of the generated RSA key. However, the private exponent seems to be encrypted in some way. Not only that, but the whole key blob's authenticity is verified by using an HMAC. So where is the encryption key stored? Where is the HMAC key stored? We'll have to reverse-engineer the KeyMaster module to find out.
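For reference, here is the blob layout reconstructed as a ctypes mirror of the qcom_km_key_blob structure from the supplied header files (field names follow the keymaster_qcom.h header shipped in AOSP; treat this as a sketch):

```python
import ctypes

KM_KEY_SIZE_MAX = 512   # room for 4096-bit RSA components
KM_IV_LENGTH = 16
KM_HMAC_LENGTH = 32

class QcomKmKeyBlob(ctypes.Structure):
    """Mirror of the qcom_km_key_blob struct from keymaster_qcom.h."""
    _pack_ = 1
    _fields_ = [
        ("magic_num", ctypes.c_uint32),
        ("version_num", ctypes.c_uint32),
        ("modulus", ctypes.c_uint8 * KM_KEY_SIZE_MAX),            # plaintext
        ("modulus_size", ctypes.c_uint32),
        ("public_exponent", ctypes.c_uint8 * KM_KEY_SIZE_MAX),    # plaintext
        ("public_exponent_size", ctypes.c_uint32),
        ("iv", ctypes.c_uint8 * KM_IV_LENGTH),
        ("encrypted_private_exponent", ctypes.c_uint8 * KM_KEY_SIZE_MAX),
        ("encrypted_private_exponent_size", ctypes.c_uint32),
        ("hmac", ctypes.c_uint8 * KM_HMAC_LENGTH),
    ]

# Everything before the HMAC field - the portion covered by the HMAC -
# spans exactly 0x624 bytes:
print(hex(ctypes.sizeof(QcomKmKeyBlob) - KM_HMAC_LENGTH))
```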

Let's take a look at the KeyMaster trustlet's implementation of the KEYMASTER_SIGN_DATA command. The function starts with some boilerplate validation of the supplied parameters, which we'll skip since it isn't the focus of this post. After verifying all the parameters, the function maps in the user-supplied data buffer, so that it will be accessible to the "Secure World". Eventually, we reach the "core" logic of the function:

Okay, we're definitely getting somewhere!

First of all, we can see that the code calls some function which I've taken the liberty of calling get_some_kind_of_buffer, and stores the results in the variables buffer_0 and buffer_1. Immediately after retrieving these buffers, the code calls the qsee_hmac function in order to calculate the HMAC of the first 0x624 bytes of the user-supplied key blob. This makes sense, since the size of the key blob structure we've seen before is exactly 0x624 bytes (without the HMAC field).

But wait! We've already seen the qsee_hmac function before - in the Widevine application. Specifically, we know it receives the following arguments:

The variable that we've called buffer_1 is passed in as the fourth argument to qsee_hmac. This can only mean one thing... It is in fact the HMAC key!

What about buffer_0? We can already see that it is used in the function do_something_with_keyblob. Not only that, but immediately after calling that function, the signature is calculated and written to the destination buffer. However, as we've previously seen, the private exponent is encrypted in the key blob. Obviously the RSA signature cannot be calculated until the private exponent is decrypted... So what does do_something_with_keyblob do? Let's see:

Aha! Just as we suspected. The function do_something_with_keyblob simply decrypts the private exponent, using buffer_0 as the encryption key!

Finally, let's take a look at the function that was used to retrieve the HMAC and encryption keys (now bearing a more appropriate name):

As we can see in the code above, the HMAC key and the encryption key are both generated using some kind of key derivation function. Each key is generated by invoking the KDF using a pair of hard-coded strings as inputs. The resulting derived key is then stored in the KeyMaster application's global buffer, and the pointer to the key is returned to the caller. Moreover, if we are to trust the provided strings, the internal key derivation function uses an actual hardware key, called the SHK, which would no doubt be hard to extract using software...

...But this is all irrelevant! The decapsulation code we have just reverse-engineered has revealed a very important fact.

Instead of creating a scheme which directly uses the hardware key without ever divulging it to software or firmware, the code above performs the encryption and validation of the key blobs using keys which are directly available to the TrustZone software! Note that the keys are also constant - they are directly derived from the SHK (which is fused into the hardware) and from two "hard-coded" strings.
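Schematically, the derivation behaves like the model below. The SHK value and the label strings here are hypothetical placeholders - the real ones live inside the trustlet - but the crucial property carries over: both inputs are constant, so the derived keys are fixed per device and recomputable by any code running with TrustZone privileges.

```python
import hashlib
import hmac

# Placeholder: the fused hardware key, readable by TrustZone software.
SHK = b"\x00" * 32

def tz_kdf(label: bytes, context: bytes) -> bytes:
    """Hypothetical model of the trustlet's internal KDF: an HMAC-based
    derivation keyed on the SHK, with two hard-coded input strings."""
    return hmac.new(SHK, label + b"\x00" + context, hashlib.sha256).digest()

# Because every input is constant, each device has *fixed* KeyMaster keys:
hmac_key = tz_kdf(b"KM HMAC label", b"KM HMAC context")  # placeholder strings
enc_key = tz_kdf(b"KM ENC label", b"KM ENC context")     # placeholder strings
```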

Let's take a moment to explore some of the implications of this finding.

Conclusions

The key derivation is not hardware bound. Instead of using a real hardware key which cannot be extracted by software (for example, the SHK), the KeyMaster application uses a key derived from the SHK and directly available to TrustZone.

OEMs can comply with law enforcement to break Full Disk Encryption. Since the key is available to TrustZone, OEMs could simply create and sign a TrustZone image which extracts the KeyMaster keys and flash it to the target device. This would allow law enforcement to easily brute-force the FDE password off the device using the leaked keys.

Patching TrustZone vulnerabilities does not necessarily protect you from this issue. Even on patched devices, if an attacker can obtain the encrypted disk image (e.g. by using forensic tools), they can then "downgrade" the device to a vulnerable version, extract the keys by exploiting TrustZone, and use them to brute-force the encryption. Since the keys are derived directly from the SHK, and the SHK cannot be modified, this renders all down-gradable devices directly vulnerable.

Android FDE is only as strong as the TrustZone kernel or KeyMaster. Finding a TrustZone kernel vulnerability or a vulnerability in the KeyMaster trustlet, directly leads to the disclosure of the KeyMaster keys, thus enabling off-device attacks on Android FDE.

During my communication with Qualcomm I voiced concerns about the usage of a software-accessible key derived from the SHK. I suggested using the SHK (or another hardware key) directly. As far as I know, the SHK cannot be extracted from software, and is only available to the cryptographic processors (similarly to Apple's UID). Therefore, using it would thwart any attempt at off-device brute force attacks (barring the use of specialized hardware to extract the key).

However, reality is not that simple. The SHK is used for many different purposes, and allowing the user to encrypt data directly with the SHK would compromise those use-cases. Not only that, but the KeyMaster application is widely used in the Android operating system, and modifying its behaviour could "break" applications which rely on it. Lastly, the current design of the KeyMaster application doesn't differentiate between requests made on behalf of Android FDE and requests for other use-cases, which makes it harder to introduce a fix scoped to the KeyMaster application alone.

Regardless, I believe this issue underscores the need for a solution that entangles the full disk encryption key with the device's hardware in a way which cannot be bypassed using software. Perhaps that means redesigning the FDE's KDF. Perhaps this can be addressed using additional hardware. I think this is something Google and OEMs should definitely get together and think about.

Extracting the KeyMaster Keys

Now that we've set our sights on the KeyMaster keys, we are still left with the challenge of extracting the keys directly from TrustZone.

Previously on the zero-to-TrustZone series of blog posts, we've discovered an exploit which allowed us to achieve code-execution within QSEE, namely, within the Widevine DRM application. However, is that enough?

Perhaps we could read the keys directly from the KeyMaster trustlet's memory from the context of the hijacked Widevine trustlet? Unfortunately, the answer is no. Any attempt to access a different QSEE application's memory causes an XPU violation, and subsequently crashes the violating trustlet (even when switching to a kernel context). What about calling the same KDF used by the KeyMaster module to generate the keys from the context of the Widevine trustlet? Unfortunately the answer is no once again. The KDF is only present in the KeyMaster application's code segment, and QSEE applications cannot modify their own code or allocate new executable pages.

Luckily, we've also previously discovered an additional privilege escalation from QSEE to the TrustZone kernel. Surely code execution within the TrustZone kernel would allow us to hijack any QSEE application! Then, once we control the KeyMaster application, we can simply use it to leak the HMAC and encryption keys and call it a day.

Recall that in the previous blog post we reverse-engineered the mechanism behind the invocation of system calls in the TrustZone kernel. Doing so revealed that most system-calls are invoked indirectly by using a set of globally-stored pointers, each of which points to a different table of supported system-calls. Each system-call table simply contains a bunch of consecutive 64-bit entries; a 32-bit value representing the syscall number, followed by a 32-bit pointer to the syscall handler function itself. Here is one such table:

Since these tables are used by all QSEE trustlets, they could serve as a highly convenient entry point in order to hijack the code execution within the KeyMaster application!

All we would need to do is to overwrite a system-call handler entry in the table, and point it to a function of our own. Then, once the KeyMaster application invokes the target system-call, it would execute our own handler instead of the original one! This also enables us not to worry about restoring execution after executing our code, which is a nice added bonus.

But there's a tiny snag - in order to direct the handler at a function of our own, we need some way to allocate a chunk of code which will be globally available in the "Secure World". This is because, as mentioned above, different QSEE applications cannot access each other's memory segments. This renders our previous method of overwriting the code segments of the Widevine application useless in this case. However, as we've seen in the past, the TrustZone kernel's code segments (which are accessible to all QSEE applications when executing in kernel context) are protected using a special hardware component called an XPU. Therefore, even when running within the TrustZone kernel and disabling access protection faults in the ARM MMU, we are still unable to modify them.

This is where some brute-force comes in handy... I've written a small snippet of code that quickly iterates over all of the TrustZone Kernel's code segments, and attempts to modify them. If there is any (mistakenly?) XPU-unprotected region, we will surely find it. Indeed, after iterating through the code segments, one rather large segment, ranging from addresses 0xFE806000 to 0xFE810000, appeared to be unprotected!

Since we don't want to disrupt the regular operation of the TrustZone kernel, it would be wise to find a small code-cave in that region, or a small chunk of code that would be harmless to overwrite. Searching around for a bit reveals a small bunch of logging strings in the segment - surely we can overwrite them without any adverse effects:

Now that we have a modifiable code cave in the TrustZone kernel, we can proceed to write a small stub that, when called, will exfiltrate the KeyMaster keys directly from the KeyMaster trustlet's memory!

Lastly, we need a simple way to cause the KeyMaster application to execute the hijacked system-call. Remember, we can easily send commands to the KeyMaster application which, in turn, will cause the KeyMaster application to call quite a few system-calls. Reviewing the KeyMaster's key-generation command reveals that one good candidate to hijack would be the "qsee_hmac" system-call:

KeyMaster's "Generate Key" Flow

Where qsee_hmac's signature is:

This is a good candidate for a few reasons:

The "data" argument that's passed in is a buffer that's shared with the non-secure world. This means whatever we write to it can easily be retrieved after returning from the "Secure World".

The qsee_hmac function is not called very often, so hijacking it for a couple of seconds would probably be harmless.

The function receives the address of the HMAC key as one of the arguments. This saves us the need to find the KeyMaster application's address dynamically and calculate the addresses of the keys in memory.

Finally, all our shellcode would have to do is to read the HMAC and encryption keys from the KeyMaster application's global buffer (at the locations we saw earlier on), and "leak" them into the shared buffer. After returning from the command request, we could then simply fish-out the leaked keys from the shared buffer. Here's a small snippet of THUMB assembly that does just that:

Shellcode which leaks KeyMaster Keys

Putting it all together

Finally, we have all the pieces of the puzzle. All we need to do in order to extract the KeyMaster keys is to:

1. Enable the DACR in the TrustZone kernel to allow us to modify the code cave.

2. Write a small shellcode stub in the code cave which reads the keys from the KeyMaster application.

3. Hijack the "qsee_hmac" system-call and point it at our shellcode stub.

4. Call the KeyMaster's key-generation command, causing it to trigger the poisoned system-call and exfiltrate the keys into the shared buffer.

5. Read the leaked keys from the shared buffer.

Here's a diagram detailing all of these steps:

The Code

Finally, as always, I've provided the full source code for the attack described above. The code builds upon the two previously disclosed issues in the zero-to-TrustZone series, and allows you to leak the KeyMaster keys directly from your device! After successfully executing the exploit, the KeyMaster keys should be printed to the console, like so:

Currently, the script simply enumerates each password in a given word-list and attempts to match the encryption result against the "scrypted intermediate key" stored in the crypto footer. That is, it passes each word in the word-list through the Android FDE KDF, scrypts the result, and compares it to the value stored in the crypto footer. Since the implementation is written entirely in Python, it is rather slow... However, those seeking speed could port it to a much faster platform, such as hashcat/oclHashcat.
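The core of that cracking loop can be sketched as follows (toy_kdf here is a hypothetical stand-in for the full FDE KDF, which additionally needs the leaked KeyMaster keys for the RSA signing step):

```python
import hashlib

def crack_fde(wordlist, salt, stored_scrypted_ik, kdf):
    """Sketch of the brute-force loop: run each candidate password through
    the KDF and compare against the scrypted intermediate key stored in
    the crypto footer."""
    for password in wordlist:
        if kdf(password.encode(), salt) == stored_scrypted_ik:
            return password
    return None

# Toy stand-in KDF so the loop can be demonstrated; the real kdf() needs
# the leaked KeyMaster keys in order to perform the RSA signing step.
def toy_kdf(pw, salt):
    return hashlib.scrypt(pw, salt=salt, n=1024, r=8, p=1, dklen=32)

salt = b"\x00" * 16
target = toy_kdf(b"secret", salt)
print(crack_fde(["hunter2", "password", "secret"], salt, target, toy_kdf))
```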

Here's what it looks like after running it on my own Nexus 6, encrypted using the password "secret":

Lastly, I've also written a script which can be used to decrypt already-generated KeyMaster key blobs. If you simply have a KeyMaster key blob that you'd like to decrypt using the leaked keys, you can do so by invoking the script km_keymaster.py, like so:

Final Thoughts

Full disk encryption is used world-wide, and can sometimes be instrumental to ensuring the privacy of people's most intimate pieces of information. As such, I believe the encryption scheme should be designed to be as "bullet-proof" as possible, against all types of adversaries. As we've seen, the current encryption scheme is far from bullet-proof, and can be hacked by an adversary or even broken by the OEMs themselves (if they are coerced to comply with law enforcement).

I hope that by shedding light on the subject, this research will motivate OEMs and Google to come together and think of a more robust solution for FDE. I realise that in the Android ecosystem this is harder to guarantee, due to the multitude of OEMs. However, I believe a concentrated effort on both sides can help the next generation of Android devices be truly "uncrackable".

Thank you very much! I'll create a feature request right away. Apart from the information in this blog post (regarding the structure of the KDF and how the key can be validated), is there any additional information I should add to the feature request? (Perhaps the crypto footer from my encrypted disk image and the corresponding keys, so that we can make sure the code works as expected).

* name of the algorithm/feature
* mention where the algorithm is currently being used (software, web app etc) or why the feature should be added
* source code or source code link
* mention any restrictions/limits or default lengths (maximum salt length, password length, …)
* several full example hash/plain pairs in case of a hashing algorithm

Especially the password lengths and salt lengths would be good to know, and whether there's any way for a user to modify existing configuration settings like iterations or cipher bit sizes.

However, I was primarily contacting you because I want to make sure the hash format is "standardized". The idea is to use your tool as it is, but when it comes to the slow scrypt part, instead of cracking, it exports the required data so that hashcat (or any other tool) can work with it. Please add a suggestion, too

so by using this code you can break most android devices with relative ease? well the encryption part and then brute force the pin/password.... the brute forcing would be insanely quick wouldn't it? :/ this basically means android devices aren't secure at all


Is CopperheadOS's encryption hardening a good mitigation for this vulnerability?

"Encryption and authentication

Full disk encryption is enabled by default on all supported devices, not just those shipping that way with the stock operating system.

Support for a separate encryption password

In vanilla Android, the encryption password is tied to the lockscreen password. That’s the default in CopperheadOS, but there’s full support for setting a separate encryption password. This allows for a convenient pattern, pin or password to be used for unlocking the screen while using a very strong encryption passphrase. If desired, the separate encryption password can be removed in favor of coupling it to the lockscreen password again.

When a separate encryption password is set, the lockscreen will force a reboot after 5 failed unlocking attempts to force the entry of the encryption passphrase. This makes it possible to use a convenient unlocking method without brute force being feasible. It offers similar benefits as wiping after a given number of failures or using a fingerprint scanner without the associated drawbacks.

Support for longer passwords

The maximum password length is raised from 16 characters to 32 characters."

This app appears to work similarly to the CopperheadOS in decoupling the encryption PW from the lock screen PIN or pattern. I'm interested to hear any opinions on this app as it claims to work with any device where the CopperheadOS only works with a handful.

The CopperHeadOS solution is a good one, given that you do, in fact, choose a long complex password. BTW, it uses a feature of Android that's actually supported behind the scenes (vdc cryptfs changepw). This is the same feature that's used by Nikolay's application.

So the attacker still needs to brute force the PIN/PW. Therefore the best (perhaps only?) defense against this is to utilize a long and complex alphanumeric PW that will take an exorbitant amount of time to brute force attack, correct?

Yes, if your password is strong enough, it could be infeasible to crack. Admittedly, mobile devices make this a little hard (who wants to type a long unique password on a mobile keyboard? Also, right now the lock screen password is the same password used for FDE. Imagine typing out a long password every time you unlock the device...)
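To make the feasibility question above concrete, here is a toy model of the off-device search, assuming the KeyMaster key has already been leaked. The scrypt parameters and the HMAC stand-in for the TrustZone RSA signing step are illustrative assumptions, not the exact cryptfs code:

```python
import hashlib
import hmac

LEAKED_KEYMASTER_KEY = b"leaked-keymaster-key"  # hypothetical leaked key material
SALT = b"\x00" * 16                              # in practice, read from the crypto footer

def candidate_kek(pin):
    # Model of the cryptfs KDF chain: scrypt -> TrustZone signing -> scrypt.
    # The parameters (N=2^15, r=3, p=1) are assumptions for illustration.
    ik1 = hashlib.scrypt(pin, salt=SALT, n=1 << 15, r=3, p=1, dklen=32)
    # HMAC stands in for the RSA operation normally done inside KeyMaster.
    ik2 = hmac.new(LEAKED_KEYMASTER_KEY, ik1, hashlib.sha256).digest()
    return hashlib.scrypt(ik2, salt=SALT, n=1 << 15, r=3, p=1, dklen=32)

def brute_force(target_kek, candidates):
    # With the key off the device, nothing rate-limits or throttles this loop.
    for pin in candidates:
        if candidate_kek(pin) == target_kek:
            return pin
    return None
```

Since each guess costs only two scrypt calls and one signing operation on commodity hardware, a 4-digit PIN space (10,000 candidates) falls quickly, and the search parallelizes trivially; only a long, high-entropy password pushes the cost out of reach.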

Multifactor Security... Something you know, something you have, something you are, someplace you are, etc. will always be more secure than just a password. PARTICULARLY if you get a trillion chances to guess.

If I read this correctly, any specially crafted exploit code would need to either be signed by Qualcomm or an OEM, or would need to take advantage of an unpatched vulnerability on the device. Is that correct? If so, how is this different from iOS (beyond there being more players involved)?

In Apple's case you can't extract the hardware key no matter what software attack you carry out (w/ the OEMs or without). At most, you can bypass the software protections, but you'll still have to perform the bruteforce attempt strictly on the device.

1) How do you run the code for the Qualcomm exploit on a locked device?
2) With iOS being closed source, how do you know what goes on in the black-box secure element? Why is it impossible to get the hardware key? You were only able to figure out most of this because Android is open source.

If I set a long alphanumeric password and use my fingerprint to unlock the device, would this still be vulnerable? If I set my fingerprint to unlock the device, I wouldn't have to type my password every time.

Very fascinating post. There is one thing which I don't quite understand: "Patching TrustZone vulnerabilities does not necessarily protect you from this issue."

Suppose (1) TrustZone is patched, (2) the KeyMaster keys are derived from new and different hard-coded strings, and (3) the user's device is re-encrypted using the new KeyMaster keys. Wouldn't that fix the downgrade attack? P.S. I noticed that you used the phrase "not necessarily", so I want to confirm with you that this problem can be solved by proper patches.

Regarding your proposed patch - that still won't work. Remember that although I only extracted the generated keys in the exploit, you could just as easily modify it to use any pair of hard-coded strings as parameters for the KDF and to leak that result back to the Non-Secure World (since we have arbitrary code-exec in KeyMaster).

That said, a good patch could fix the issue by using a different scheme to bind the intermediate key (for example, using the hardware AES engine to encrypt the IK1 instead of KeyMaster - this has been proposed by people on droidsec). The problem is that any such patch would require a code change in Android (and some logic to handle transitioning from the old KDF to the new one), and that would break the cross-OEM compatibility of cryptfs.
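The difference between the two binding schemes can be sketched like this (purely conceptual Python: HMAC stands in for both the KeyMaster RSA operation and the hardware AES engine, and all key material here is made up):

```python
import hashlib
import hmac

def software_bind(ik1, derived_keymaster_key):
    # Current scheme: the binding key is *derived* by KeyMaster software
    # (KDF(SHK, hardcoded strings)), so code execution inside KeyMaster
    # can recompute it and hand it to the Non-Secure World.
    return hmac.new(derived_keymaster_key, ik1, hashlib.sha256).digest()

class HardwareAesEngine:
    """Stand-in for an opaque crypto engine keyed directly off the SHK."""
    def __init__(self):
        self.__shk = b"\x13" * 32  # in real hardware, never software-readable

    def encrypt(self, data):
        # HMAC is used here only to model a keyed, deterministic transform.
        return hmac.new(self.__shk, data, hashlib.sha256).digest()

def hardware_bind(ik1, engine):
    # Proposed scheme: software only ever sees the engine's *output*; there
    # is no derived key to leak, so brute forcing stays bound to the device.
    return engine.encrypt(ik1)
```

The design point is that in the second scheme the secret never exists as software-readable state, even under full TrustZone compromise; an attacker can still ask the engine to bind candidate keys, but only on the device itself.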

Thanks for your reply. I see the problem now. My understanding is that the KeyMaster keys are, in effect, already leaked and cannot be changed by patching. So the only approach is to protect the application key (e.g. the FDE key) in some other way.

I am wondering if your attack framework can be easily adapted to attack other sensitive data that are (supposedly) being protected by the hardware backed keystore, such as the application created keys in the Android keystore. Do you know the answer?

Another interesting asset would be the fingerprint template, which I guess is more likely to be protected by the SHK directly (hopefully).

No problem. First of all, I've written a script which can be used to decrypt any keystore key (incl. application created keys). You can find it here: https://github.com/laginimaineb/android_fde_bruteforce/blob/master/keymaster_mod.py

As for the fingerprint template - I believe it should be easily readable using a TrustZone exploit, since the fingerprint application is a trustlet which reads the fingerprint and matches it up to the stored template. I would do so myself but unfortunately I don't own a device with a fingerprint sensor.

1. If you can sign an image (you are an OEM/have the OEM's private key) - you don't need to bypass the lock screen.
2. If you can't sign an image, but can acquire the disk image (using forensic tools/a flash reader/special fastboot oem commands/recovery/etc.), then you can acquire the image, factory reset the device, and now run the full exploit chain to extract the keys and attack the encryption off the device.

Would the key then be the same as last time, since it was derived from the SHK instead of it directly? Or is it always constant? I wish I could test it out, but I'm not a programmer, and while I did compile your code I can't figure out how to execute or package it. I'm assuming the same binary you used to test your Nexus 6 would work on my Nexus 7 2013 (I no longer have my N5 to test with)? Could you send it to my email joe&jstech.us? & for at symbol?

I suppose it works only on the Nexus 6 platform. The so-called 5X commands of the Widevine trustlet are exploited to read/write arbitrary addresses of the Secure World. The Nexus 5/7/5X/6P never had such 5X commands. Hence you are not able to run that exploit on your Nexus 7.

This exploit was written on my personal Nexus 6. As for your second point - are you referring to the rollback prevention? If so, remember that all Nexus devices don't have it (to enable flashing any factory image), and many OEMs don't either. It's up to OEMs whether to enable it or not.

Sorry, I can't see how this blog post relates to unlocking bootloaders.

Depending on how Samsung's bootloader is locked, perhaps you could use the TrustZone vulnerabilities that I've disclosed in order to bypass secure boot, but I haven't researched this and so can't provide a definite answer.

I wonder about your philosophy (or conception) behind the act of sharing this code (scripts, mainly) with everyone. Raising awareness of this issue is important. But this is an unfixed issue, which is definitely explosive and might be used by "script kiddies". And it requires no small effort to make it work (based on the blog alone).

1. I believe in this case publishing the research could help push for a stronger FDE scheme. Currently, the advertised feature is not working as people would expect it to. Being a security-focused feature, this could have serious consequences for someone relying on FDE for their own personal safety. At the very least, people have the right to be informed. Had there been a fix on the way, I might have postponed publication. However, as you can see, the issue remains, and is present in all current devices. To the best of my knowledge this situation doesn't seem like it's going to change anytime soon.

2. Research needs to be (fully) verifiable. Just as any academic research paper must publish the complete method along with the results - security research shouldn't be any different. It's hard to do research in a vacuum, and publishing results along with methods should help alleviate that.

I have also done the following:
1. Downloaded the USB drivers (within Android Studio and zip file)
2. Started in bootloader mode (power button and volume down)

So here are my main questions:
1. If you don't have USB debugging turned on ahead of time and you're trying to communicate with the phone via USB, how do you make that happen? Am I assuming too much by saying communicating via USB?
2. What tools are used to run the Python scripts on the Android phone?
3. What makes that handshake happen without previously setting something up on the phone?

I'm pretty lost and I swear I've given it my best, but the knowledge gap is huge, just trying to learn. Throw a dog a bone! Thanks! Please help. :)

Thank you for the kind words. You can find the start address of the secapp region by inspecting the dmesg output - the region is a part of the kernel dtb, and its address is printed when the device boots.

Hi laginimaineb, there is one thing which I don't quite understand: how do you know that in Apple's case you can't extract the hardware key no matter what software attack you carry out (with the OEMs or without)? I don't think the use of the SHK is the key point. If the encryption scheme (KeyMaster) is implemented in software that can possibly be hacked, you can still get the SHK. There is no evidence to prove that the encryption scheme (KeyMaster) in iOS runs in isolated hardware, different from the implementation of the TEE on ARM.

Regarding the UID key - Apple claims in the iOS security guide that it is implemented in HW and cannot be accessed via SW or FW in any way. IIRC, Comex has also verified this by inspecting the AES engine.

As for the SHK - that key is implemented as part of the crypto engine. As the engine's inner workings are completely opaque, it's hard to say whether or not it can be extracted by a SW attack. To the best of my knowledge, this should not be possible.

Just one simple question: is the key inside the crypto engine the same for all devices? If yes, and taking into account that the two strings are hard-coded, would this mean that all devices generate the same master key?

Thanks again for the reply. I figured out /dev/block/platform/msm_sdcc.1/by-name/metadata is the crypto footer partition for Nexus 6.

I have another question, could you please help me with this?

I flashed the vulnerable firmware image, and leaked the encryption key and HMAC key using your exploit. Now I am testing whether it is possible to brute-force the PIN of a patched Android version, say Android 6.

1. So I flashed Android 6 to my device
2. Set a new password
3. Extracted the crypto footer from it

Then I was trying to bruteforce the password of Android version 6 using your script, but it failed.

1. First it threw an "invalid HMAC mismatch!" error -> The reason for this could be that this version of the firmware is using a different HMAC key, correct?

I anyway ignored the HMAC validation part.

2. Then it threw an "invalid encryption key" error. -> Does this mean this version of the firmware is using a different encryption key? From your blog I understood that the hardware keys cannot be changed; however, they might have changed the salt used to create the master key from the hardware key in this version, making it impossible to brute-force the PIN for a patched version? This would be a valid patch from Google, correct?

Now we need to find a new TrustZone kernel vulnerability to leak the encryption key for the latest firmware version. Reversing the KeyMaster trustlet and figuring out the new encryption key used in a new version is not possible, right?

Yes, you're right that it's the "metadata" partition, I forgot about that.

I think the reason you're seeing errors is *not* that the key is somehow changed (remember, the SHK is *constant*), but rather that the format of the KeyMaster keyblob may be different in the newer versions. Remember that what I demonstrated was for KeyMaster version 1, which was what was used at the time. I believe the format used in newer versions is slightly different.

Here's what I suggest: take the key-blob and output the second DWORD (version_num), and post that value here. Then we could see whether it's just a matter of adding support in the script itself, or whether the KDF in the KeyMaster TA has been changed in newer versions.
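For reference, extracting that DWORD takes only a couple of lines. This hypothetical helper assumes the KeyMaster v1 layout described in the post, i.e. a little-endian 32-bit version field at offset 4 (magic first, version second); newer blob formats may differ:

```python
import struct

def keyblob_version(blob):
    # Assumed layout: DWORD magic at offset 0, DWORD version at offset 4,
    # both little-endian. Returns the version field as an integer.
    (version,) = struct.unpack_from("<I", blob, 4)
    return version
```

Running this over a dumped keyblob (e.g. `keyblob_version(open("keyblob.bin", "rb").read())`) gives the value to compare across firmware versions.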

It says, key blob version is '0'. Here is the complete output. https://gist.github.com/anonymous/561e6c94bc565f56c48b5b57cc0e684e

Also, I see keyblob version as '0' (same) in the vulnerable version (Android 5.1). But private key exponent size is huge for Android version 6. Gist for vulnerable version of Android is here https://gist.github.com/anonymous/b7d11bb76b95a5f03e482ad55dadd039.

Yes, I understand the SHK is constant, but here we are using the KEK. So the KEK might have changed for the newer firmware version, right? Maybe they just changed the two hardcoded strings used to generate the KEK from the SHK.

Yes, these are all valid options. My initial thought was that the keyblob format must have changed. Anyway, I looked at the trustlet in the newer version, it seems to use the same hardcoded strings, so unless something around the key derivation changed, this is rather strange.

I tried to download the metadata partition in the link you sent, but it already expired. Could you send it over again? I'll take a look.

This is an incredibly informative article and I admire the effort you've put in to find this exploit. I'm trying to execute it on a Nexus 6P running Marshmallow (specifically 6.0 security patch level Nov 1, 2015). I've compiled your code and have it running on a rooted device. However, it seems to be failing to find the widevine application in the secure memory region. I've modified the start address of the secure memory region per my dmesg output:

It looks like the scan just can't find where it's located in the secure memory region? Am I missing other values I should modify? Is the Nexus 6P being 64bit perhaps causing issues? Would really appreciate some help on this.

Have you checked whether the Widevine trustlet on the Nexus 6P is vulnerable to the PRDiag vulnerability? If I recall correctly, the affected code was removed from the trustlet that shipped on the Nexus 6P.

Hello, when I encrypt my phone using a patched Android version, is the encryption still vulnerable to a downgrade attack by flashing a vulnerable Android version? Then all these Qualcomm CPUs would be vulnerable forever...

On another subject, I've been wondering, without success, where and how Samsung stores the master key for file-encrypting the external SD card on the Galaxy S4 (i9505), Android 4.4.2. Do you have any idea about it? (keystore? TIMA?)
