Facebook: iOS-based credential theft only works on lost or jailbroken devices (Updated)

Facebook has responded to a report claiming that an attacker could exploit the …

Are the Facebook apps for iOS and Android susceptible to exploits that allow attackers to steal credentials used to log in to the social networking site? Facebook has largely discounted a recent report claiming as much, stating that such an exploit only works when users have modified their operating systems or granted an attacker physical access to their devices.

The attack was first proposed by app developer Gareth Wright, who discovered that a Facebook configuration file kept on iPhones and other Apple devices stored the cryptographic tokens that apps use to authenticate themselves. He later told The Register that Android devices probably suffered from a similar weakness that also stemmed from the failure to cryptographically secure the token.

On Thursday, however, Facebook said the exploit can't be carried out unless users jailbreak or mod their devices or an attacker physically connects to the phone.

"Facebook's iOS and Android applications are only intended for use with the manufacturer provided operating system, and access tokens are only vulnerable if they have modified their mobile OS (i.e. jailbroken iOS or modded Android) or have granted a malicious actor access to the physical device," a Facebook spokesperson told Ars.

"We develop and test our application on an unmodified version of mobile operating systems and rely on the native protections as a foundation for development, deployment and security, all of which is compromised on a jailbroken device."

iOS uses a protective sandbox to prevent applications from accessing .plist files and sensitive data used by other apps. Google's Android uses a file-permissions system that restricts each app to its own file directory, according to Accuvant principal research consultant Charlie Miller. As long as the mobile OSes haven't been modified, those protections should remain intact and prevent attackers from accessing the tokens on the device.

Wright confirmed to Ars that his attack would work only if someone uses a jailbroken version of iOS or an adversary is able to plug the targeted device into hardware that can siphon the Facebook .plist (property list file) readily available in the iDevice's storage.

"An iOS device only has to be plugged into a PC or Mac for a couple of seconds to have its plists copied," Wright wrote in an e-mail. "Jailbroken or not, they're equally vulnerable."

He devised a proof of concept attack that used a background app running on a shared PC that captured the login credentials of any iOS device that connected to it. The scenario, he said, is "something that happens a lot at universities and workplaces as users charge their devices." He envisioned a determined attacker using a speaker dock or other piece of hardware that could use modified firmware to do much the same thing.

Update: On Friday, almost 24 hours after Ars published this report, Wright amended his post to say that the attack won't work on non-jailbroken Apple devices that are passcode-protected.

"They are safe when they connect to a PC or Mac that the user has never synced to before as long as a user doesn't unlock the device while still connected," he further explained in an email. "An unlocked device connected over USB is still affected."

34 Reader Comments

Of course, if the token was "cryptographically secured", the FB app would need to have the private key, which would be easy to capture with disassembly and debugging tools, and because the key would be the same for everyone (or algorithmically generated on a per-device basis by the same algorithm), the "exploit" would still be there.

If someone has root and hardware control of a device, good luck keeping anything on it secure.

Of course, if the token was "cryptographically secured", the FB app would need to have the private key

If their servers have the private key and the client has the public key / cert then the token could be generated by the server, validated by the client and later handed back to the server which could validate, etc. that token was one they originally generated. No private key is needed on the client.

The problem is if the token is stored and doesn't have a lifetime enforced for it then it could be copied out and used.
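The server-signed, lifetime-limited token the comment describes can be sketched with an HMAC over a payload that carries its own expiry. This is a minimal illustration in Python, not Facebook's actual scheme; all names and the key are invented:

```python
import base64
import hashlib
import hmac
import json
import time

SERVER_KEY = b"server-side secret; never shipped to the client"  # hypothetical

def issue_token(user_id, lifetime_s=3600):
    """Server mints a token that carries its own expiry time."""
    claims = {"uid": user_id, "exp": int(time.time()) + lifetime_s}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SERVER_KEY, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def validate_token(token):
    """Server rejects tokens that are forged, tampered with, or expired."""
    payload, _, sig = token.encode().partition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SERVER_KEY, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(base64.urlsafe_b64decode(payload))["exp"] > time.time()
```

Note that a copied token still works until it expires, which is the commenter's point: enforcing a lifetime bounds the damage, it doesn't eliminate it.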

Of course, if the token was "cryptographically secured", the FB app would need to have the private key

If their servers have the private key and the client has the public key / cert then the token could be generated by the server, validated by the client and later handed back to the server which could validate, etc. that token was one they originally generated. No private key is needed on the client.

I'm not following you. That is exactly how the system currently works. However, the client stores the token locally so users don't have to log in every time they use the app.

I do not believe there is a "secure" (by this researcher's standards) way to enable auto-login by storing *anything*, token or not, encrypted or not, on the client. If it's data, it can be copied if you have root access. If it's encrypted data, it has to be decrypted by the client app.

The problem is if the token is stored and doesn't have a lifetime enforced for it then it could be copied out and used.

Which is exactly the situation we have with the session cookies used by everyone accessing Facebook via a web-browser.

I don't see the big deal with this article. It's essentially saying that if your host OS is compromised, and/or a malicious entity has physical access to it, they can steal your Facebook credentials. The exact same situation is true with web browsers, and likely every platform used to connect to Facebook.

How do these people keep getting press? Every "hack" that starts with either "you must have already rooted your system" or "let's say they have physical access to the device" isn't anywhere near the same as a remote hack, which is what you should really worry about.

You might want to mention that Android 4.0 has full phone encryption, and that without the passcode an attacker with physical access would be thwarted. Doesn't help applications with root permissions, of course.

You might want to mention that Android 4.0 has full phone encryption, and that without the passcode an attacker with physical access would be thwarted. Doesn't help applications with root permissions, of course.

You might want to then mention that many carriers and OEMs still don't give their customers (nor will they ever) the ability to upgrade to ICS.

Also, if you have physical access to that device, the encryption is mostly a gimmick. The key is way too short and can be trivially brute forced offline.

Also, if you have physical access to that device, the encryption is mostly a gimmick. The key is way too short and can be trivially brute forced offline.

There's no limit on key length (other than the obvious one balancing usability and security), but assuming a slow enough key derivation function, I don't see why it would be trivial. Even an 8-digit PIN should take several tens of days to crack.

(I'm not a crypto expert, but I know that slowing down the process works.)
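The "slow enough key derivation function" idea is typically an iterated KDF such as PBKDF2. A rough sketch of the cost arithmetic, assuming the attacker must pay the full derivation cost for every guess:

```python
import hashlib

def derive_key(pin, salt, iterations=100_000):
    """Each guess costs `iterations` iterated HMAC-SHA256 invocations."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)

# Worst case for an 8-digit PIN: the attacker tries every value once.
keyspace = 10 ** 8

# If the iteration count is tuned so one derivation takes ~1 second on the
# device, exhausting the space serially costs `keyspace` seconds.
worst_case_days = keyspace / (60 * 60 * 24)
print(f"~{worst_case_days:.0f} days to try every 8-digit PIN at 1 guess/second")
```

A faster cracking rig cuts that figure linearly, which is why both the iteration count and the PIN length matter.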

You might want to mention that Android 4.0 has full phone encryption, and that without the passcode an attacker with physical access would be thwarted. Doesn't help applications with root permissions, of course.

I'm sure Ars can contact all 7 people using Android 4.0 to let them know .

Also, if you have physical access to that device, the encryption is mostly a gimmick. The key is way too short and can be trivially brute forced offline.

There's no limit on key length (other than the obvious one balancing usability and security), but assuming a slow enough key derivation function, I don't see why it would be trivial. Even an 8-digit PIN should take several tens of days to crack.

(I'm not a crypto expert, but I know that slowing down the process works.)

I'm not either, but now I'm curious. Android uses dm-crypt, which uses LUKS[1]. The default with LUKS is to tune the key derivation to take 1 second on the system that sets the passphrase (the phone itself).[2] I doubt Android phones will use the GPU for this (if the GPU would even help), so it shouldn't be too hard to build a specialized brute-forcing rig with a desktop GPU or three.

Also, if you have physical access to that device, the encryption is mostly a gimmick. The key is way too short and can be trivially brute forced offline.

There's no limit on key length (other than the obvious one balancing usability and security), but assuming a slow enough key derivation function, I don't see why it would be trivial. Even an 8-digit PIN should take several tens of days to crack.

(I'm not a crypto expert, but I know that slowing down the process works.)

I'm not either, but now I'm curious. Android uses dm-crypt, which uses LUKS[1]. The default with LUKS is to tune the key derivation to take 1 second on the system that sets the passphrase (the phone itself).[2] I doubt Android phones will use the GPU for this (if the GPU would even help), so it shouldn't be too hard to build a specialized brute-forcing rig with a desktop GPU or three.

Also, if you have physical access to that device, the encryption is mostly a gimmick. The key is way too short and can be trivially brute forced offline.

Wait, so you're "not a security expert", but are claiming that AES-128 is unsafe and can be trivially brute forced offline? I mean sure AES is broken and requires only ~2^126 operations to break, but I'd love to see your setup to trivially brute force that.

Well, you can try dictionary attacks and so on against the password itself (that's not the "key", though), but if that's your claim, why not change it to "Every system ever invented is unsafe"? That holds depending only on the password used.
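The disagreement here is between the full AES key space and the much smaller set of keys reachable from a short PIN or password. Some back-of-the-envelope arithmetic makes the gap concrete:

```python
# Every 128-bit value is a valid AES-128 key.
aes128_keyspace = 2 ** 128

# Keys actually reachable when the key is derived from a 4-digit PIN...
pin_keyspace = 10 ** 4

# ...or from an 8-character lowercase-plus-digit password.
password_keyspace = 36 ** 8

# The derived keys form a vanishingly small subset of the full key space,
# so a practical attacker brute-forces the password, never the raw key.
print(f"4-digit PIN: {pin_keyspace:,} candidates")
print(f"8-char password: {password_keyspace:,} candidates")
print(f"full AES-128: {aes128_keyspace:.3e} candidates")
```

In other words, both posters are right about different things: the cipher's key space is effectively unsearchable, while the password space in front of it may not be.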

Also, if you have physical access to that device, the encryption is mostly a gimmick. The key is way too short and can be trivially brute forced offline.

Wait, so you're "not a security expert", but are claiming that AES-128 is unsafe and can be trivially brute forced offline? I mean sure AES is broken and requires only ~2^126 operations to break, but I'd love to see your setup to trivially brute force that.

No, the set of keys neoscsi's talking about (anything derived from a relatively short PIN or password) is not the entire AES key space. It's a valid concern.

No, the set of keys neoscsi's talking about (anything derived from a relatively short PIN or password) is not the entire AES key space. It's a valid concern.

Then he's not talking about brute forcing the "key", but the password. And since there's no limit on the password as far as I can see, that applies to absolutely every cryptographic system ever invented.

If you have a passcode set on the iPhone, those Facebook attack scenarios via a shared computer, dock, or lost phone won't work (the plist storage will be encrypted).

Maybe something worth adding to the article.

People who care at all about the security of their data really should have a passcode set.

gkdot, I checked with Gareth Wright, the app developer who developed the proof of concept and he says protecting an iDevice with a passcode does not prevent the attack.

"I have passcode on all my devices, plists aren't encrypted," he says.

Then I regrettably have to say he's not familiar with how passcode encryption works on iOS.

If you connect a passcode-locked iOS device to a new computer or dock, it will not allow access to application files, including the plists. For this to be allowed, you would need to connect the iOS device at least once in an unlocked state.

If you try this you'll see a message in iTunes asking you to unlock the device before continuing. Tools like Phone Disk, iExplorer or even using Linux's libimobiledevice will not be able to see the device either.

Once you do unlock the iPhone/iPad and connect to the computer the keys will be exchanged and stored for future use. From then on the computer may access those files using the stored key regardless of lock state. This is probably what Mr Wright is seeing.

However without that crucial key exchange from an unlocked device the scenario will not work. So, connecting a passcode locked iOS device to a shared computer/rogue dock just to charge would not give an attacker access to the files.

No, the set of keys neoscsi's talking about (anything derived from a relatively short PIN or password) is not the entire AES key space. It's a valid concern.

Then he's not talking about brute forcing the "key", but the password. And since there's no limit on the password as far as I can see, that applies to absolutely every cryptographic system ever invented.

Only those where the private key is protected by a password and an adversary has offline access to perform a brute-force attack. Android gestures have a maximum length, and generally if people use PINs they'll be four digits.

No, the set of keys neoscsi's talking about (anything derived from a relatively short PIN or password) is not the entire AES key space. It's a valid concern.

Then he's not talking about brute forcing the "key", but the password. And since there's no limit on the password as far as I can see, that applies to absolutely every cryptographic system ever invented.

No. For example, it doesn't apply to TLS because the key space for symmetric encryption is truly the entire possible space of 128/192/256 bits.

neoscsi: you can't use a gesture to encrypt an Android phone. (My Nexus S has been running ICS for several months.) You need to use a PIN or password. You're correct, though, that many people will set a 4-digit PIN.

No, the set of keys neoscsi's talking about (anything derived from a relatively short PIN or password) is not the entire AES key space. It's a valid concern.

Then he's not talking about brute forcing the "key", but the password. And since there's no limit on the password as far as I can see, that applies to absolutely every cryptographic system ever invented.

No. For example, it doesn't apply to TLS because the key space for symmetric encryption is truly the entire possible space of 128/192/256 bits.

neoscsi: you can't use a gesture to encrypt an Android phone. (My Nexus S has been running ICS for several months.) You need to use a PIN or password. You're correct, though, that many people will set a 4-digit PIN.

Interesting. I assumed they would convert a gesture to numeric equivalents.

If you have a passcode set on the iPhone, those Facebook attack scenarios via a shared computer, dock, or lost phone won't work (the plist storage will be encrypted).

Maybe something worth adding to the article.

People who care at all about the security of their data really should have a passcode set.

gkdot, I checked with Gareth Wright, the app developer who developed the proof of concept and he says protecting an iDevice with a passcode does not prevent the attack.

"I have passcode on all my devices, plists aren't encrypted," he says.

Then I regrettably have to say he's not familiar with how passcode encryption works on iOS.

If you connect a passcode-locked iOS device to a new computer or dock, it will not allow access to application files, including the plists. For this to be allowed, you would need to connect the iOS device at least once in an unlocked state.

If you try this you'll see a message in iTunes asking you to unlock the device before continuing. Tools like Phone Disk, iExplorer or even using Linux's libimobiledevice will not be able to see the device either.

Once you do unlock the iPhone/iPad and connect to the computer the keys will be exchanged and stored for future use. From then on the computer may access those files using the stored key regardless of lock state. This is probably what Mr Wright is seeing.

However without that crucial key exchange from an unlocked device the scenario will not work. So, connecting a passcode locked iOS device to a shared computer/rogue dock just to charge would not give an attacker access to the files.

Hope this description makes things a bit more clear.

I don't fully understand all the encryption and such, but this is correct. I recently got a new Mac Mini that I use to sync iTunes with, and my daughter's iPod would not sync until she unlocked it the first time, then synced it. There may be some other trick, but me taking your iPod to my Mini, or any other Mac besides yours, will not unlock it.

Only those where the private key is protected by a password and an adversary has offline access to perform a brute-force attack. Android gestures have a maximum length, and generally if people use PINs they'll be four digits.

As sid0 said, you can't use gestures (also, I've no idea what the entropy would be there anyhow), but why would anyone use a PIN on a modern smartphone? A password greatly increases the entropy and is just as convenient to use. I mean, who in their right mind would think that a 4-digit PIN was any kind of security guarantee except against the most basic attacks?

sid0 wrote:

No. For example, it doesn't apply to TLS because the key space for symmetric encryption is truly the entire possible space of 128/192/256 bits.

The point is, that if the way you generate the key is weak, the whole encryption suffers - no way around that. If I used a password to generate the TLS key, we'd have the same problem (truecrypt encrypts the partition with a symmetric key, but if you get the key that is used to encrypt the masterkey you've won - hence the weakpoint is again the password)
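The layering described above (a random master key that encrypts the data, itself wrapped by a password-derived key, as in TrueCrypt or LUKS) can be sketched schematically. A real system would wrap the master key with AES rather than the XOR used here, and all names are illustrative:

```python
import hashlib
import os

KDF_ITERATIONS = 100_000  # tuned so each password guess is slow

def wrap_master_key(password, salt):
    """Create a random 32-byte master key and wrap it under the password.

    XOR with a full-length PBKDF2 output stands in for the real cipher."""
    master_key = os.urandom(32)  # this key actually encrypts the data
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, KDF_ITERATIONS)
    wrapped = bytes(a ^ b for a, b in zip(master_key, kek))
    return wrapped, master_key

def unwrap_master_key(password, salt, wrapped):
    """Recover the master key; a wrong password yields garbage, not an error."""
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, KDF_ITERATIONS)
    return bytes(a ^ b for a, b in zip(wrapped, kek))
```

Whoever guesses the password recovers the full-entropy master key, which is exactly the commenter's point: the password remains the weak link no matter how strong the master key is.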

No, the set of keys neoscsi's talking about (anything derived from a relatively short PIN or password) is not the entire AES key space. It's a valid concern.

Then he's not talking about brute forcing the "key", but the password. And since there's no limit on the password as far as I can see, that applies to absolutely every cryptographic system ever invented.

Only those where the private key is protected by a password and an adversary has offline access to perform a brute-force attack.

Anyone who cares about security will have set the setting in iOS that wipes the device after 10 incorrect PIN entries.

"An iOS device only has to be plugged into a PC or Mac for a couple of seconds to have its plists copied," Wright wrote in an e-mail. "Jailbroken or not, they're equally vulnerable."

If someone were to steal a lockbox safe from under my bed, the safe becomes vulnerable when the thief has physical access to it, but I'm not complaining to the safe company to make their lock better so my money remains inaccessible. Physical access to the device always increases risk of information vulnerability.

""They are safe when they connect to a PC or Mac that the user has never synced to before as long as a user doesn't unlock the device while still connected," he further explained in an email. "An unlocked device connected over USB is still affected.""

Am I the only one who reads this and goes "huh?!?"

Unless you are emergency charging, why would you connect it if you do not plan to unlock it?

Only those where the private key is protected by a password and an adversary has offline access to perform a brute-force attack. Android gestures have a maximum length, and generally if people use PINs they'll be four digits.

As sid0 said, you can't use gestures (also, I've no idea what the entropy would be there anyhow), but why would anyone use a PIN on a modern smartphone? A password greatly increases the entropy and is just as convenient to use. I mean, who in their right mind would think that a 4-digit PIN was any kind of security guarantee except against the most basic attacks?

Because I never see people, even my most paranoid coworkers, running around typing in even remotely complicated passwords to unlock their smartphones. For most of the world, the inconvenience does not outweigh the added benefit. Most people also assume, though, that if their phone is lost, everything on it could be lost too.

No, the set of keys neoscsi's talking about (anything derived from a relatively short PIN or password) is not the entire AES key space. It's a valid concern.

Then he's not talking about brute forcing the "key", but the password. And since there's no limit on the password as far as I can see, that applies to absolutely every cryptographic system ever invented.

Only those where the private key is protected by a password and an adversary has offline access to perform a brute-force attack.

Anyone who cares about security will have set the setting in iOS that wipes the device after 10 incorrect PIN entries.

Agreed, but we were talking about an offline attack against encrypted Android storage. In such a case (the same would apply to iOS if it supported it), the phone's self-wipe mechanism would be bypassed.

""They are safe when they connect to a PC or Mac that the user has never synced to before as long as a user doesn't unlock the device while still connected," he further explained in an email. "An unlocked device connected over USB is still affected.""

Am I the only one who reads this and goes "huh?!?"

Unless you are emergency charging, why would you connect it if you do not plan to unlock it?

While it's poorly worded, the idea conveyed is that someone stealing your phone and hooking it to their computer cannot compromise your credentials if you have a passcode set. If they steal your phone and connect it to their computer, and you do not have a passcode set, your credentials can be compromised.