Android applications downloaded by as many as 185 million users can expose online banking and social networking credentials, as well as the contents of e-mail and instant messages, because the programs use inadequate encryption protections, computer scientists have found.

The researchers identified 41 applications in Google's Play Market that leaked sensitive data as it traveled between handsets running the Ice Cream Sandwich version of Android and webservers for banks and other online services. By connecting the devices to a local area network that used a variety of well-known exploits, some of them available online, the scientists were able to defeat the secure sockets layer (SSL) and transport layer security (TLS) protocols implemented by the apps. Their research paper didn't identify the programs, except to say they have been downloaded between 39.5 million and 185 million times, based on Google statistics.

"We could gather bank account information, payment credentials for PayPal, American Express and others," the researchers, from Germany's Leibniz University of Hannover and Philipps University of Marburg, wrote. "Furthermore, Facebook, email and cloud storage credentials and messages were leaked, access to IP cameras was gained and control channels for apps and remote servers could be subverted." Other exposed data included the contents of e-mails and instant messages.

A Google spokesman declined to comment. There was no evidence any of the vulnerable apps were developed by Google employees, although the researchers said there are steps Google engineers could take to help ensure Android apps implement encryption more securely.

"All things said, it's generally good research that should make developers more aware of these basic security deficiencies that shouldn't have made it through any respectable QA process," Jon Oberheide, CTO of mobile firm Duo Security, told Ars. "Needless to say, security isn't top of mind of most mobile developers."

The scientists began their research by downloading 13,500 free apps from Google Play and subjecting them to a "static analysis." Those tests checked whether the apps' SSL implementations were potentially vulnerable to "man-in-the-middle" (MITM) exploits, in which attackers monitor or tamper with communications flowing over public Wi-Fi hotspots or other unsecured networks. The results identified 1,074 apps, or eight percent of the sample, that contained "SSL specific code that either accepts all certificates or all hostnames for a certificate and thus are potentially vulnerable to MITM attacks."
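The anti-pattern that analysis flags is compact enough to show in full. What follows is a minimal sketch of the kind of code the paper describes, written here for illustration rather than taken from any audited app; the method name is invented:

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

// Anti-pattern: trust every certificate chain and every hostname.
static void disableCertificateChecks() throws Exception {
    TrustManager[] trustAll = { new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] chain, String authType) {}
        public void checkServerTrusted(X509Certificate[] chain, String authType) {} // no validation at all
        public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    }};
    SSLContext sc = SSLContext.getInstance("TLS");
    sc.init(null, trustAll, new SecureRandom());
    HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
    // Companion anti-pattern: accept any hostname for any certificate.
    HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
        public boolean verify(String hostname, SSLSession session) { return true; }
    });
}

An empty checkServerTrusted() and an always-true verify() are exactly the "accepts all certificates or all hostnames" conditions the static analysis searches for.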

From the list of 1,074 potentially vulnerable apps, the researchers picked 100 to subject to a manual audit, connecting each to a network that used an SSL proxy to test whether the app's SSL implementation could be defeated. In some cases, the apps accepted SSL certificates that were signed by the researchers rather than by a valid certificate authority. In others, the accepted certificates authorized a domain name other than the one the app was accessing. In still other cases, the apps were defeated by attacks including SSLstrip, which researcher Moxie Marlinspike demonstrated in 2009. Some apps also accepted certificates signed by authorities that are no longer valid. (It appears the Android operating system gives end users a means to manually disable various CAs.)

Examples of the vulnerabilities included:

An anti-virus app that accepted invalid certificates when validating the connection supplying new malware signatures. By exploiting that trust, the researchers were able to feed the app their own malicious signature.

An app with an install base of 1 million to 5 million users, billed as a "simple and secure" way to upload and download cloud-based data, that exposed login credentials. The leakage was the result of a "broken SSL channel."

A client app for a popular Web 2.0 site with up to 1 million users, which appears to be offered by a third-party developer. It leaked Facebook and Google credentials when logging in to those sites.

A "very popular cross-platform messaging service" with an install base of 10 million to 50 million users exposed telephone numbers from the address book.

While the researchers didn't identify the vulnerable apps, descriptions such as a "generic online banking app" suggest that most if not all of them were offered by third-party developers rather than the websites or services they connected to. Readers who are concerned their apps are vulnerable should start their inquiry by looking at those that are developed by outside firms.

Locking down Android

The paper lists a variety of ways SSL protection can be improved on the Android platform. One is for the type of static analysis the researchers performed to be done at the time a user installs an app. Another is to use a technique known as certificate pinning, which makes it much harder for an app or browser to accept fraudulent certificates like the ones used in the study. The researchers also recommended that Google engineers develop new ways for Android to make clear when the connection provided by an app is encrypted and when it isn't. Recent reports indicate Google may also be equipping Android phones with its own malware scanner.
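In practice, pinning can be done with the standard Java APIs already available on Android. Here is a minimal sketch, not the paper's prescribed implementation: the fingerprint constant is a made-up placeholder, and a production app would normally enforce the pin inside a custom TrustManager so the check happens during the handshake itself.

import java.security.MessageDigest;
import java.security.cert.Certificate;
import javax.net.ssl.HttpsURLConnection;

// Hypothetical pinned value: the hex-encoded SHA-256 hash of the server's
// certificate, baked into the app at build time.
static final String PINNED_SHA256 = "replace-with-real-fingerprint";

// Compare the server's actual certificate against the pin after the TLS
// handshake but before any sensitive data is sent.
static void requirePinnedCertificate(HttpsURLConnection conn) throws Exception {
    conn.connect();
    Certificate[] chain = conn.getServerCertificates();
    byte[] digest = MessageDigest.getInstance("SHA-256").digest(chain[0].getEncoded());
    if (!toHex(digest).equals(PINNED_SHA256)) {
        conn.disconnect();
        throw new SecurityException("Server certificate does not match pin");
    }
}

static String toHex(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for (byte b : bytes) sb.append(String.format("%02x", b));
    return sb.toString();
}

Because the expected certificate is known in advance, a forged certificate from a compromised or rogue CA fails the comparison even though it would pass ordinary chain validation.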

The paper made no attempt to measure the security provided by apps available for Apple's competing iOS platform. One possible reason the researchers focused on Android apps exclusively is that the openness of the Google platform made it easier to perform static analysis. That, in turn, made it possible to zero in on the apps with SSL implementations that exposed sensitive user data. It would be interesting to see the results of a similar analysis performed on the 13,000 most popular iPhone apps.

91 Reader Comments

It is extremely common for any SSL-using app on any platform (not counting the browser) to silently accept outdated, revoked, otherwise invalid, or even completely arbitrary SSL certs, because that's better than presenting an error to the user, right? Users hate errors.

Is it? The article just wonders at this, but you know it to be the case? Okay, if you'll just provide the proof, that'd be nice.

I hate the idea that if Android does something stupid, then it's sort of okay because iOS does the same thing, or vice versa. Giving a free pass isn't good enough. Google and/or developers need to do something about this, and if Apple and/or iOS developers are doing the same, they need to be hauled over the same coals.

As usual the media will keep silent on this issue because this news doesn't involve Apple.

(Edit: Oh wait.. the article did mention Apple!)

They will keep silent on the fact that it also affects Apple, and only mention it against Android.

It's the exact same thing that happened a little while ago with Dead Trigger. It went free on Android, the developers claimed it was due to piracy, and the media and the developers went on a huge anti-Android campaign. A week later, when it went free on iOS, not a peep was heard.

Quote: "Is it? The article just wonders at this, but you know it to be the case? Okay, if you'll just provide the proof, that'd be nice."

Right, I understand that just asserting I am a professional security researcher does not count as strong proof, but I can't exactly fax you the NDA'd reports we make for our clients. So instead I will stake my professional reputation for you: my name is Melissa Elliott and I work at Veracode as an information security researcher. I am asserting that it is true that I see serious SSL issues with accepting invalid certificates in iOS apps on a regular basis comparable to Android, including in very popular apps with a good corporate reputation.

"The paper made no attempt to measure the security provided by apps available for Apple's competing iOS platform. One possible reason the researchers focused on Android apps exclusively is that the openness of the Google platform made it easier to perform static analysis. That, in turn, made it possible to zero in on the apps with SSL implementations that exposed sensitive user data. It would be interesting to see the results of a similar analysis performed on the 13,000 most popular iPhone apps."

Not to make light of potential security concerns, but it seems a bit convoluted to condemn Android security because, unlike other platforms, it's readily measurable. Optimizing an alarmist headline is akin to punishing the platform for being open.

A possible reason is that the last time someone did that, Apple banned them from developing for the iOS market. It gives a chill to those who would show that Apple can do wrong.

For those who don't remember the incident: Apple threw out Charles Miller for reporting a huge security flaw in iOS; after three weeks of Apple just not caring, he went and did a proof-of-concept. Apple then decided it was easier to gag someone than to admit there was a problem (they fixed it afterward, once it became well known). http://www.gizmag.com/ios-security-blacklisting/20964/

What I find irritating is that researchers find these apps yet don't let users know what they are, thereby allowing people to continue using a compromised application.

With so many users on ICS and many of them stuck there, it is a disservice. Arguably, a majority of those affected would probably still remain unaware.

Since there seems to be some frustration with me in this thread that I can't namedrop the iOS/Android apps I've been involved with auditing: if it makes you feel any better, in most of the cases I've seen, it was because the company that actually made the app was looking for a security review and was directly informed, and since they requested and paid for the security review themselves, they tend to be inclined to do something about it.

Kinda satisfying to see an app you actually use work its way through the process and three weeks later an update with the simple description "bug fixes" turns up on the App Store. So things are getting better.

Thank you for that bit of info. That's a little different than my inference that it was an independent audit. I'm glad that companies developing apps are taking steps to discover and, hopefully, patch vulnerabilities.

Quote: "...since they requested and paid for the security review themselves, they tend to be inclined to do something about it. ... So things are getting better."

Sounds nice until you think about it: why are these companies so cavalier as to treat such a security check as an afterthought to begin with? Heck of a way to reward those willing to take a chance on your app and become an early adopter - "now with free security holes for the first ten thousand users!" I know it's not as simple as all that, but it's one of my primary reasons for avoiding apps for at least a few months after they're first released.

This is exactly one of the reasons why I don't consider Android a very secure OS. If I really need to transfer money between accounts, check my e-mail, use twitter, or anything else on a public network, I'm far more likely to open up a netbook running Linux and tunnel everything through SSH to my home server, which is secured with multiple firewalls. From the server, it's a direct wired connection to my ISP. It may not be the perfect solution, but I'll bet most ISP's networks are far less creepy than public Wi-Fi.

While everyone should pay attention to this article, and possibly take the numbers with a grain of salt, there's an even bigger problem facing smart phone users all over the world. In most of the developed markets, where our phones are starting to replace our wallets, smart phone thefts are on the rise. With the current state of Android, there really isn't much stopping a thief from pulling a ton of information off a given device in most cases. Using a lock screen? Connect to a computer and turn on "USB Storage" mode, or simply eject the microSD card. If the thief is tech savvy, or knows someone who is, they stand a good chance of bypassing the lock screen by rooting the device. Then they could get all the information they want through ADB. Granted, there are encryption apps for Android, but not many of them feature "auto locking," which cuts off access to the encrypted data after a short amount of time. While I personally have no experience with Apple products, I believe it may be possible to pull sensitive information from iOS devices as well.

Quote: "Apple threw out Charles Miller for reporting a huge security flaw in iOS; after three weeks of Apple just not caring, he went and did a proof-of-concept."

Way to completely misrepresent the facts. You should work for one of the campaigns.

Charles Miller was not thrown out for reporting a security flaw; he was suspended for putting an app up on the store that actually exploited a security flaw. There are lots of ways Miller could have publicized having found that flaw without actually putting an app in the store.

Does it matter that he was banned for uploading a PoC app? If the vendor refuses to address the vulnerability, you release it by any means necessary. It's a courtesy that Miller found and reported the vulnerability. If anything, he should have gotten a free pass for being the guy who reported it in the first place; after all, he did spend his time and effort identifying the bug his PoC app exploited... and for free!

I'm not entirely happy with this article, if only because - as some comments show - these waters are very muddy.

If I'm reading this article properly, this is an Android problem in the same way that security issues with Adobe Acrobat Reader are a Microsoft Windows problem.

Microsoft didn't write it. They have no control over it. The fault is entirely with Adobe. But because the platform most people will run Acrobat Reader on is Windows, it's suddenly a "Windows problem".

Quote: "I am asserting that it is true that I see serious SSL issues with accepting invalid certificates in iOS apps on a regular basis comparable to Android, including in very popular apps with a good corporate reputation."

You may well be spot on in what you say, but if you're unable to show proof then you shouldn't post it as a fact. You believe that iOS security is as weak in this regard based on your experiences, but this survey reviewed 13,500 apps and I think your experience may not extend that far.

I completely accept you at your word, but then when I talk to a mechanic they tell me that cars are failing all the time, and when I talk to a doctor they tell me that people are always sick.

As I said, if someone can show the same study about iOS apps, I'd be more than happy to castigate Apple and iOS developers for being crappy with security. We're a long, long way past the Apple versus Microsoft days when talking about security, and there aren't really any excuses these days for poor implementations.

I need neither proof nor a wild guess. This type of security story has played out how many times, in how many areas of tech?

A subset of developers are lazy and hackish when it comes to securing apps and systems. This is human nature, and it is platform agnostic.

I'd reckon the 8% mentioned in this survey might be applicable to the market at large, regardless of platform.

Quote: "Does it matter that he was banned for uploading a PoC app?"

Not that this is on-topic, but, yes, the distinction does matter. It's outlined in the iOS Developer Program License Agreement, under section 8(c):

Quote:

8. Revocation

You understand and agree that Apple may cease distribution of Your Licensed Application(s) and/or Licensed Application Information or revoke the digital certificate of any of Your Applications or Your Passes at any time. By way of example only, Apple might choose to do this if at any time:

While it is indeed unfortunate that he was kicked out, the distinction is useful because Charlie Miller willfully broke the terms of that agreement. I do understand where you're coming from with regard to "if the agency receiving the flaw does nothing, force the issue." I haven't seen a link that outlines the timeframe between when Miller reported the issue to Apple and when he planted the app, but you may have a point if you can provide one and it demonstrates an unreasonable amount of time.

Otherwise, it should be noted that when news broke about the issue (which included reporting that Miller planted an app on the store), Apple pushed out a software update soon after that addressed it. Reporting issues and planting an active exploit on infrastructure served by Apple to end users are two wildly different things.

With regard to iOS, Apple has typically been responsive on the issue of security flaws, and it has been mostly improving on OS X as well. It may seem like small steps, but for a company typically tight-lipped about product flaws in general, Apple has made significant strides in security, and in working with the security community, since '06 or '07. Yes, they have a long way to go, but they are also improving. [/end-off-topic-post]

Goog-nix's ecosystem needs to do a better job of copying Apple's ecosystem. Then security will be improved. In the meantime, pretending it's 'more open' while it's full of holes is a good strategy. :-P

Q: When will the phone OS, plus all of the monitoring, snooping, and registering code, be available so folks can actually see how much is being collected, where, and reported to whom? That'll help me see if there is hacked spyware in my Google phone, introduced through the openness.

Quote: "You may well be spot on in what you say, but if you're unable to show proof then you shouldn't post it as a fact."

Wow, is the voting here really representative of Ars? Someone points out that while a claim's reasoning appears sound, even acceptable, without proof or some method to substantiate it the claim is at best a well-reasoned theory and not absolute fact - and that's greeted with down votes?

Quote: "I'd reckon the 8% mentioned in this survey might be applicable to the market at large, regardless of platform."

I think the problem is worse for mobile apps. On most OS platforms there is at least one standard SSL library that applications can use and usually a standard list of trusted certificate roots somewhere. For example, on Windows (my experience is mainly with Windows dev, so I can't talk too much about OS X or Linux libs) you can just have Schannel set up an SSL tunnel and validate the certificate chain against the installed root list for you, leaving you to do only the most basic check yourself - the hostname.

However, it looks like mobile platforms lack useful standard OS SSL libs, forcing app developers to find their own and then hack them down to the bare essentials and/or reduce their root list to keep the download from getting too big. If app devs have stripped out all the higher-layer support and the root list and tried to do it themselves, it's hardly surprising they have skipped basic checks...

I have no doubt that many applications on iOS, Windows, Linux, whatever, all suffer from the same problem. Security is hard, and most people have no idea what they're doing. The modern "let's reuse everything" philosophy is very, very dangerous, since security is determined by the weakest link. Poor examples everywhere don't help either: for example, here is a sample line of code from the Java 6 API docs on how to create a Cipher:

Cipher c = Cipher.getInstance("DES/CBC/PKCS5Padding");

Yes, Sun is suggesting DES in the 21st century, a cipher that can be cracked in a couple hundred days on a $1,000 desktop or a couple of hours on a $100,000 FPGA array.
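For contrast, the same one-line API accepts far stronger algorithms. A minimal substitute (assuming the runtime's crypto providers support AES, as any modern JRE does) would be:

// Same API, but with 128-bit or larger AES keys instead of DES's 56-bit key.
Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");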

Quote: "Wow, is the voting here really representative of Ars? ..."

Sorry, that's not a fair conclusion at all. There is a world of difference between "I think x because it makes a lot of sense" and "I know x because of information I cannot share." In the former case the OP can be correct, misinformed, or a liar, but in the latter (if we discount the possibility they are a liar) we must accept that they are better informed than we are. We can still question the quality of their methods or whether their data is really representative (and we should, as we cannot judge for ourselves), but that does not mean we can just dismiss it as mere "well reasoned theory"; it is at worst an unverified first-hand account. Not all of us have the luxury of seeing all the evidence available to our peers/collaborators 100% of the time, and we are often forced to work with unverified information with the understanding that it will be published and peer reviewed, or just become public knowledge, at a later date.

Quote: "I think the problem is worse for mobile apps. On most OS platforms there is at least one standard SSL library that applications can use and usually a standard list of trusted certificate roots somewhere."

I don't know about iOS, but Android does have a list of trusted certificates the user can edit, and by default that list is checked; if the check fails, the connection isn't allowed to complete. It requires a bit of extra work to bypass these checks.

If you are going to bypass it, then I'd guess the right procedure is to attempt to connect letting the system do the check, and if that doesn't work, throw up a warning dialogue and ask the user if they wish to accept an unverifiable certificate. The extra code would be all of 100 lines.

The virus checker case is perplexing. They have a known cert they have to authenticate against. If you are going to do your own certificate store checking, why not do the extra 10% and add your own cert to the store rather than just accepting all and sundry?
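That extra 10% really is small. Here is a rough sketch of trusting exactly one shipped certificate with the standard Java APIs - the method name and key store alias are invented for illustration:

import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

// Build an SSLContext that trusts only the one known certificate the app
// ships with (e.g., the signature server's cert bundled as a resource).
static SSLContext contextTrustingOnly(InputStream pinnedCert) throws Exception {
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
    ks.load(null, null);  // start with an empty key store
    ks.setCertificateEntry("update-server", cf.generateCertificate(pinnedCert));
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(
            TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(ks);  // trust nothing but the entries in ks
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(null, tmf.getTrustManagers(), null);
    return ctx;
}

Connections made through this context reject everything except the bundled certificate, so an attacker's self-signed or wrong-host certificate never gets accepted in the first place.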

Quote: "...Charles Miller was suspended for putting an app up on the store that actually exploited a security flaw. There are lots of ways Miller could have publicized having found that flaw without actually putting an app in the store."

There's only one way to prove that you can successfully put a compromised app in the store: by actually putting a compromised app in the store. Miller did what he had to do; he should have gotten a pass for it if security were more important than the appearance of security.

If anything, being more often in the spotlight of security analyses gives Android a better chance of fixing the holes than an OS that isn't, just as doing regular health check-ups gives a person a better chance of discovering and treating a disease.