Posted by samzenpus on Monday June 04, 2012 @04:48PM
from the protect-ya-neck dept.

Trailrunner7 writes "Google's Android platform has become the most popular mobile operating system both among consumers and malware writers, and the company earlier this year introduced the Bouncer system to look for malicious apps in the Google Play market. Bouncer, which checks for malicious apps and known malware, is a good first step, but as new work from researchers Jon Oberheide and Charlie Miller shows, it can be bypassed quite easily and in ways that will be difficult for Google to address in the long term. Oberheide and Miller, both well-known for their work on mobile security, went into their research without much detailed knowledge of how the Bouncer system works. Google has said little publicly about its capabilities, preferring not to give attackers any insights into the system's inner workings. So Oberheide and Miller looked at it as a challenge, an exercise to see how much they could deduce about Bouncer from the outside, and, as it turns out, the inside."

This is why I hate Android in the corporate environment. While I love open technology for personal use, trying to manage corporate security with Android in the mix is a nightmare. I can have a nice pretty policy that makes upper management happy, but I have no really good way of enforcing it. For all the pain in the butt that BlackBerry is, at least it was designed around corporate security. Apple is a step above Android in this regard, but it is still not designed with corporate use in mind.

And yet you don't have that pesky rooting problem, meaning that users have no way around the policies you need to enforce. Not to mention being able to enforce firewall policies that ensure your company data isn't accessible to any app. (Heck, you can even restrict them to a whitelist of allowed apps if you want.)

Even the partial root exploit found for the PlayBook has been corrected - and no new root is in sight. (Good thing since this is the foundation of their new platform.)

I think the parent wants a little more than a lock screen policy. I want:

1) Disable installation of apps from outside the market
2) Disable installation of market apps, or restrict them to a whitelist
3) Be able to set up a corporate store for internally developed apps; this could work, but you must enable installation of outside-market applications (see 2)
4) Lock Google account addition and removal

Not sure about the last one but RIM's BlackBerry Mobile Fusion [blackberry.com] should handle the first 3 for both iOS and Android. Combine that with their BlackBerry Balance [blackberry.com] and you have a pretty robust way to control most mobile devices while letting users keep their personal stuff separate but on the same device.

I think the parent wants a little more than a lock screen policy. I want:

1) Disable installation of apps from outside the market
2) Disable installation of market apps, or restrict them to a whitelist
3) Be able to set up a corporate store for internally developed apps; this could work, but you must enable installation of outside-market applications (see 2)
4) Lock Google account addition and removal

So what you need to do is set up your own app store where your IT staff can place vetted apps. Then, for all the Android/iOS/WP7/BB devices out there that you own, configure them to use your corporate app store hosted on your network. Then lock out the ability to install other app stores, and give users a way to request that apps from other stores be vetted and added to yours.

If need be, pick up CyanogenMod and modify it for your phones to give you that capability.
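The install gate of a corporate store like the one described could boil down to a whitelist check; this is a minimal sketch, and the package names and APK bytes here are invented for illustration:

```python
import hashlib

# Hypothetical allowlist maintained by IT: package name -> SHA-256 of the
# exact APK build that was vetted. Entries here are made up.
VETTED_APPS = {
    "com.corp.timesheet": hashlib.sha256(b"timesheet-v1.2").hexdigest(),
}

def install_allowed(package: str, apk_bytes: bytes) -> bool:
    """Permit installation only of byte-for-byte vetted builds."""
    expected = VETTED_APPS.get(package)
    return expected is not None and hashlib.sha256(apk_bytes).hexdigest() == expected

print(install_allowed("com.corp.timesheet", b"timesheet-v1.2"))  # True
print(install_allowed("com.corp.timesheet", b"tampered-build"))  # False
print(install_allowed("com.evil.flashlight", b"anything"))       # False
```

Hashing the whole APK means even a "minor update" has to go back through vetting, which is exactly the property you want for point 2 above.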

It probably wouldn't take much work to create a custom image to do all of that. Modifying the package installer to require a certain key signature would take care of the first three. Item 4 might require a bit more effort, but not much. You'd never be able to convince your employees to cripple their own devices to meet these requirements, so you're looking at company-owned devices no matter what, right? In which case you're buying a bunch of devices in bulk. You may as well spend a little extra and

Apple is a step above Android in this regard, but it is still not designed with corporate use in mind.

If you're referring to Apple's mobile platform (it's not clear to me that you are), iOS and iDevices are very well suited to corporate security and corporate use, especially since iOS 4.0. You didn't give examples, so I'll quickly mention Apple Configurator and hardware data-at-rest encryption.

I cannot believe that I just read on Slashdot that Google was simply relying on security through obscurity to protect Bouncer and the Google Play market.

"research without much detailed knowledge of how the Bouncer system works. Google has said little publicly about its capabilities, preferring not to give attackers any insights into the system's inner workings."

Any advertising-supported app will need at least "Coarse location" to determine which advertisements are relevant, "Internet" to download newly placed advertisements, and "Device state and identity" to make a unique user identifier so that each user sees relevant advertisements. How would you recommend funding the development of a free (as in beer) application without those permissions? Not all countries have paid applications. Or, if I misguessed which permissions you were referring to, then which permissions did you mean?

The problem is that they are so vague about why each permission is needed. When presented with the list of things an app has permission to do, the user should also see why the app needs each one and what, specifically, the app is going to do with it.

As an example, I pulled up a free flashlight app; it asks for the following permissions.

This is an app that turns on the flash on your phone (as well as any other available lights), so it doesn't really need any of the permissions it asks for, and you have no idea what it is going to use those permissions for.

In this case, since it is just a flashlight app, it is very easy to tell it is asking for permissions for things it should not be doing. But what do you do when the app you want asks for permissions it would technically need, and you have no idea whether it is going beyond what functionality requires into more nefarious operations?
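The easy half of that problem, spotting permissions that exceed what an app category plausibly needs, can be sketched mechanically. The category table and the baseline below are illustrative assumptions, not a real policy:

```python
# Illustrative baseline: what a flashlight app plausibly needs. On older
# Android versions the camera permission covers driving the LED flash.
EXPECTED = {
    "flashlight": {"android.permission.CAMERA"},
}

def excess_permissions(category: str, requested: set) -> set:
    """Return requested permissions beyond the category's plausible baseline."""
    return set(requested) - EXPECTED.get(category, set())

requested = {
    "android.permission.CAMERA",
    "android.permission.READ_CONTACTS",
    "android.permission.INTERNET",
}
print(sorted(excess_permissions("flashlight", requested)))
# ['android.permission.INTERNET', 'android.permission.READ_CONTACTS']
```

The hard half, deciding whether a technically justified permission is being abused, is exactly what a static list like this cannot answer.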

This is one area where Android is ahead of iOS. If an app's permissions don't allow it to read contacts, then unless it comes with a built-in root exploit like rageagainstthecage or zergRush, the app is not going to be able to touch the contact list.

However, as stated above, people see the app on the Play Store, tap "Download", see some writing, then tap the button again to get the app. Then, for most users, the "fun" begins. Of course, with a rooted phone and DroidWall and/or LBE Privacy Guard, the malicious app isn't going

This is so obvious I'm surprised it has to be stated. It is no different from the situation in iOS either. Everyone always knew there were dozens of methods one can use to bypass these gatekeepers.

The problem is one of accountability. Apple, through iOS, maintains a very interesting relationship with developers: should someone manage to sneak an app through, they can "out" that developer very easily because they have full billing details for that developer. If you know you're not anonymous in the App Store, you're a lot less likely to write malware when it can be traced back to you.

It's the foundation for Gatekeeper in Mountain Lion - here Apple will not vet the app, but they will request a small fee for a signing certificate. If you write malware and distribute it that way (because it's default on OS X), again it's easy to know who did it (or who didn't protect their keys).

Sure, OS X will keep the "fully open" option, just not as the default, for open-source projects (though some non-GPLv3 projects are getting certs as well, e.g., Firefox) and for developers (who would hopefully not try to break their own machines...).

That's the big difference - Apple takes care of a social problem via social means (do you really want to be credited with creation of malware?), Google's using technology to do it (via scanners and such).

It's also why SVN's "blame" tool is quite handy at keeping dud checkins from happening: build breaks are much less frequent, and usually due to inadvertently missing a file or three rather than checking in without compiling or testing (and yes, I've seen it happen; someone checks in a quick fix without seeing a syntax error...).

So your point is that Google should disallow gift cards for a developer account. If Apple knows how to disallow gift cards, I assume it would be pretty easy for Google to do the same. Does anyone know why they don't disallow these cards, or are they just dumb?

The problem is one of accountability. Apple, through iOS, maintains a very interesting relationship with developers: should someone manage to sneak an app through, they can "out" that developer very easily because they have full billing details for that developer. If you know you're not anonymous in the App Store, you're a lot less likely to write malware when it can be traced back to you.

Because setting up false accounts or hijacking other people's accounts for the purpose of their activities is something criminals already do routinely.

And here I thought researchers were looking for a way to break into the secret google night clubs. Everyone knows that's where all the cool nerds are.

Keeping with this analogy, it does seem about as effective as an actual bouncer. While most drunken idiots are being thrown out on the street, the dangerous, more vile types get to stay inside and ultimately take drunk chicks home. I suppose it's nice to have fewer people throwing up on you, but getting stabbed at a nightclub is still getting stabbed at a nightclub. You could make the argument that there's a pat-down and weapons check at the door, but let's be real: if you were going to bring a weapon to hurt someone in the first place, you'd be smart enough to hide it and get in.

If that didn't make sense to you, basically this means Bouncer will only stop the poor malware writers, and the big boys will just skirt around the security anyway. Which really means little, because I'd rather get rid of the big players and be stuck with a bunch of obvious annoyances than remove the annoyances and have a false sense of security about my apps. I should give Google credit, though; at least it's a start. Hopefully by this time next year they'll have managed to match common sense 2014 in terms of malware protection.

While browsing the Google Play store, I have started to notice a number of apps that have 1000+ good reviews, all short and empty, like "Amazing" or "!!!".

You then tap "Download" to look at the permissions, and the app asks for everything under the sun, even though the app might be a game or a utility that does one thing, and has zero need to be able to read and write contacts.

Of course, for users who know what they are doing, stuff like this is as close to a Trojan as one can get, or at best some basic game coupled with a malware payload. However, for novice users who just want to use a phone and who think permissions are something to obtain from their teacher so they can go use the bathroom, the phrase, "babe in the woods" comes to mind.

I hate lobbing brickbats at Google since I like the Android ecosystem and Android phones. Android even has a stronger security model than iOS. However, Apple does one thing which precludes the need for that much security in iOS, and that is to be an active and stern gatekeeper. iOS devs don't get their app stomped, then one hour later turn up again with the same app under a different name.

Google needs to get on the ball and make two tiers of their Play Store. The first (default) tier would be like Amazon, where all apps are not just sent past a rudimentary scanner, but are actively vetted. This includes not just the original version of the app, but any updates, so malware can't be slipped in.

To boot, a higher fee is charged to play in this game, partially to offset the cost of the enhanced filtering, and partially to discourage people from making accounts and trying to palm off the same malware-ridden app under different names.

In the top tier, Google would need to have some very stringent policies. For example, if an app gets rejected from account "A", submitting the same app under account "B" with slight changes means that account "B" gets suspended for the first offense, and closed down for good after the second.
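Detecting that a rejected app has been resubmitted "with slight changes" could be approximated by comparing the file contents of the two submissions; this is a toy sketch, and the file names, contents, and the 0.4 threshold are all invented:

```python
import hashlib

def file_hashes(files: dict) -> set:
    """Hash each file in a submission (filename -> bytes)."""
    return {hashlib.sha256(data).hexdigest() for data in files.values()}

def similarity(sub_a: dict, sub_b: dict) -> float:
    """Jaccard similarity between two submissions' file contents."""
    a, b = file_hashes(sub_a), file_hashes(sub_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

rejected    = {"classes.dex": b"payload", "lib.so": b"native", "icon.png": b"old icon"}
resubmitted = {"classes.dex": b"payload", "lib.so": b"native", "icon.png": b"new icon"}

score = similarity(rejected, resubmitted)
print(score)  # 0.5: two of the four distinct files are shared
print("same app, suspend account" if score > 0.4 else "looks new")
```

A real store would need fuzzier comparison (repackaged APKs rarely share exact bytes), but the policy mechanism is the same: score against the rejection history before the app goes live.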

Of course, Google can keep their second tier (which would be the same as Google Play now), but maybe put up some sort of warning that once users exit the vetted tier, they are essentially on their own and proceed at their own risk. This tier would be only one step up from downloading an app from a website and sideloading it; not much security, but better than none.

Google needs to do something here, because malicious apps are causing issues, not just in China but here in the US. Android's reputation is already being tarnished by something that is not the OS maker's or the hardware maker's fault, and Google needs to step up to the plate and take on the role of active gatekeeper, unless they want to see customers abandon the platform for ones with a better gate guardian, even if that means people buying far more locked-down devices.

This is why for the previously suggested "top tier", Google should finance the time and money it takes to run a tight ship with a higher developer fee.

This doesn't mean Google should stop offering $25.00 access to the Play Store, but it does mean having a segment of the market where things are thoroughly vetted and found suitable for consumption. Most people reading /. have enough sense to check permissions and look for bogus reviews before downloading and running an app, but unfortunately, people assume if they download

Last time I checked, Android user adoption was accelerating. Can a crappy experience hurt sales? Sure. Can horror stories about malicious app X or Y scare the crap out of some people using their smartphone of choice? Sure. At the end of the day, whether you're using an Android phone, an iOS phone, a BlackBerry, or a Windows phone, if you allow applications to be run, someone will find a way of exploiting it. Everyone's information is in jeopardy, and if you're not willing to make the assumption otherwise, then

I hate lobbing brickbats at Google since I like the Android ecosystem and Android phones. Android even has a stronger security model than iOS. However, Apple does one thing which precludes the need for that much security in iOS, and that is to be an active and stern gatekeeper.

The thing is, with Android being open, this can be done without Google doing it. Unlike iOS, Android isn't locked to a particular store. If there's demand for a pre-vetted store and Google doesn't provide one, anyone else can set it up. Personally, I don't think there will be enough demand, because the vast majority of users don't take security seriously. Google's solution to the problem is the free market; Apple's is much closer to central planning, where Apple does what they think is best for you (they may even be right).

The key is for the first store the user encounters, be it Google's, Amazon's, or an Android version of Cydia, to handle app security rigorously. Of course, users should be able to swap to another store (and know the consequences of moving from an actively maintained store to a marketplace where malware enforcement is reactive, not proactive). It is about expectations. People expect on iOS to be able to download everything and have it be safe for human consumption. Pretty much

The way to solve security issues for novices is for someone to build a more secure store, which of course would have to be the default store, replacing the App Market, for novice users to find it.

No, not really. If someone can install an app, they can install another app store (which is really nothing more than an app). Assuming novices can install apps (which is fairly reasonable, or the whole argument is moot anyway), it's not a question of whether it's too difficult for novices; it's a question of whether they can be bothered. That's what I mean about demand: it's unlikely a large number of people will be motivated to change to a pre-vetted app store. It's the same sort of inertia that was seen wi

In what way do you think the Android security model is "stronger"? It is at best equivalent, and since it includes things like attached storage as part of the fundamental system, it has potential exploit vectors that iOS does not have at all.

The Android security model is also MUCH weaker in terms of real-world user security around device resources. Asking a user what permissions an app should be allowed before they run it makes no sense to me; far better is the iOS permission model, where it asks on first use.

You got some good points there. The only problem is that Google doesn't consider us Android users customers; we're just eyeballs, so that the real customers, Procter & Gamble, Nike, etc., can shove commercials at us. Google only cares about security insofar as they do not want it to be so bad that it stops more eyeballs from buying more ad-viewing-and-data-collecting devices.
Not that this precludes them from working more on the security issues. I'm certain that they are. I am also certain that if we actually

And every time they're caught, the app will be pulled and uninstalled from people's handsets, and if they want to continue their malicious activity, they will need to pay another dev fee to make a new account and keep putting malware on the store. Malware authors typically operate on small margins, from what I have read (no convenient sources; if you have one, please post it), so the break-even point might be high enough that they can't make money on it.

It reminds me of an anti-spam solution proposed years and years ago: Make a new email system in which it costs a penny to send an email. This is low enough that normal users don't care, but high enough that spammers' conversion rates of 1/12,000,000 (from Wikipedia) aren't enough to let them keep spamming for V14GR4.
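The arithmetic behind that proposal is easy to check. Using the 1 in 12,000,000 conversion rate cited above, and assuming a made-up revenue figure per successful spam sale:

```python
# Back-of-the-envelope: does a penny per email kill spam economics?
cost_per_email = 0.01            # dollars, the proposed per-email fee
conversion_rate = 1 / 12_000_000 # figure cited from Wikipedia above
revenue_per_sale = 100.0         # assumed take per successful sale (invented)

# Cost of sending enough email to land a single sale.
cost_per_sale = cost_per_email / conversion_rate
print(f"cost to land one sale: ${cost_per_sale:,.0f}")  # $120,000
print("still profitable?", revenue_per_sale > cost_per_sale)  # False
```

At those rates the spammer pays roughly $120,000 in postage per sale, so even a wildly generous revenue assumption doesn't come close to breaking even.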

The issue is that the dev fee is so low, a malware writer can get accounts stomped left and right, pony up another $25, and have the same app in the Play Store in a matter of minutes.

Malware makes lots of money, so even if the app is just in the store for a few days before Google yanks it, it has likely gotten enough info (especially if the app knows what to look for on the SD card) to make it worth at least the $25, just in credit card fraud, bank fraud, or ID theft. Heck, even the classic scam of "OMG, I

And all of a sudden you simply forgot that no spammer ever sends spam from his own computer. Enforcing a send-mail tax would only hurt the millions of Windows users who are (unwillingly and unknowingly) members of the spammers' botnets.

Not if the accounting system for email cuts off the user's ability to send mail when they hit a limit. For an average user, 50 emails in a single day is probably enough; even for power users, 500 is plenty. The user then has to log into something and authorize another 50, or whatever.
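That accounting scheme is simple enough to sketch; the class name and the allowance numbers here are illustrative:

```python
class SendQuota:
    """Toy model of the scheme described: a daily send allowance that must
    be explicitly topped up by the user once exhausted."""

    def __init__(self, allowance: int = 50):
        self.remaining = allowance

    def try_send(self) -> bool:
        """Deduct one send, or refuse if the allowance is spent."""
        if self.remaining <= 0:
            return False  # blocked until the user re-authorizes
        self.remaining -= 1
        return True

    def authorize(self, extra: int = 50) -> None:
        """The user logs in and explicitly grants another batch."""
        self.remaining += extra

q = SendQuota(allowance=2)
print([q.try_send() for _ in range(3)])  # [True, True, False]
q.authorize(1)
print(q.try_send())                      # True
```

A bot on a compromised machine burns through the quota almost instantly and then goes silent, while a human sender rarely notices the cap at all.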

Wow. At first I was going to not RTFA and just say this would be something we have all known for over 20 years: blacklist fingerprint malware scanners are a dumb idea and guaranteed to not work, whether we're talking about Windows PCs in 1995 or smartphones in 2015.

But actually Google is doing something pretty interesting. They run the malware deliberately, inside a sandboxed emulator, and then target behavior rather than appearance (a signature). That's good. It's not rigorous, but it's a good idea for finding malware.
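The shape of behavior-based detection, as opposed to the signature scanning dismissed above, can be shown in a toy form. The behavior names and the blocklist below are invented for the example; a real system would observe syscalls or API traces from the emulator run:

```python
# Judge an app by what it was observed doing in a sandboxed run,
# not by what its bytes look like.
SUSPICIOUS = {"read_contacts_silently", "send_premium_sms", "upload_device_id"}

def verdict(observed: set) -> str:
    """Flag the app if any observed behavior is on the suspicious list."""
    return "flag" if SUSPICIOUS & observed else "pass"

print(verdict({"draw_ui", "play_sound"}))        # pass
print(verdict({"draw_ui", "send_premium_sms"}))  # flag
```

The catch the researchers exploit is that malware which detects the emulator simply doesn't exhibit the suspicious behaviors during the observed run.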