siliconbits writes "BBC News has shown how straightforward it is to create a malicious application for a smartphone. Over a few weeks, the BBC put together a crude game for a smartphone that also spied on the owner of the handset. The application was built using standard parts from the software toolkits that developers use to create programs for handsets. This makes malicious applications hard to spot, say experts, because useful programs will use the same functions."

In many ways mobile phones are more secure than desktops. Sandboxes for apps, strong permissions schemes, app certification etc. But to counterbalance that, they have new facilities as standard that are more dangerous if compromised. Mobile phone charges, SMS, GPS, microphone, camera etc.

I agree mobile apps are already by default more secure than desktop apps, but that is why the process of allowing the user to remove some security blocks is even more crucial to get right. Because a mobile app is only as secure as the degree to which you maintain the security blocks around it.

On a computer you can have a firewall - you can't on a phone. Also, for Android, because of Google's stupid design, if an app wants to include ads it needs to have internet access. So everything wants to go on the internet.

What they should have done was have an OS module which returns the ads, so the app wouldn't need internet access.
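The proposal above - let the OS fetch ads on the app's behalf so the app itself never needs the INTERNET permission - could be sketched like this. Everything here (`AdService`, `AdBanner`, the stub) is invented for illustration; no such Android API existed:

```java
// Hypothetical sketch: the OS, not the app, owns the network access needed
// for advertising. The app only asks the OS-side service for the next banner.
// All names here (AdService, AdBanner, StubAdService) are invented.
interface AdService {
    AdBanner nextBanner(String appId); // OS-side call; networking stays in the OS
}

final class AdBanner {
    final String imageUrl;
    final String clickUrl;
    AdBanner(String imageUrl, String clickUrl) {
        this.imageUrl = imageUrl;
        this.clickUrl = clickUrl;
    }
}

// Stub standing in for the OS component, so the sketch is self-contained.
final class StubAdService implements AdService {
    public AdBanner nextBanner(String appId) {
        return new AdBanner("https://ads.example/banner.png",
                            "https://ads.example/click");
    }
}
```

Under this design an ad-supported single-player game could ship with no network permission at all, which would make a spying app much easier to spot in the permission list.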

What they should have done was have an OS module which returns the ads, so the app wouldn't need internet access.

Yes, they should have... but "hindsight is 20/20" applies here: Android is only two years old, and it can hardly be held accountable for missing the idea until Apple came out with it as a major OS/phone paradigm and loosed it on the public a couple of months ago.

"A very obvious tell-tale sign on the phone is all of a sudden your battery life is deteriorating," he said. "You wake up one morning and your battery has been drained then that might indicate that some of the data has been taken off your phone overnight."

*snicker* Quick! Put more data in my phone to charge it back up!

I can see where he's coming from with that, though: a smart smartphone conserves battery whenever possible by powering down components, so constant active use could drain the battery more quickly. But any GOOD malware would only send out data at intervals, not all the time, so this would be a useless check. A BAD malware author would learn this pretty quickly after he DDoSes his own servers.

What's the difference between "malicious" and "beneficial", when it comes to software?

Just about every "malicious" action that malware takes is not "malicious" for what it actually does (setting cookies, recording passwords, sending data in response to user actions, creating accounts, encrypting things). All of these are also functions you sometimes want software to perform. The maliciousness is in who the data gets sent to, whether the software does one thing while presenting another in the UI, or whether the behavior is unannounced. So how can you programmatically tell malware from not-malware? You can't. And therefore, if the user has the ability to install software, all you have to do to get malware onto a device is lie about it.

Malware isn't defined by what it does. It's defined by deception and lack of consent, and only by deception and lack of consent.

And if you want widespread adoption of your malware? Just wait. Make the "trojan" part of the malware (the game, app, etc.) useful, and do ONLY that part, for a while. Don't start stealing passwords until 6 months later. Include the encryption-extortware in the 3.2 update. Cache the keystrokes and send them only when you embed a keyphrase in your product website, and upload them during an "expected" transaction such as an upgrade or content download. Build the reputation for trust and the block of reviews saying "it's never caused me trouble", then cash it in all at once.

Short of human review of the software in question prior to general availability, you're screwed. (Even then you might be, as human review isn't infallible, but it's certainly not useless.) With this in mind, whether you agree that it's worth the hassle/restrictions or not, isn't Apple's AppStore strategy just a little more understandable from an objective point of view?

Maybe it's not ALL about moustache-twirling and staking out new liver donors. Maybe, just maybe, at least part of Apple's "walled garden" motives are benevolent. Maybe it's not a simple question, but a complex one, requiring not simple answers, but complex and rigorous thought. And maybe it's not black-and-white, but shades of gray with the weighting different for every user.

Apple's walled garden does nothing to prevent the kind of malware you described. They don't actually inspect an app's code, they just run it (in an emulator presumably) and see if it does anything they don't like. Getting hidden malicious functionality through the approval process would be a cinch.

Of course it's possible to hide malware in an application and get it into an App Store. However, the value of the single-App-Store approach is demonstrated by the very example you use. The app was only on the store for a few hours. It was removed as soon as it became known that it wasn't as described. And presumably the developer has been permanently barred from the store.

In a more open system, where anyone can run an app store, it would be practically impossible to stop the malware appearing and reappearing.

What's the difference between "malicious" and "beneficial", when it comes to software?

From the user's point of view, the threats are modeled rawther well on the Bitfrost page [laptop.org]. But from a platform owner's (e.g. Apple, Microsoft, Sony, Nintendo) point of view, the threats are anything that would either tarnish the brand or compete with the platform owner.

I gave my sons their own computer when they were in elementary school. At the time, it was somewhat rare and they were excited by it. They had internet access which I vaguely watched... (meaning checking for porn) and all seemed well.

Keep in mind that I had NEVER had problems with pop-ups and malware or any of that before simply because I instinctively knew better as do many people here on slashdot. (Not many of us had to learn the hard way... we pretty much already knew... what? install this program to see the video? WE don't fall for that one... but many do!) So it didn't occur to me that my sons were not yet as skeptical as I.

So yeah... Bonzi buddy. They found this cute thing and installed it and it was fun for them to play with. It told jokes and they could type things in for it to say. Before long, the computer was doing things they didn't tell it to do. I remember the first time my younger son rushed downstairs to tell on his older brother for having naked pictures on the computer screen! The older followed behind closely and explained that they just started appearing out of nowhere! (Pop-ups! I had HEARD about them but never saw them before at the time!)

So I reloaded the machine, let them install Bonzi buddy again, and before long it was happening again. It didn't take me long to realize what Bonzi buddy was up to. The sad part was that Bonzi buddy attracted kids and exploited them right along with the adults.

In short, there's nothing new or revolutionary in your idea. It has been done a lot already.

In fact, Microsoft did that too. They could have secured their OSes from being copied from the very beginning. Instead, they used piracy (free copying) as a means of distribution to choke out the competition. Then, once they achieved the "critical mass" their revealed secret documents spoke of, they started locking their software down more and more. It's not like free copying wasn't a problem from the beginning... it's just that it was also useful in the beginning and stopped being useful once their ends were achieved.

Yeah, and neither to private insurance companies, retailers, etc. Face it -- everyone is going to exploit whatever power they can whenever they can for as long as they can, and only children believe otherwise. Some people are just way better at it than others.

Yes, let's hand over to the government the power to dictate to everyone what they can and can't do with the hardware they bought.

Err, that's not what he said.

What he said was that if I release a piece of software, it's digitally signed so that it can be tracked back to me. If the application does something malicious, I can be identified and am thus on the hook.

This doesn't stop anyone from installing software. All it does is facilitate accountability.
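The accountability model described above - software is signed so it can be traced back to its author - can be sketched with standard public-key signing. This is an illustrative sketch using the JDK's `java.security` API, not Apple's actual code-signing pipeline; the class name is invented:

```java
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sketch of signature-based accountability: a developer signs their app's
// bytes with a private key registered to their identity; anyone holding the
// matching public key can later verify exactly who shipped those bytes.
class DevSigning {
    static byte[] sign(byte[] app, PrivateKey key) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(app);          // hash the application bytes
        return s.sign();        // signature ties the bytes to the developer
    }

    static boolean verify(byte[] app, byte[] sig, PublicKey key) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(app);
        return s.verify(sig);   // fails if the bytes or signature were altered
    }
}
```

The point is that verification fails for tampered bytes, and a valid signature identifies the registered developer - it enables blame, not prevention.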

You sign up for iPhone development and give them your name and address.

And I'm certain that Apple checks to make sure that those names and addresses are completely legit.

Of course, I also believe in the Easter Bunny.

A couple of years ago, I used one of my developer discounts to buy a machine for a co-worker. We had it shipped to his house. For the next six months, when I signed on, my account listed my first name and his last name.

Oh, but you can always look up the info? Here's a copy of Hitchhikers Guide to the Galaxy [apple.com] [Redirects to iTunes]. Go click on "Jeffrey Beyer Web

And I'm certain that Apple checks to make sure that those names and addresses are completely legit.

Why is that so hard to believe?

If you are selling any app, they have to get bank contacts from you, and it cannot be just any bank - they have to support SWIFT codes, which means a pretty large bank. Between the two things Apple has a pretty good lock on who you are.

For free apps they do not require a bank account but they do verify your address.

In addition to the bank checks that the other poster mentioned, you also have to supply them with tax information, and company incorporation documents if applicable. The process took a few weeks for us, and entailed a few phone calls and physical mail in both directions.

Apple certainly knew we were more than a made up name before we were allowed to upload our first app.

I'll open with a disclaimer: most of my smartphone experience and awareness is centered around Android phones. That said, this article is yet another with a standard theme: "Remember, you stupid public, that smartphones are still computers". This is another in a set of articles about people who write phone applications requesting a smorgasbord [wikipedia.org] of permissions, receiving them from the user, and using them maliciously. Put simply, this is another in the formulaic series:

Mystique of Computers * Fear of Malware * Novelty of Phones = Profit

Chris Wysopal, co-founder and technology head at security firm Veracode, which helped the BBC with its project, said smartphones were now at the point the PC was in 1999.

No offense, but Chris Wysopal is an idiot. Modern smartphones run every application in a sandboxed per-application environment with fine-grained permission controls that are, to some degree, opaque to the user. These applications, by a well-defined default, must exist in a central repository managed by a powerful authority and receive realtime user reviews. This is nothing like PCs in 1999 (remember, that was Windows 98). Then again, he's certainly quite biased, as his company [veracode.com] makes a living certifying applications.

All of the information-stealing elements of the spyware program were legitimate functions turned to a nefarious use.

Yes, of course they were. The BBC didn't actually do anything innovative, like find an exploit or break out of the sandbox. They just abused the OS's granted privileges to the fullest extent. Is this actually a problem? Given any set of privileges and any degree of fine-grained control, you can still abuse whatever you're given to the fullest extent.

At least one fundamental thing failed here: the user installed a phone game that requested privileges [android.com] such as:

SEND_SMS: Allows an application to send SMS messages.

INTERNET: Allows applications to open network sockets.

READ_CONTACTS: Allows an application to read the user's contacts data.

READ_OWNER_DATA: Allows an application to read the owner's data.

... to name a few
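The permissions quoted above correspond to `<uses-permission>` entries an Android app declares in its manifest, which is exactly what the user is shown at install time. A minimal sketch (the package name is invented for illustration):

```xml
<!-- Sketch of an AndroidManifest.xml requesting the permissions listed
     above; "com.example.game" is a made-up package name. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.game">
    <uses-permission android:name="android.permission.SEND_SMS" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.READ_OWNER_DATA" />
</manifest>
```

For a single-player game, a manifest like this should be a glaring red flag.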

As the owner and user of the device, it is ultimately your responsibility to determine what software you install on your phone. If you are downloading a single-player game that asks for these kinds of permissions, you had damned well better check out the source of that game. If it's not a company that you are comfortable trusting and you still install it, then you are (frankly) stupid. BBC does, of course, presume that its users are stupid.

But that's the problem... no amount of protection will let stupid people have free access to a computer and remain protected. You have to strip away something from one of these factors... either whittle down free access or reduce the base of stupid users. Better design models only serve to decrease the thresholds required for either.

Is there an inherent issue with those kinds of permissions being available and grantable? Sure, there is! Applications, especially closed-source ones, are effectively black boxes. The permissions that I am presented with at installation-time are, in fact, my only real insight as to what the application is capable of doing. Arguing for a finer grain of control is pointless, though. Regardless of what permissions are grantable, you will never circumvent the fundamental problem that stupid users will blindly install applications. Presenting them with more information will not change that fact.

It is the job of the OS vendor (Apple, Google, RIM, etc.) to declare a set of permissions that reasonably mitigates the dangers of overly-generous grants.

To be fair, the BBC is one company that even a lot of skeptical, careful people would think they could trust. I don't have the app, so I'm not sure how it was listed, but if it said BBC, I could see how people would tend to trust it.

To be fair, the BBC is one company that even a lot of skeptical, careful people would think they could trust. I don't have the app, so I'm not sure how it was listed, but if it said BBC, I could see how people would tend to trust it.

Absolutely, and that is a wonderful part of the system. If BBC actually released this application maliciously under their trusted name, and anyone found out what it was doing, then BBC would face a hailstorm of complaints, bad press, and lost trust. This would almost certainly affect its bottom line.

Users trust BBC precisely because they have a lot to lose by betraying that trust.

As this is Slashdot, I guess I shouldn't be surprised that no one RTFA. This was an app written by a BBC journalist for the express purpose of testing how vulnerable the platform was, so that he could write a story about it. It was never uploaded onto any app store; he only tested it on one single development phone... his own.

If you have to rely on that, the system will not work. Users don't want to, and will not be "educated" to. They want to buy and use something. You can't make users do something they don't want to, any more than force everyone to carefully listen to the flight attendants on an airline explain the safety procedures beforehand.

And frankly, I do not see that as unreasonable.

I like the Android security model with fine-grained permissions, but I do not like how you agree to them all up front at install time.

Instead, when a user opens an app, they should be asked at the time of access to a resource whether it's OK to access that resource. Now, here I'm sure you start to be reminded of Vista UAC and innumerable "Are you sure" dialogs. But I don't mean every time - I mean only once or twice, and then the app is granted that permission permanently.

Yes it means that an app could potentially do something later on after being granted some permission. But it also would block a lot of obviously wrong things from working, like opening a media player and then being asked if it's OK to SMS a big ol' number you do not recognize.
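The "ask at first use, then remember" model proposed above can be sketched as a small gate that prompts the user only the first couple of times a permission is exercised and then caches the decision. This is a platform-independent illustration of the idea - the class and its names are invented, not any real Android or iOS API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch of just-in-time permission prompting: ask the user at
// most MAX_PROMPTS times per permission, then make the decision permanent.
class PermissionGate {
    private static final int MAX_PROMPTS = 2;       // ask at most twice
    private final Map<String, Integer> asked = new HashMap<>();
    private final Map<String, Boolean> decided = new HashMap<>();

    /** Returns true if the app may use the resource. askUser stands in for
     *  the OS dialog; it is only invoked while the decision is undecided. */
    boolean request(String permission, Predicate<String> askUser) {
        Boolean remembered = decided.get(permission);
        if (remembered != null) return remembered;   // permanent, no dialog
        int n = asked.merge(permission, 1, Integer::sum);
        boolean ok = askUser.test(permission);       // show the dialog
        if (!ok || n >= MAX_PROMPTS) {
            decided.put(permission, ok);             // denial, or second grant,
        }                                            // becomes permanent
        return ok;
    }
}
```

A media player calling `request("SEND_SMS", ...)` would surface a visibly out-of-place dialog on first use - which is exactly the "obviously wrong thing" this scheme is meant to catch.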

You mentioned the shortcomings yourself; this wouldn't stop any serious malware author. They would either wait out whatever "trial period" you impose, or find a clever way [computerweekly.com] to masquerade their malice as innocent. With application models like these, you really can't beat around the bush, and solutions that merely try to mitigate will only find their limits probed, explored, and worked around.

If you have to rely on that, the system will not work. Users don't want to, and will not be "educated" to. They want to buy and use something. You can't make users do something they don't want to, any more than force everyone to carefully listen to the flight attendants on an airline explain the safety procedures beforehand.

Education isn't as impossible as you seem to think it is. It is a compromise between the vendors and the users. I'll use

You mentioned the shortcomings yourself; this wouldn't stop any serious malware author.

Just because it cannot stop all attacks does not mean it should stop none. It's still a better solution.

The key to security is defense in depth, and that means improving any one system when you can, because the system as a whole benefits from it. And it provides a much greater tangible awareness from the user about particular points of access to resources, which in turn is inadvertently providing the very education you wanted to give the user in the first place!

Just because it cannot stop all attacks does not mean it should stop none. It's still a better solution.

The key to security is defense in depth, and that means improving any one system when you can, because the system as a whole benefits from it. And it provides a much greater tangible awareness from the user about particular points of access to resources, which in turn is inadvertently providing the very education you wanted to give the user in the first place!

If I create a system with known limitations, someone seeking to exploit that system will take those limitations into account. Your idea serves to change the behavior of the malware, not to inhibit it or diminish its effectiveness. You're effectively placing hurdles in the path of the malware and hoping it gets tired of jumping them. Even worse, you are making your user base jump those same hurdles! Your entire premise is based on the idea that the software (or the malware authors) will get tired before your user base does.

I am taking hurdles that are already there and making the user remove them when it makes sense, instead of making them sign a paper saying they are OK with hurdles located somewhere distant being removed because they are an eyesore.
You are never PLACING hurdles in front of anything. Instead you should only ever be selectively removing system access controls when and where it makes sense.

I see what you're saying, but that methodology has the serious limitation that it annoys the user. Furthermore, the security derived from your implementation is directly proportional to how many times you annoy the user - namely, the threshold that you set. At an extreme (pop-ups every time an SMS gets sent) it is very much more secure than the current implementations, since the user will immediately notice unsolicited SMS messages. However, anything short of that adds basically nothing, as all the malware has to do is stay below whatever threshold you set.

I see what you're saying, but that methodology has the serious limitation in that it annoys the user.

Only very slightly - I have direct experience with the system, since it's what iOS uses to grant apps permission to use location. It works well because you only see the dialog twice; after that it's silent. That system is a great balance between over-tasking the user with questions and simply putting up a dialog that people are going to accept no matter what. And since you see it around the task you are trying to perform, the request makes sense in context.

Accidentally posted this anonymously, actually wanted to spend some karma on this one.

You are completely missing the point. The point is that this is feasible for an application that *does* appear to legitimately need these permissions (e.g. an improved SMS application, of which there are many). There is no way for the user to specify that the only SMS messages that can be sent are the ones that they wish to send, or that the OWNER_DATA permission should only be used to read data required for the application's stated purpose.

You are completely missing the point. The point is that it is feasible for an application that *does* appear to legitimately need these permissions (e.g. an improved SMS application, of which there are many).

Is it? Android lets you share data via SMS through Intents, without your application needing to request SMS privileges (see my last comment to SuperKandall for way too much more information on that). This covers the vast majority of SMS-related use cases.

Granted, there are still applications that may want to send SMS on their own (like Google Voice), but those applications ought to be scrutinized heavily when they request that particular permission. Who is their author? What do they do? Does it make sense for such an application to send SMS on its own?

This is totally a non-story. Man tries to write proof-of-concept malicious phone app. There is so little content in the story that the BBC can easily re-use it again and again without worrying about it losing relevance. Any vaguely competent programmer could easily have done whatever they did (don't bother checking the article - they don't explain anything). The sad fact is, there probably really are thousands of "hackers" out there trying to write malicious apps, and we should all be careful with security.