Posted by kdawson on Monday May 17, 2010 @11:31PM
from the so-simple-a-kiddie-could-do-it dept.

Trailrunner7 sends along a ThreatPost.com piece that begins "The pace of innovation on mobile phones and other smart wireless devices has accelerated greatly in the last few years. ... But now the attackers are beginning to outstrip the good guys on mobile platforms, developing innovative new attacks and methods for stealing data that rival anything seen on the desktop, experts say. This particular attack vector — introducing malicious or Trojaned applications into mobile app stores — has the potential to become a very serious problem, researchers say. Tyler Shields, a security researcher at Veracode who developed a proof-of-concept spyware application for the BlackBerry earlier this year, said that the way app stores are set up and their relative lack of safeguards makes them soft targets for attackers. ... 'There are extremely technical approaches like the OS attacks, but that stuff is much harder to do,' Shields said. 'From the attacker's standpoint, it's too much effort when you can just drop something into the app store. It comes down to effort versus reward. The spyware Trojan approach will be the future of crime. Why spend time popping boxes when you can get the users to own the boxes themselves? If you couple that with custom Trojans and the research I've done, it's super scary.'"

I poked around the internets a bit and only found a mention or two of iPhone trojans. These trojans were ONLY on jailbroken iPhones, not un-jailbroken ones using the iPhone App Store. As far as I know there have never been any "banker" trojans in the iPhone App Store.

This article seems to be riding the coattails of the iPhone's popularity by throwing it in the mix with other platforms that have had "banker" trojans. If they have evidence of an iPhone App Store trojan I'd love for them to directly mention it rather than being vague and doing a lot of hand-waving.

Any app on the BlackBerry requires user intervention before it's allowed to fetch URLs, open raw sockets, read email, dial the phone, get your location, manipulate the address book, or do any other damned thing. And 90% of the APIs require the developer to be vetted through the app signing process. It actually seems much less vulnerable to trojans and spyware than a PC.

I was testing SSH clients for the iPhone, so I bought about a half dozen. One of them flat out didn't work (I filled out the problem form; no response). Another didn't allow you to change the port to something other than 22. Only one app allowed you to import a key, and only one (a different one) allowed you to have more than one key. In other words, one was completely broken, one was arguably missing basic functionality, and all were missing common functionality. The quality was abysmal.

I also tried to contact the developers; one had a website listed that was several years out of date and had no contact info (no names, emails, phone numbers, nothing). Not exactly inspiring trust.

Based on this I can simply say I will not use them: partly because they don't work terribly well, but mostly because who knows what they do in the background. Perhaps every 50th connection, assuming it's a Tuesday, they send your connection details (user name, password, IP, etc.) in an outgoing packet to the bad guy who wrote the app.
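To be clear about how little it would take, here's a minimal sketch of the kind of trigger logic I'm describing. Everything in it (the function name, the 50-connection counter, the Tuesday check) is my own invention for illustration, not code from any real app:

```c
#include <stdbool.h>

/* Hypothetical trigger for the scenario above: only phone home on
 * every 50th connection, and only on Tuesdays, so the malicious
 * traffic is rare enough to escape casual notice.  This is an
 * illustration, not code from any actual SSH client. */
static bool should_exfiltrate(unsigned connection_count, int weekday)
{
    /* weekday follows struct tm convention: 0 = Sunday, 2 = Tuesday */
    return (connection_count % 50 == 0) && (weekday == 2);
}
```

A handful of lines like this, buried in a few thousand lines of legitimate SSH code in a binary-only app, would be essentially invisible to the user.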

I actually regret going with the iPhone (not that Android is much better in this respect). I'm so used to open source software that having to use a closed source application from a basically unknown source (as opposed to someone who is at least known and ideally has a decent reputation they want to protect) is foreign to me and, to be honest, a deal breaker.

Android's Market tells you exactly what an app can and can't access before you install it. In order to access certain classes of API, the app has to declare that access in its manifest file, or the APIs aren't available. Examples include location (there are two tiers: rough network-based, and precise GPS-based), phone (again, two tiers: phone state [usually to do things like pause music when the phone rings], and the ability to place/receive calls), network access, storage (read or modify SD card contents), SMS, camera access, contact data, calendar, email, phone sleep functions, and so forth.
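For illustration, here's a minimal AndroidManifest.xml sketch (the package name is made up) showing how those declarations look; omit a line and the corresponding API is simply unavailable to the app:

```xml
<!-- Minimal manifest sketch; package name is hypothetical. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.demo">
    <!-- rough, network-based location -->
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <!-- precise, GPS-based location -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <!-- phone state (e.g. pause music when a call comes in) -->
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
    <!-- network access -->
    <uses-permission android:name="android.permission.INTERNET" />
    <!-- contact data: flagged prominently at install time -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <!-- vibrate: granted but not brought to the user's attention -->
    <uses-permission android:name="android.permission.VIBRATE" />
</manifest>
```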

Certain accesses are considered sensitive, and will be specifically brought to the user's attention before they install the app. Other controls (such as access to the phone's vibrate function) aren't, and although you can look to see if the app uses those functions, you're not bothered to verify that this is ok first.

So if an app wanted to poach your phone number, etc. on Android, it would basically have to advertise to you that it's doing so or it wouldn't have that level of access.

The Application Sandbox

For security reasons, iPhone OS restricts an application (including its preferences and data) to a unique location in the file system. This restriction is part of the security feature known as the application’s “sandbox.” The sandbox is a set of fine-grained controls limiting an application’s access to files, preferences, network resources, hardware, and so on. In iPhone OS, an application and its data reside in a secure location that no other application can access.

When an application is installed, the system computes a unique opaque identifier for the application. Using a root application directory and this identifier, the system constructs a path to the application’s home directory. Thus an application’s home directory could be depicted as having the following structure:

/ApplicationRoot/ApplicationID/

During the installation process, the system creates the application’s home directory and several key subdirectories, configures the application sandbox, and copies the application bundle to the home directory. The use of a unique location for each application and its data simplifies backup-and-restore operations, application updates, and uninstallation. For more information about the application-specific directories created for each application and about application updates and backup-and-restore operations, see “File and Data Management.”

Important: The sandbox limits the damage an attacker can cause to other applications and to the system, but it cannot prevent attacks from happening. In other words, the sandbox does not protect your application from direct attacks by malicious entities. For example, if there is an exploitable buffer overflow in your input-handling code and you fail to validate user input, an attacker might still be able to crash your program or use it to execute the attacker’s code.
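For anyone who hasn't seen the bug class Apple's note is describing, here's a minimal C illustration (not from any real application) of an input-handling routine with an exploitable overflow, next to a version that validates the input length first:

```c
#include <stdio.h>
#include <string.h>

/* Classic instance of the bug: copying attacker-controlled input into
 * a fixed-size stack buffer with no length check.  Input longer than
 * 15 bytes overflows buf and can corrupt the stack. */
void handle_input_unsafe(const char *input)
{
    char buf[16];
    strcpy(buf, input);             /* no length check */
    printf("got: %s\n", buf);
}

/* The fix the note is asking for: validate the input length before
 * writing (or use a bounded copy).  Returns -1 on oversized input. */
int handle_input_safe(const char *input, char *buf, size_t buflen)
{
    if (strlen(input) >= buflen)
        return -1;                  /* reject oversized input */
    memcpy(buf, input, strlen(input) + 1);
    return 0;
}
```

The sandbox contains the damage such a bug can do to the rest of the system, but as the note says, it does nothing to stop the overflow itself.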

See also the protections around location, camera, microphone, address book access, and network interfaces that "let users know in simple words what an application will do."

Even though I've already abandoned Apple, their bet is that not enough people will do the same for them to lose their clout. The industry as a whole is damaged as a result. Further, it sets the precedent that a software company can dictate what other software you run on the same device for business reasons rather than technical ones (i.e., we're not talking software incompatibility, we're talking rejection because they say so). Apple is the first; if they succeed, you can guarantee that other companies will look to shut out their competition simply by refusing to let you run the competition's software. The whole thing creates an atmosphere of anti-competitiveness.

You're actually two decades late. Nintendo did this on the NES back in the '80s, with a lock-out chip [wikipedia.org]. Only Nintendo-approved (and licensed) software could be loaded and run, at least without 'jailbreaking' the cartridge to circumvent this. Note: the world of open environments has not collapsed yet.

That said, we're talking about a cell phone, which never had the ability to run user software before anyway. If they want to do the same thing on a PC, then I would begin to worry.

It doesn't matter if I do it; if it's an important enough piece of software, somebody has. And if it's really important, more than a few somebodies. And if it's really really important, I can pay somebody to do it.

I'd like to introduce you to an important, relevant psychological effect known as the bystander effect [about.com]. The more important something public is, the GREATER the chance that no one will take care of it, because everyone just assumes "it's so important that someone must have taken care of it."

I'm not saying that open source is insecure, just that you can't automatically assume that it IS secure. Unless you personally look at the code or pay someone trusted to do so, you have to assume that it isn't secure.

The "noob" here is the person that blindly trusts other people to make sure everything is secure.

There was an app for the iPhone that billed itself as a contact backup application. The company took that information and used it to spam the contacts of anybody who had used the application. They may not have lied about what information the app was accessing, but they were unscrupulous with what they did with it. I'd call that malware.