
Sparrowvsrevolution writes "At the SysCan conference in Taiwan next week, Charlie Miller plans to present a method that exploits a flaw in Apple's code-signing restrictions on iOS devices, the security measure that allows only Apple-approved code to run in an iPhone's or iPad's memory. Using his method, an app can phone home to a remote computer that downloads new unapproved commands onto the device and executes them at will, including stealing the user's photos, reading contacts, making the phone vibrate or play sounds, or otherwise using iOS app functions for malicious ends. Miller created a proof-of-concept app called Instastock that appears to show stock tickers but actually runs commands from his server, and even got it approved by Apple's App Store." Update: 11/08 02:54 GMT by U L: Not unexpectedly, Apple revoked Miller's developer license.
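The "phone home" pattern described in the summary can be sketched in a few lines. This is a hypothetical toy in Python, not Miller's actual ARM-level exploit, and every name in it (fetch_command, COMMANDS, the return strings) is invented for illustration: the shipped app contains only innocuous-looking functions, but a server-supplied string decides which one actually runs after review.

```python
# Toy sketch of a "sleeper app": which capability runs is decided by
# a remote server, not by anything visible in the reviewed code path.

def show_stock_ticker():
    # The only behavior a reviewer would ever see exercised.
    return "AAPL: 400.01"

def read_contacts():
    # A capability the reviewer never sees invoked.
    return ["alice", "bob"]

COMMANDS = {
    "ticker": show_stock_ticker,
    "contacts": read_contacts,
}

def fetch_command(server_response: str):
    # In a real sleeper app this string would arrive over the network,
    # so a static review cannot predict which entry gets dispatched.
    return COMMANDS[server_response]()

print(fetch_command("ticker"))    # looks like a stock app...
print(fetch_command("contacts"))  # ...until the server says otherwise
```

The point of the sketch is that nothing here is "unsigned code" in itself; the maliciousness lives entirely in which approved function the server chooses to trigger.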

But one has to think, if this application was approved, how many other approved applications in the App Store have some form of malicious code or other surreptitious data collection?

It seems the only reason Apple noticed this is because Charlie Miller published it.

This is why Apple's security model is fundamentally flawed: it provides a single point of failure for security. Those of us who work with networks understand that gateway-only security doesn't work, so trusting the gateway to catch everything is a mistake.

This does sound like quite a serious security hole, so I expect it to be patched. Of course, Slashdot will report the patching of this hole as "Apple patches iOS to prevent jailbreaking", just like the last time they closed a security vulnerability that was also used to provide jailbreaking ability.

If they don't close the hole, Slashdot will crow about how "insecure" iOS is.

I'm waiting on a vendor coming up with a firewall program for your phone - think ZoneAlarm, where you are prompted to allow or block when apps request 'outside access'.
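A rough sketch of what that prompt-to-allow idea could look like, assuming you could interpose on an app's sockets at all (in-process monkey-patching works on a desktop OS; a real per-app firewall would hook at the OS level). The function names and the example policy below are invented for illustration:

```python
# ZoneAlarm-style gate: every outbound connection must be approved
# by a callback standing in for the "Allow / Block" user prompt.
import socket

def install_guard(ask):
    """Patch socket.socket.connect so ask(host, port) must approve it."""
    real_connect = socket.socket.connect

    def guarded_connect(self, address):
        host, port = address[0], address[1]
        if not ask(host, port):
            raise PermissionError(f"blocked connection to {host}:{port}")
        return real_connect(self, address)

    socket.socket.connect = guarded_connect

# Example policy standing in for the user's answer to the prompt:
# allow only one whitelisted update host, block everything else.
def ask(host, port):
    return host == "update.example.com"

install_guard(ask)
```

The decision point (a callback per connection attempt) is the same one a real firewall vendor would expose; only the hooking mechanism would differ.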

And if you make a version for the iPhone, it won't be approved. ;)

(Of course, it is possible for iPhone users to install "disapproved" apps from other sources. But only a few knowledgeable people will do that, so you certainly won't make much money from your app that way.)

This isn't really news...I imagine this 'flaw' will be found in every version of iOS until it dies. Not only that, but we should be suspicious of app producers...they say "only install apps from trusted publishers"...yeah...ok...so, no one? If I did that, I'd have only the pre-loaded apps.

Actually, it is. The way these things get fixed is by making people aware of the problem. No software is absolutely bug-free. As much as some people would like to stick their fingers in their ears and say "la-la-la, not a problem...", there are just as many of us who would like to fix the issue. So, yes, this is news.

Actually it was a flaw introduced last year when Apple relaxed restrictions, apparently to increase browser speed:

From TFA:

Miller became suspicious of a possible flaw in the code signing of Apple's mobile devices with the release of iOS 4.3 early last year. To increase the speed of the phone's browser, Miller noticed, Apple allowed javascript code from the Web to run on a much deeper level in the device's memory than it had in previous versions of the operating system.

Not much like that at all. Those were kernel level drivers that executed with system level authority. This is something that would execute in the user space and it only does so because they reduced some restrictions allowing it to execute in a lower memory space, but it certainly doesn't have root authority.

True. But the alternative to that is untrusted computing - i.e., any app you install gets more control over the device than you have.

The vast majority of users are not even remotely capable of providing a higher level of trust than a competent third party. This is akin to representing yourself in court instead of hiring a lawyer who is an expert in the laws and defence techniques that apply to your case. Step and repeat for each app you install.

Most of the article was quite puzzling, as this is nothing new or remarkable. It's really quite simple to have your application execute stuff it downloads.

If I can reverse-engineer the uninformative article a little, I would hazard a guess to say that he's found a way of bypassing the NX bit protection using Safari as an attack vector. This means that he would be able to inject arbitrary ARM code that wasn't present on the device at review time, meaning that he could execute code against APIs that the application wasn't originally using (but which are available for applications to use legitimately).

As an attack it sounds real enough; in real-world terms, however, Apple's review process is leaky enough that this would slip through anyway. Their review consists of some trivial automated checks, and everything else is handled by a human reviewer who just looks at the application from an end-user's point of view. During the submission process you have to include instructions on how to trigger any Easter eggs in your application, because they wouldn't otherwise find them.

If I can reverse-engineer the uninformative article a little, I would hazard a guess to say that he's found a way of bypassing the NX bit protection using Safari as an attack vector. This means that he would be able to inject arbitrary ARM code that wasn't present on the device at review time, meaning that he could execute code against APIs that the application wasn't originally using (but which are available for applications to use legitimately).

Nope, he wrote a sleeper app (basically malware with trojan functionality) and put it up on the App Store. Using the "backdoor" in the app, he could download, install and run unsigned code. Apps in the App Store run binary code. You don't need to inject code anywhere into a browser.

Also, what he did was EXPLICITLY AGAINST the developer agreement he made when he became an Apple developer. He basically proved that you could write code with trojan functionality that violated developer agreements, lie about the functionality to Apple, and get it published on the App Store.

He basically proved that you could write code with trojan functionality that violated developer agreements, lie about the functionality to Apple, and get it published on the App Store. Apple found out and took his App down and then took away his developer license.

So iOS is secure against developers that tell Apple about the malware in their apps. That gives me a really warm, fuzzy feeling...

So iOS is secure against developers that tell Apple about the malware in their apps. That gives me a really warm, fuzzy feeling...

Yes... however, if Apple finds malware in an app, it is pulled from the App Store and the developer is banned. But anything you install could potentially be malware. Then again, I'd venture to say malicious developers can take advantage of *ANY* current software platform once you've installed their software.

Yes, when a "white hat hacker" like this Miller guy shows up and demos a security hole to Apple, Apple's response is to pull his app and ban him.

This is supposed to reassure us of iOS's security exactly how?

The intended effect seems to be to "send a message" to others who may be playing with such things. And that message is "Don't tell us about security problems you find; we don't want to hear about them. Go sell the info to interested buyers, like any self-respecting businessman would do."

So long as iOS apps are developed using a language that allows pointer access, including function pointers, people are going to find and exploit bugs like this. It's actually a really interesting parallel to homebrew development on Windows Phone (yes, I have one, in addition to a few Linux devices - no iOS ones though): you can do native code on WP7, but you have to use COM to access it. Microsoft prohibits ISVs from using the COM import API from C#/VB in marketplace apps, so they can very easily block this kind of thing by just checking for references to a few specific APIs (they also block the use of C# "unsafe" pointers).

Now, I'm not exactly advocating that Apple needs to redesign their entire application model. However, the fact remains that, the way they do it, it's almost impossible to really verify that any given app isn't doing something like this, short of code-reviewing the source of every submission and rejecting any that are too hard to understand (completely impractical). It means they *are* vulnerable to malware, though - even from the "trustworthy" marketplace.
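The point about pointer access and late-bound calls defeating static review can be shown without any ARM code. Here is a toy Python analogue of the dlsym()/NSSelectorFromString trick; the class, methods, and strings are all invented for illustration:

```python
# Why reviewing static call sites can't bound what an app calls at run
# time: the target's name can be assembled from data the reviewer never sees.

class DeviceAPI:
    """Stand-in for a platform API surface (names are made up)."""
    def show_ui(self):
        return "drawing UI"
    def dump_photos(self):
        # The "forbidden" API: note there is no static call to it below.
        return "uploading photos"

def run(api, payload: str):
    # `payload` would arrive from the network at run time, so grepping
    # this source (or the shipped binary) for a dump_photos call finds nothing.
    method = getattr(api, payload)
    return method()

api = DeviceAPI()
print(run(api, "show_ui"))           # what the reviewer sees exercised
print(run(api, "dump" + "_photos"))  # what the server asks for later
```

Blocking this requires either forbidding the dynamic-lookup primitive itself (as the comment says Microsoft does with COM import and `unsafe` on WP7) or enforcing the boundary at run time rather than at review time.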

You know, your post would have a lot more credibility if you could spell "virtualization" correctly.

I was making a point about the validity, or lack thereof, of API-based trust boundaries (you know, what the whole article was about). It's entirely possible to make an API-based trust boundary in a language that doesn't support pointers. It's not possible in a language that does. You need something else to enforce your trust boundaries, or you need to accept that they will be vulnerable. Apple is taking the latter approach.

Did or did you not notice that the whole point of what Charlie Miller did was that the sandbox was breached, despite ASLR, and he was able to do it from an app allowed into the walled "solution"?

Please explain how an app store that is unable to detect malware but *claims* to be inherently secure is actually more secure? If anything, I see it as the opposite - it will delude people (like yourself) into thinking it's safe, when it's actually not. Android, by comparison, is acknowledged to have malware - meaning people need to be more cautious about the apps they install.

I think the numbers of actual malware on the two platforms speak for themselves. And in iOS' case, Apple-haters certainly can't claim "security through obscurity" or "lack-of-marketshare" excuses.

And I, for one, would rather have a guard who repels 99.99999999999999% of enemies, than me having to stay up every night with a shotgun in my hand, protecting my home and my loved ones.

Window screens don't stop all insects; but take them away, and pretty soon, all you'll have time to do all day, every day (and every night) is swat flies. Which would you prefer: The occasional gnat in your beer, or having flies crawling all over your dinner, every single day?

The app in question has already been pulled from the App Store. And I'm quite sure the flaw that allows executing code via some hole in Safari will be fixed very soon. iOS 5 supports delta updates now, so Apple can (and will) come with small updates much more often than in the past.

I'm still torn about security in such appliances. Ideally the user should fully own the device as well as all code running on it, but in practice, users being what they are, having a central control instance may very well be the lesser evil.

The app in question has already been pulled from the App Store. And I'm quite sure the flaw that allows executing code via some hole in Safari will be fixed very soon. iOS 5 supports delta updates now, so Apple can (and will) come with small updates much more often than in the past.

Unless he's figured out how to sign apps such that the OS thinks they are from Apple, and aren't. Then Apple would have to revamp their code signing system.

The app in question has already been pulled from the App Store. And I'm quite sure the flaw that allows executing code via some hole in Safari will be fixed very soon. iOS 5 supports delta updates now, so Apple can (and will) come with small updates much more often than in the past.

Unless he's figured out how to sign apps such that the OS thinks they are from Apple, and aren't. Then Apple would have to revamp their code signing system.

He clearly stated that he went AROUND the code signing requirement; NOT that he "broke" the signing process itself.

I'm still torn about security in such appliances. Ideally the user should fully own the device as well as all code running on it, but in practice, users being what they are, having a central control instance may very well be the lesser evil.

Let the end user decide whether they want the central control, or not. Just make sure that status can't be altered by other than the actual user.

With digital devices filling every part of my life now the very thought of being personally responsible for every bit of code running on every one of them makes me shudder. Life is just too short for that.

Do I trust Apple? Not very much. Do I trust Apple more than myself when I haven't got the time to spend more than a few minutes a day to care for each device (and its software) that I own and use? Probably, yes. Sad but true.

Is there someone else you trust? How about having a trust choice? And for those of us that do trust ourselves, "self" should be one of the choices.

I do trust Apple with one thing... that they will make business decisions that they believe will boost their bottom line. That is the only thing I trust them to do.

In theory, that's how it should work, by design. But when there's a bug in the code somewhere, that can provide a means to go around the checks. Too bad Apple inherited Steve Jobs' arrogance and refuses to work with security researchers.

It's not more secure (Charlie Miller keeps demonstrating that), but for the typical user (who doesn't know enough about security to judge an app), having a vetting/approval process such as Apple's still offers a safer environment than running completely unvetted apps (such as on the Android stores).

Except, it gives a false sense of security. With Android (or PC) apps, I know that there's a risk of malware, so I'm cautious. With iOS - well, I don't have one, but I imagine there are a lot of people who think "it *can't* have malware, Apple checks everything!" and therefore completely trust anything in the app store.

The purpose of work like this is to demonstrate that Apple has misled those people; you can't simply trust everything. The only thing worse than an obviously untrustworthy app source is an untrustworthy app source that *appears* to be trustworthy.

Which makes absolutely no difference to the 95+% users who don't know enough about security to make such an evaluation. No matter how many times users get burned, if they don't understand security, most of them will make the same mistake next time simply because they don't know how to evaluate an app for security. And for those who do know about security, it doesn't stop them from exercising caution. Therefore, the "false sense of security" actually makes no difference.

So someone needs to watch out for that 95+%. Apple and Miller are both trying to do that. One of those two is even willing to cooperate with the other to that end goal. The other appears to be on the track to dishonesty over the matter.

Agreed. I'm a big fan of CM, and the rest of the ethical security researchers.

Apple's reaction to security vulnerabilities is pretty poor. I have personal experience with that since I reported a vulnerability in QT for Windows (CVE-2010-0530 [apple.com]) that they took over a year to fix, and didn't fix it properly when they did.

Apple isn't the only vendor to have such poor policies, just one of the most visible.

I think by coming here you ensured that you are talking to the 5% that do care...

Which has absolutely nothing to do with my statement. My statement is about all users. That's the problem with most of the users on here: they can't see that most users aren't interested in the same things they are.

Except, it gives a false sense of security. With Android (or PC) apps, I know that there's a risk of malware, so I'm cautious.

And why do you imagine your caution is better than someone whose job is vetting apps? For example, what automated tools do you have for looking for suspicious API calls? Do you, like the app store reviewers, have test devices that don't contain your actual live data? Do you, like the app store, find out that the developer of the app is real enough to have a tax code?

It mostly comes down to using either apps from big names that are well-known and have a reputation to uphold, or using open-source apps. If I need an app that does neither, I can run it through a proxy and monitor what it connects to via my PC. Granted, the first approach isn't guaranteed, the second isn't guaranteed unless I both check the source and compile it myself to check against the version in the app package, and the third is a hassle. It's possible, though - and I guarantee that the folks at Apple don't have the time or people to properly verify the apps either.
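The "run it through a proxy and watch" step can be approximated even without a full proxy by logging every hostname the code under test tries to resolve. This is a minimal in-process stand-in, not a substitute for a real external HTTP proxy, and the names here are invented:

```python
# Record every hostname the app-under-test resolves, then eyeball the
# list afterwards for anything that isn't the vendor's own domain.
import socket

seen_hosts = []
_real_getaddrinfo = socket.getaddrinfo

def logging_getaddrinfo(host, *args, **kwargs):
    # Log the name being resolved, then delegate to the real resolver.
    seen_hosts.append(host)
    return _real_getaddrinfo(host, *args, **kwargs)

socket.getaddrinfo = logging_getaddrinfo
```

The obvious limitation (which cuts in favor of the parent's point) is that an app can trivially hide traffic from this kind of after-the-fact inspection, e.g. by only phoning home after a delay or on a server trigger.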

I guarantee that the folks at Apple don't have the time or people to properly verify the apps either, nor do they seem to have the personal incentive to do it right.

I know better than your "guarantee". The app store review process found a crashing bug in one of my apps that neither I nor my partner had ever come across in our testing. It took me two days to reproduce it myself. I know from what had to happen to trigger that bug that either the reviewer gave it a very thorough evaluation, or they have a fuzzer that randomly operates the UI of the app for an extended period.

Also interesting is that you earlier pointed out the hazards of trusting software, and here you're willing to trust your own tools and spot checks to do the vetting.

Take the TSA as an analogy. One of their many jobs is to detect things like knives, guns, explosives and other nasty things being brought aboard airplanes. And they are pretty successful when people have forgotten that they have one of the forbidden items in their luggage. But if you make a bit of an effort to hide these things, they seem to have a poor success rate for detecting them.

Generally, most people have a pretty low opinion of the TSA's "Security Theater." It doesn't really make anyone safer.

Flawed analogy. Forget that the TSA is searching for weapons when they need to be watching for suspicious behavior. Forget that they're irradiating passengers and groping others for their illusion of security.

The fundamental problem with the analogy is that air passengers know to watch for weapons, suspicious behavior, etc. In fact, passengers are the only ones who have actually caught any attempts at terrorism in the last 10 years, not the TSA. Passengers can still do something to detect and stop an attack.

TSA's job is to prevent passengers from bringing weapons onto the airplane. They have some successes [nypost.com] and notable failures [judicialwatch.org] in doing this. Apple's job is to prevent malicious code from running on our iPhones and iPads and I'm sure they have some successes and failures.

What you're saying is that it's okay that the TSA might fail every now and again because the passengers will spot the malicious person and prevent him from performing his dastardly task. Of course, passengers [cnn.com] tend [huffingtonpost.com]

TSA's job is to prevent passengers from bringing weapons onto the airplane. They have some successes and notable failures in doing this.

No, the TSA's job is to stop terrorists from hijacking planes, not to keep guns off planes. If half the passengers had guns, the terrorists wouldn't try hijacking a plane. And that's the fundamental problem with the TSA, their focus is on passengers as threats, rather than on the threat to the passengers. That's like saying locks are to keep you from opening a door. No, the lock is to protect what's behind the door, the door and lock are just one mechanism of providing protection.

What you're saying is that it's okay that the TSA might fail every now and again because the passengers will spot the malicious person and prevent him from performing his dastardly task.

It's not more secure (Charlie Miller keeps demonstrating that), but for the typical user (who doesn't know enough about security to judge an app), having a vetting/approval process such as Apple's still offers a safer environment than running completely unvetted apps (such as on the Android stores).

Actually it's less safe.

Users in the "walled garden" have a false sense of security, the security is breached and the users still unquestioningly trust everything from a now untrustworthy source.

Apple has a vetting process that doesn't work. How is that different to an unvetted source?

So essentially, with Android you have unvetted applications, with Apple you have unvetted applications and a user base which is actively ignorant of security issues. Despite the rumours to the contrary, there has been no great Android outbreak precisely because Android users are aware of their own security.

So essentially, with Android you have unvetted applications, with Apple you have unvetted applications

Except that Apple do do vetting, and thus do have vetted apps.

You claim it doesn't work. The lesson of four years of the Apple App Store is that it does work.

Despite the rumours to the contrary, there has been no great Android outbreak precisely because Android users are aware of their own security.

The average Android user is not like you. The average Android user is the average phone user. They're not geeks. They don't understand security. They are exactly the same people that load animated cursors, smily packages and screensavers on their Windows PCs.

First, clearly you didn't read my reply [slashdot.org] to the previous commenter who used the "false sense of security" fallacy. Actually, the "false sense of security" argument can be many fallacies, linked below:

Appeal to belief [nizkor.org]. e.g. Many people claim it gives a false sense of security, therefore, it must. Show that it actually has that effect before you use it as your premise. A hypothetical premise only gives a hypothetical result.

What has been broken here is not the code-signing apparatus per se but another part of the Apple security regimen; it appears this doesn't affect the need to have a valid initial certification to begin with. If the signing mechanism were defeated, that would conceivably allow anyone and his dog to upload and sell apps on the store without registering as a developer. But it isn't. So, in fact, the only people who could leverage this issue for nefarious purposes are people who are already working in the marketplace as registered developers.

I think your faith in iOS developers is a little misplaced. I'd just like to provide an app of value to my customers, but Apple has no process in place to vet who gets to submit an app. They just let any entity that pays the $100 submit an app. That's hardly a barrier to the evil miscreants of the world.

I agree that the article was not entirely clear on how code signing is broken. This approach seems to be the ability to sideload new code - code that hasn't been signed and hasn't gone through any review.

Apple's iPhones and iPads have remained malware-free thanks mostly to the company's puritanical attitude toward its App Store: Nothing even vaguely sinful gets in, and nothing from outside the App Store gets downloaded to an iOS gadget.

WTF? Are you serious? Games and apps download data external to the App Store all the time. e.g.: The myFish3D app downloads new 3D models for fish and ornaments from its home site, uselessiphonestuff.com.

The Doctor pwns OS X, he keeps his license. The Doctor pwns iOS via Safari, he keeps his license. The Doctor pwns Apple's walled garden, and they take his license.

He was grandstanding. He could have EASILY contacted Apple on the down-low; but Noooooo! He had to grandstand, thus alerting the rest of the planet to the exploit BEFORE Apple had a chance to close the vulnerability.

He got exactly what he deserved (except that Apple should sue him into oblivion, and have him prosecuted for unauthorized access to a computer system, too).

In other words, Miller should thank his lucky stars that a company with a bigger legal department than most U.S. states have, and a nearly bottomless war chest, settled for revoking his license.

Sure, he may have notified them. But did he also tell them that he seeded the App Store with a trojan, which gives him remote access to exploit the flaw, and which is also available to all iOS users for download?

If Apple ignored him, he could have very simply exposed the flaw publicly to shame them. The moment that he decided to violate policies, subvert the vetting process and inject into the App Store an app exploiting the flaw--at that moment, he made his bed and now he must sleep in it.

Booting Charlie Miller out of the game is also a completely retarded move. Making it harder for him to find vulnerabilities doesn't mean they'll disappear. It just increases the chance that they'll be found by someone else, and that means a greater risk of the "discoverer" being a black hat who won't tell Apple about it and will just abuse it.

He was booted probably for subverting the vetting process by submitting an app exploiting the flaw publicly, where it could be downloaded by millions of people. The fact that he also notified Apple doesn't change that.

Yes, it is, actually. How do you implement an API that guarantees that you go through that API to get access to something? It doesn't matter if you build your lovely "you don't get permission to anything unless the gatekeeper agrees" system if someone can simply say "well, I'm ignoring the gatekeeper and jumping through this hole in the wall". That's what a security flaw actually is. ;)

Yeah, this isn't a security issue, it's just something that is possible. It also violates the developer agreement. All this 'news' is doing is pushing Apple to be even more restrictive with their already barbed-wire-enclosed garden...