Find A Vulnerability In Apple Software; Lose Your License As An Apple Developer

from the kill-the-messenger dept

It appears that Apple is the latest company to take a "kill the messenger" approach to security vulnerabilities. Hours after security researcher Charlie Miller found a huge vulnerability in iOS, which would allow malicious software to be installed on iOS devices, Apple responded by taking away his developer's license.

The obvious implication: don't search for security vulnerabilities in Apple products, and if you do find them, keep them to yourself.

First off, here's Miller explaining the security hole:

To be fair, Miller did get Apple to approve an app that he was using to demo the security flaw. However, kicking him out of its developer program is exactly the wrong response. Miller clearly was not looking to use the code maliciously -- he was just demonstrating a problem with Apple's system. In other words, he was helping Apple become more secure, and it punished him for it. The message seems to be that Apple doesn't want you to help make its system more secure. Instead, it would rather let the malicious hackers run wild. As Miller noted to Andy Greenberg at Forbes (the link above):

“I’m mad,” he says. “I report bugs to them all the time. Being part of the developer program helps me do that. They’re hurting themselves, and making my life harder.”

And, no, this is not a case where he went public first either. He told Apple about this particular bug back on October 14th. Either way, this seems like a really brain-dead move by Apple. It's only going to make Apple's systems less secure when it punishes the folks who tell it about security vulnerabilities.

False Blame

> The obvious implication: don't search for security
> vulnerabilities in Apple products, and if you do find
> them, keep them to yourself.

This is not at all the proper implication. Miller, the author, was not removed from the developer program for finding and reporting the bug. He was removed for knowingly creating and uploading an application to the App Store that *exploited* the bug! The application was sitting in the App Store for over a month.

Although unlikely, someone else could have downloaded the app, found the vulnerability and maliciously exploited it.

He broke the rules to which he agreed when signing up to the developer program. He was removed from the program as a result of that.

The true "obvious implication" to draw from this is: if you find a vulnerability, report it through the appropriate channels, and create something that demonstrates the exploit if you so desire -- but don't upload an app you know breaks the rules to the App Store.

Re: Re: False Blame

Yep, that's exactly what to draw from this.
Monkey, you need a dose of reality. Just because you can do something, doesn't mean it's a good idea to do it. Lots of other companies (Sun, Microsoft, etc) have already found this out. If you don't pay attention to the bug that's reported, and you don't give any feedback, security guys will force your hand by creating an app that leverages the flaw. They've been doing it for years. This is Apples first time through the security line, because this is the first time they have had enough users to make it profitable to hackers. We're about to find out how virus proof Apple isn't.

Re: Re: Re: False Blame

Your timeline of what happened is incorrect, Mike.

The author did not report the bug and wait. He didn't tell Apple he found an exploitation and has an app waiting to demonstrate it. He didn't upload the app to the App Store after waiting a week, month... or however long.

The reality is Miller found an exploit, created an app, and uploaded it to the App Store. After that, he made public that he had found an exploit and demonstrated it using the publicly available app (which had been there for over a month).

There was no bug reported. There was no opportunity to force the corporate hand. What you don't do is exploit that bug in the production sandbox first.

Did we not read the last paragraph in my original post? The part that points out that one *SHOULD* report bugs and create something that demonstrates the exploit?

Re: Re: Re: Re: False Blame

"And, no, this is not a case where he went public first either. He told Apple about this particular bug back on October 14th."

I think this points out how wrong you are. He told Apple first, then he went public with it. It seems reasonable to me. They don't listen otherwise. Tell them first and then make them take action by telling everyone else. Unfortunately, Apple acted foolishly in response. I think Apple doesn't want people to know that their products aren't as secure as they claim they are. Lack of confidence in the brand can be costly.

Re: Re: Re: Re: Re: Re: False Blame

Yes, and he didn't go public with this information until Apple was informed. How was he going to test and demo the vulnerability without replicating it? The only way to prove the vulnerability is to actually inject the malicious app into the App Store. You're just trying to invent a way to exonerate Apple and blame Miller.

Re: False Blame

"...Someone else could have downloaded the app, found the vulnerability and maliciously exploited it."

Are you suggesting that a miscreant could have discovered a vulnerability by downloading an app with very simple behavior, decompiling it, then sifting through the code looking for Easter eggs? Anyone with the skill to do that could have found the vulnerability far more easily by studying the code-signing protocol, the way Miller did.

Re: Re: False Blame

Nice use of ellipsis -- it's like a movie quote! Looking beyond the omission of "although unlikely" in the original sentence, your analysis of the level of effort to discover such an item is over-complicated.

Understanding the nature of the vulnerability (in hindsight) illustrates that it could have been discovered by very simple means, however unlikely it is that an individual would pick this particular app.

The discussions over at Gizmodo on this story are at least somewhat insightful from a technical point of view, instead of people just running at it from a "big company keeping down the little guy" point of view.

Apple has a sandbox called the "App Store". They've made rules for playing in the sandbox, like "don't piss in the sandbox". If you piss in the sandbox, you're not allowed to play in the sandbox for a little while.

> Apple's actions may be defensible legally, but only
> legally.

You're stretching the notion of "legally" a bit there. We're talking about a private company here. If Apple wanted to refuse an app because they didn't like the color scheme, they could. They don't have to have a reason to reject an app from the store -- they just can, because they want to.

This isn't a moral question. There are rules set up for developers who wish to participate in the App Store. Miller *BROKE* those rules!

Should Apple be working their butts off to fix this? Yes.

Should they have fixed it sooner? Dunno. Maybe they've been trying since it was discovered. Maybe they've been busy playing table tennis instead.

Could Apple have looked the other way? Could they have punished him a little less? Could they still reverse or revise the decision? Yes. Yes. Yes.

Do they need to? No. A developer knowingly introduced an app into the store that exploited a security flaw, and was punished according to the agreements he signed.

Re: Re: Re: False Blame

"Understanding the nature of what the vulnerability was (in hindsight) illustrates that it could have been discovered by very simple means. As unlikely as the an individual picking this app is."

If I'm parsing this correctly, you're saying that once the app was in the store, the vulnerability was as likely to be discovered by someone picking the app (from all the apps in the store), decompiling it, analyzing the results and discovering the exploit, as by studying iOS. That is absurd. I have lost count of the times I've heard of an independent researcher discovering a hole in a large, supposedly secure IT system; I have never heard of someone discovering a hole by deconstructing an app or other published software which shows no sign of malicious behavior.

(As for the rest of your argument, I really can't make any sense of it. You seem to be agreeing with me, then claiming victory.)

Re: Re: Re: Re: False Blame

Your analysis of the statement is incorrect. I am saying that decompiling the app is not at all a necessary step, because it is not. But the likelihood of someone finding that particular app and discovering the exploit is irrelevant to the argument.

> (As for the rest of your argument, I really can't make any
> sense of it. You seem to be agreeing with me, then claiming
> victory.)

If you are referring to the "actions may be defensible legally" aspect of the conversation, then yes -- I am agreeing with you. I hadn't claimed (or meant to claim) anything to the contrary.

The simple fact is that Miller broke the agreement he signed with Apple when he uploaded a compromised application to the App Store. As a result, his license was suspended per that agreement.

Others have pointed this out from slightly different angles...

See John Fenderson's post in this thread, or the "Two things" or "He broke the App store agreement..." threads below.

Re: Re: False Blame

> Without doing so, it may be impossible to confirm and
> demonstrate the bug.

In part, I agree. To expand on something you mentioned earlier in your post...

> The ultimate demonstration is having a signed, trusted
> app do something untrusted.

I would expand on that and add that getting such an app into the App Store (i.e., getting it approved by Apple itself) is the ultimate demonstration. You may have been implying this already by "signed, trusted app".

To show that a signed app, running in the iOS sandbox, could do something untrusted does not require its presence in the App Store. You can demonstrate this on a development platform.

It becomes tricky when you add "trusted" into it -- as you point out. At what point is it "trusted"? When Apple approves it for the App Store?

I'll say "yes" to the above. The app has passed the final review, so it can't be any more trusted than that.

The bump in the road here is that the exploitation was not previously disclosed. First step is to reveal the vulnerability and give the appropriate company (Apple, in this case) time to fix it (how much time is a different discussion).

If the exploitation had been disclosed, it would have provided Apple the opportunity to (1) fix it, or at least (2) watch for it and deny that "trusted" status so it never made it to the App Store.

But, okay. People don't agree with that sentiment. We'll switch up the argument -- creating the app and getting it onto the App Store to demonstrate the full vulnerability was the right course.

Miller should have, or could have: (1) pointed out the issue to Apple privately, (2) publicly exposed the issue and removed the app, or (3) discreetly waited until the security conference to expose it. Unfortunately, he made it public on Twitter well in advance.

I *don't* believe he meant this to be malicious in any way. But he isn't just a "messenger".

Re: Re: False Blame

I agree, but I think my response is more nuanced: they're both right.

Miller did the right thing in his actions. They were for the greater good. But the right thing he did was to fall on his sword, because Apple also did the right thing in terminating his developer agreement. Miller deliberately violated a very important term in the contract, and to ignore that fact would also make Apple look bad by, in a sense, playing favorites.

Re: False Blame

This is where it's tricky. He was exploiting an issue with the app approval process, which I would assume is just some sort of automated test suite that Apple runs application requests through. The only way to test the exploit is to have an app successfully go through the approval process and post to the app store.

Re: Re: False Blame

> just some sort of automated test suite that Apple runs
> application requests through

Nope, every application is reviewed by humans. Remarkably stupid humans based on the number and types of rejections.

I tried to put a simple app in the store and it went through 5 rejections. The best one was "Use of a folder icon." The design guide says you shouldn't make any UI which implies the existence of a directory structure ... because if Apple users ever accidentally learn how computers work they won't be Apple users anymore.

Disappointed

I'm a huge Apple fan and have looked past some of the weirder things Apple has done. But I agree, this is going to make them less secure. Who wants to develop apps to promote a platform if they risk losing their license and their hard work? I'd like to see Apple reinstate this guy's license (or change the license so that there is a documented process for exposing security bugs -- which might exist and I'm just unfamiliar with).

Anyway...

I do want to ask the security community a question, though. How fast does a large corporation have to respond to a security issue before it goes public? Is one month a realistic timeframe when there are development cycles that can't always respond to the countless submissions from the community? Google, Microsoft, Apple and others aren't immune to the volume of requests that come in on any given day.

I'm not sure one month is enough, and while it's good that the issue was discussed in public without detailing the flaw exactly, do people have to be so trigger-happy with disclosure when, as a security analyst, he's said it's a pretty obscure bug? Dangerous... but very obscure.

What if a three-strikes rule (I almost shudder at saying such a thing in these forums when it's such a hot topic in other circles) is a way to compromise on disclosure? Can security researchers agree that after a third time following up with a vendor on software defects -- with a one-month span between emails -- they go public?

There just seem to be disjointed responses from different security researchers on these types of bugs. Otherwise, Apple et al. are left putting out fires in their development cycle and disrupting change on the threat that someone doesn't think they move fast enough (which, again, could differ from person to person).

Re: Disappointed

Responsible bug disclosure has been a very active discussion for decades. There are *tons* of procedures that have been outlined and proposed and followed by grey hats for a long time on how to responsibly disclose security matters. They all take slightly different approaches to balancing the interest of the maintainer and the community.

Any procedure generally held in respect by the community usually has explicit protocol to follow if the maintainer in question fails to address the vulnerability. For example, a tiered approach, where in the first month the vulnerability is disclosed to the company, in the second month it is disclosed to security researchers of at least 3 respected security firms, and in the third month is disclosed to the entire internet, including the black hat community.
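A tiered schedule like that can be sketched mechanically. The following is a minimal illustration only -- the 30-day tier length and the function name are assumptions for the sake of the example, not any published standard. It uses the October 14th report date mentioned in the article as sample input:

```python
from datetime import date, timedelta

def disclosure_schedule(reported: date, tier_days: int = 30) -> dict:
    """Hypothetical tiered disclosure timeline.

    Tier 1: only the vendor knows.
    Tier 2: shared with a few respected security firms.
    Tier 3: full public disclosure.
    """
    return {
        "vendor_notified": reported,
        "shared_with_researchers": reported + timedelta(days=tier_days),
        "full_public_disclosure": reported + timedelta(days=2 * tier_days),
    }

# The date Miller reportedly told Apple about the bug.
sched = disclosure_schedule(date(2011, 10, 14))
for stage, when in sched.items():
    print(stage, when.isoformat())
```

The point of writing the schedule down up front is that it commits both sides: the vendor knows exactly how long it has, and the researcher isn't improvising deadlines comment-thread by comment-thread.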

Apple probably feels it acted in accordance with its policy regarding the app store, but I don't think they're being very smart about the matter. They're drawing publicity to a security vulnerability they haven't fixed yet.

Re: Disappointed

Most companies, like Microsoft and Google, have a very specific protocol and will work with the hacker on a timeline for releasing a fix. Often the hacker agrees to wait until the fix can be implemented but there are a few exceptions which typically include: when the company decides on a timeline that is far too long or inadequate (i.e. the next version of Windows), when the hacker finds that someone is already exploiting the bug, or when the company refuses to respond.

I'm guessing Apple shrugged off his concerns and didn't give him any kind of response (let alone a timeline for a fix) which pretty much forces your hand. While announcing these to the public is temporarily dangerous, failure to inform people is far worse.

Re: Disappointed

I work for a very major company that produces security software, and handle software faults that themselves present security implications. I don't know how other companies handle it, but I can tell you how mine does:

Someone (99% of the time a customer) reports a security vulnerability. This person is responded to immediately, and the problem is handled with them the same way any other software fault is handled: an ongoing dialogue while our internal process continues.

This does not necessarily mean the fault is solved immediately -- maybe not even within a month. It depends on the fault. Some things (even things that seem easy) require a tremendous effort to resolve.

If the reporter were to publicly disclose the problem prior to our resolution, it would not force our hand or make things get fixed faster. Security problems are already given maximum attention. All it would do would be to increase the risk to other customers until the problem is resolved.

Not all companies take such issues so seriously, of course, and the threat of public disclosure may spur them into action. We try to reassure the reporter that things are being taken seriously through dialogue in part to prevent premature disclosure.

Re: Disappointed

If a company provides a 'faulty' product to their customers, how long should they have to 'fix' the situation and remedy the 'faulty' product before someone else comes along and exploits the flaw causing untold havoc to their customers?

Are you seriously saying that companies should have 3 months to fix 'zero-day' exploits, and that the customers 'exploited' during those 3 months should have no recourse?

If you go to the store and buy a product, get home and it doesn't work, would you allow the store 3 months, with 1 month between each e-mail before they provide you with a working product?

Re: This is exactly how Microsoft once responded

The only policy that works is immediate full public disclosure. Microsoft, Apple, et al. have BILLIONS of dollars and should be paying highly qualified senior engineers to be on standby 24x7 for just such events -- and therefore should not whine about being blindsided. (Want fewer disclosures? Fix your damn code.)

There's no reason at all to give these companies the benefit of free professional consulting. They've conclusively proven that they will (a) ignore (b) deny (c) threaten (d) punish -- so why bother with them? Just publish on the full-disclosure list or elsewhere, and let them reap the whirlwind.

Re: Re: Re: This is exactly how Microsoft once responded

> but the fallout from the harm you cause others makes you
> the villain.

That's obvious nonsense, of course. Merely reporting facts accurately makes NOBODY the villain. Clearly, those to blame for any adverse consequences are (a) those who take malicious action based on the facts and (b) those whose incompetence, laziness, and stupidity are directly responsible for the situation.

Can you cite incidents where Apple has ignored, denied, threatened or punished someone for reporting vulnerabilities through the proper channels?

If you can't, then you're clearly far too uninformed to be worthy of participation in this discussion. Please run along and provide yourself with the appropriate remedial education before continuing.

Re: Re: This is exactly how Microsoft once responded

> The only policy that works is immediate full public disclosure.

That is an irresponsible policy.

A more responsible one is to disclose to the vendor and then disclose publicly after a short window for the vendor to fix it. It is not a matter of free consulting. (In fact, I like that Google has paid bounties to people who find these kinds of things.) It is a matter of social responsibility.

If you disclose to the public with no prior notice to the vendor, then how is this getting you any monetary gain? It's not. So I don't see your argument about "free consulting". Yet immediate disclosure is clearly less socially responsible because it gives the bad guys an opportunity to exploit it.

As you say, the vendor has millions of dollars. A short window to fix it should be plenty -- regardless of the severity of the problem -- because the vendor has vast resources to get it fixed quickly. Assuming the vendor feels a responsibility to fix it at all.

Whether or not the vendor fixes it in a short time, you should publish your findings.

Re: Re: Re: Re: Re: This is exactly how Microsoft once responded

> Since it's not public, the only ones exploiting it are
> those who have found it on a first-person basis.

Wrong.

There are some very healthy marketplaces where vulnerabilities are bought and sold and traded. They're not public -- well, not very public. But they exist, and it's quite routine for security holes and sample exploit code to be sold -- either once, to the highest bidder, or repeatedly, to anyone with sufficient funds.

And because these known marketplaces exist, we must posit the existence of others that may be entirely unknown except to the very few who frequent them. These no doubt cater to exclusive clients who have demonstrated a willingness to pay handsomely.

Of course any of the buyers in either kind of marketplace could then re-sell what they have -- or use it themselves. Or both.

Re: Re: Re: This is exactly how Microsoft once responded

You are making two very foolish assumptions.

First, you're presuming that the bad guys don't already have it. In all likelihood, they do. And it's further likely that they've had it for a while. Keep in mind that the bad guys have long since demonstrated superior intelligence, ingenuity and persistence when compared to the software engineers at places like Microsoft and Oracle and Apple. Their livelihood depends on it. Given this, failure to disclose publicly immediately provides considerable aid to the bad guys, who can continue exploiting security holes against victims who don't even know they're victims and thus are unlikely to take steps to defend themselves.

Second, you're presuming that "disclosing to the vendor" is not the same as "disclosing to the bad guys". But of course in many cases it is, due to pitifully poor vendor security -- or the simple expedient of having a well-paid informant on the inside. (If I were a blackhat, I'd certainly cultivate contacts at major software vendors. I'm sure they have staff that are underpaid and/or disgruntled, and thus readily susceptible to bribes. Or blackmail.) So if they didn't already know...they'll know now. And once again, victims will be the only ones left in the dark.

The only way to level the playing field is to fully disclose everything immediately and publicly. It's unlikely to be news to the more talented bad guys and it will at least inform victims and potential victims what they're up against. Not doing so is a major win for blackhats.

Re: Re: Re: This is exactly how Microsoft once responded

I've written code for decades. If you run any modern BSD or Linux variant, you're running my code. (It might still be in Solaris, as well.)

No, I'm not a full-time programmer, although I have been. What I am is paranoid, careful, meticulous, exacting, and thorough. And I expect no less of others -- but clearly, the overwhelming majority of people who program are thoroughly incompetent.

Re: Re: Re: Re: This is exactly how Microsoft once responded

Dude, you're tilting at windmills; the idea of "do it right or don't do it" is so fucking outmoded it's sickening -- but you have my sympathy.

(By which, I mean that it's sickening that such an idea became outmoded, because in practice it saves enormous amounts of money and immediately and permanently negates the present dilution of the programmer's trade.)

Re: Re: Re: Re: This is exactly how Microsoft once responded

> What I am is paranoid, careful, meticulous, exacting, and
> thorough.

Which still doesn't guarantee bug free code.

Let's not get confused, I'm not advocating that customers should be used as your first form of testing, far from it. The code should be reviewed, unit tested, regression tested, black box tested, and then go through multiple beta phases.

But the flippant notion that you can write 100% bug free code the first time every time is asinine and only a person with an astounding lack of intelligence would make such an assertion.

Re: Re: Re: Re: This is exactly how Microsoft once responded

Really? You write 100% bug free code the first time every time.

I'll take that bet, I'll pay you $100,000,000 if you've written code for more than 2 years and managed to produce a product as complicated as an operating system which has zero bugs ... yeah, that's what I thought.

Re:

An un-exploited vulnerability hurts no one as long as it remains un-exploited by the bad guys.

Giving the vendor a chance to fix it is likely to prevent harm. (IF the vendor fixes it, and IF no bad guys find it in the meantime.)

Your analogy is wrong. Water poisoning is like the bad guy exploiting the vulnerability. That is the actual poisoning. The vulnerability is a way in which poison can be introduced. The knowledge of the vulnerability should be disclosed to those who can fix the vulnerability and prevent poison from being introduced into the water system.

Re: Re:

> An un-exploited vulnerability hurts no one as long as it
> remains un-exploited by the bad guys.

This is seriously wrong.

First, it presumes a nonsensical situation: that a vulnerability will remain unexploited. As everyone with any security clue at all knows, that's not going to happen. Now...it may be exploited poorly, or ineffectively, but it is absolutely inevitable it will be exploited.

Second, existence of vulnerability X is often indicative of vulnerabilities Y and Z. And vice-versa. In other words, it is foolish to presume that a vulnerability known to the vendor is the ONLY one. It's equally foolish to presume that a vulnerability known to the bad guys is the ONLY one. Bad code usually doesn't have just one bug.

Third, vulnerabilities are there for anyone with sufficient knowledge and determination to find. But they're also there for anyone with sufficient funds (to purchase) or guile (to steal). (Yes, I know that "steal" isn't quite the right verb, but a better one doesn't suggest itself immediately.) It is one of the massive, arrogant conceits of those who oppose immediate full public disclosure that they actually believe these things can be kept secret.

They can't.

They won't.

The biggest beneficiaries of any attempt to conceal any part of this process are the very people it purports to defend against. Secrecy serves their purposes well.

Regardless of his intentions his actions violated the terms of the agreement. As a developer on this platform I know the terms of the agreement and they clearly indicate that you cannot publish code that runs other code. Also, another term of the agreement is that the app cannot hide its intended purpose, in other words you can't release a trojan horse app. He should have informed Apple that the vulnerability was there and been done with it. Instead he published the app, violating the terms of the agreement. He got what he deserved.

Re: Re:

The flagrant disregard of mutual agreements has started more than one war. There is an open channel of communication to Apple for reporting vulnerabilities; he wanted to showboat by creating a video, so he published his backdoor app to their store. Who is to say that he wouldn't do it again? The vulnerability will be corrected; he should have reported it to Apple without publishing the app.

Re: Re: Re: Re:

> Shoot the messenger.

Unfortunately, he wasn't a messenger.

Miller did not inform Apple of the bug before hand. He did not provide an opportunity to acknowledge and fix the bug.

He knowingly uploaded an app that violated the terms and introduced a vulnerability. He then, over a month later, made a public announcement that the bug existed and that the app to prove it was already in the App Store.

Re: Re: Re: Re: Re: Re:

True. After he had already created and gotten an app approved into the App Store. So the vulnerability was public.

> Can you cite any evidence to show when the app was
> uploaded and when Apple was notified?

From the original Forbes article:
"Miller had, admittedly, created a proof-of-concept application to demonstrate his security exploit, and even gotten Apple to approve it for distribution in Apple’s App Store by hiding it inside a fake stock ticker program..."

> Also, how can you prove that a vuln exists without a
> proof-of-concept exploit? Is there any way to confirm the
> vuln without uploading something to the app store?

It depends on what you claim the ultimate vulnerability is. This is being discussed in greater detail in other threads, where people are ripping me apart.

There are two vulnerabilities here:
1) The ability to actually run untrusted code
2) The ability to get an app approved in the App Store that exploits that.

In answer to your question for those two points:
1) Yes. This can be shown outside the App Store.
2) No, obviously not, since getting the app approved is the end-goal.

Had he made #1 public first, then demonstrated #2 after giving Apple a chance to respond, this would absolutely be all about Apple covering something up.
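The distinction between the two vulnerabilities can be made concrete with a toy model of code signing. To be clear, this is an illustrative sketch, not Apple's actual signing scheme: the `REVIEW_KEY`, function names, and byte strings are all invented for the example. The idea it demonstrates is that approval signs exactly the bytes that were reviewed, so code fetched after approval is never covered by the launch-time check:

```python
import hashlib
import hmac

# Hypothetical review key -- a stand-in for the vendor's signing
# infrastructure, not anything real.
REVIEW_KEY = b"hypothetical-review-key"

def approve(app_bytes: bytes) -> str:
    """Toy model of App Store review: sign exactly the bytes reviewed."""
    return hmac.new(REVIEW_KEY, app_bytes, hashlib.sha256).hexdigest()

def launch_check(app_bytes: bytes, signature: str) -> bool:
    """Toy model of the device's launch-time signature check."""
    return hmac.compare_digest(approve(app_bytes), signature)

shipped = b"benign stock ticker code"      # what review actually saw
signature = approve(shipped)

# Vulnerability #2: the app passed review, so it launches as "trusted".
print(launch_check(shipped, signature))

# Vulnerability #1: code fetched at runtime was never signed, so the
# launch-time check has no way to see or reject it.
payload = b"unsigned code fetched at runtime"
print(launch_check(shipped + payload, signature))
```

This is the thread's point in miniature: demonstrating #1 (running code the signature never covered) needs no App Store at all, while demonstrating #2 necessarily means getting a real app through review.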

Re: Re: Re: Re: Re: Re: Re:

He could have gotten approval, but he did not have to publish the app and make it available to the public. Even if he did publish he could have pulled it immediately after. Why leave it up? Maybe he thought it would be good PR when he went public.

He made malicious code available to the public. This puts it in the hands of others who would use it for evil. And now they know to look for it. Hopefully the exploit is patched before that happens.

Does anyone know if he tested his proof of concept by running code on someone's device? That would be a serious issue whether the code was malicious or not.

Re: Re: Re:

I realize this is a sidebar.

You said: "The flagrant disregard of mutual agreements has started more than one war."

It is true that this is often seen as the flash point. But really, it is often the one sided nature of a 'negotiating process' that sets up the later conflict. A signed contract from the weaker party makes it a lot easier to invade. Even better if the contract that was signed had impossible to meet terms.

Re: Re:

This blind reading of contract language without any attempt to look at motivation or spirit of the law is so disheartening.

I've been increasingly disturbed by the courts' desire to hold contract law above all other laws. There are still some rights you cannot contract away, but they seem to be diminishing. I understand why contracts have gotten where they are today; I just don't understand how this is sustainable. Most individuals are party to dozens if not hundreds of contracts at any given time, and it is ridiculous to think that people can truly understand and agree to that many contracts (home (probably 5-10 contracts alone), phone, cable, car, software, etc.).

Re: I have said this crap before...

Yet another example of Apple's poor security model

This just further publicizes what security researchers have known for years. The PR projection that Apple products are inherently more secure is a farce. They are just as insecure as anything else out there, but much more dangerous because their loyal fan base is absolutely convinced that Apple products have superior security.

Are you all really that sure of a reported timeline?

So, how do any of you actually know what the timeline was? Really? Who do you trust from an informational standpoint? Is one source really more credible than another? Whose server logs do you trust NOT to have been tampered with? Whose email "timestamp" do you really believe?

Does it really matter regardless?

The basic fact that Miller was able to get an application approved and into the store that leveraged a very simple and, in hindsight, obvious hack in iOS is what should actually be setting off the alarms for everyone. I see this as the smallest tip of the iceberg starting to show above the water for Apple. From a practical standpoint they'd be better off slapping Miller's wrist publicly, then thanking him with an incentive behind the scenes. Perhaps even adding an associated non-disclosure agreement and prioritizing any bug notices he sends them from now on. Technically, what he did was in fact against the ToS, but the point is that ill intentions are NOT going to be blocked by a ToS! Apple really needs to step up their own security and responses or things are going to get ugly for them really fast.

Two things

In Apple's defence, they can't take the risk that he may have (though unlikely) been exploiting the security hole he exposed. His point that no one would take the problem seriously unless there was an app in the App Store is valid, but from the company's perspective, the first and fastest way to prevent more users from being affected by his app is to cut off the source. Hence, they killed his developer account so he can't upload any more apps that might exploit this security hole.

If Apple did not kill his account and allowed him to upload more apps that did exploit other security holes to harass iPhone users, Apple could be liable because they knew he was someone who had published a malicious app in the past and took no action to prevent it in the future.

Perhaps they could have just flagged the account so that all apps he submits are rejected until the flaw is fixed, but I doubt their system is that sophisticated.

If Apple stops there and does nothing else, shame on them. However, I suspect they will patch this in an update soon.

Re: He broke the App store agreement...

I know, right? These TD fanboys and anti-Apple fanboys need to learn the definition of a contract. Sheesh. Blame Apple all you want, the guy messed up. Badly.

Sure, their response might not have been appropriate, but it's not like the guy's was... "Ignore me and I'll post hacks disguised as apps, then complain when I get caught". Awesome. And you guys all bit and aren't letting go.

Something similar to this happened a few years ago. Two people found a MAJOR vulnerability in Mac OS and told Apple about it, then two months later went public with it. In response, if I remember correctly, Apple sued them, released a press release saying there were no issues, then 2-3 days later released a mandatory 130 MB security update for their system.

psssh

Next time just go public on day 1 with massive security holes, sit back, and laugh as Apple goes through its Sony phase and gets its systems utterly violated by a wide variety of interested parties......

Given Apple's history of handling issues with their systems, this does not surprise me. There are tons of charges being made to people's iTunes accounts that they did not make. Many often point to one specific program; Apple refuses comment to the people who have had their money taken. Apple refuses to respond to the large number of annoyed customers, and when they do, it is condescending, claiming that nothing is wrong with iTunes. Some people get their money back; some get shafted. If it were a few accounts I would blame the user, but when large numbers of your users are being hit with identical issues over and over... there is a flaw in your system.

Apple is like the Pope if you think about it sideways.
Apple tells you everything Apple is magical and just works, to admit a flaw is anathema to them.
The Pope is the word of God, everything he says is true and just so. It is impossible for the Pope to admit a mistake because of Papal infallibility.

Apple is so focused and concerned about their image that they are alienating consumers. The sad thing is these consumers stick with Apple, even when they are forced to bear the burden of Apple's failures.

If you shoot the good guy who comes and tells you you're flawed, do not then cry when they stop showing up. Do not complain when your myopic staff cannot see the flaws and less-than-good people exploit them for profit.

What really needs to happen is a system where exploits can be submitted, verified, and sent to the affected company with a deadline: fixed or not, this goes public in X days. They can hate on the faceless group then, rather than target the person who found the bug.
Some flaws should not be suppressed, and it is far cheaper and easier for a corporation to launch lawsuits to crush the finder of the bug than to fix the flaw. Then, when someone else finds and exploits that flaw, they can point at the original reporter to shift the blame: didn't warn us, didn't give us enough time, released it to hurt us...

So what exactly is the benefit of looking for flaws if you're not out to exploit them? You get treated like crap, sued, and painted as evil. Are we surprised some people just skip to being evil? How many times can you kick them before they kick back?

Apple Negatively Attacks A Developer

I am a pioneer software maker from the 1980s. One of my biggest problems with developing programs was Apple. They tried to control what I made for their machine. I never had a problem with any other company and their computer. The first cease-and-desist letter I ever saw came from Apple. I told them that they were not the software police and I would fight them in court. Well, years later, things haven't changed. However, now Apple's huge size may finally make them the software police, or at least a threatening bully!