Apple aware of email attachment encryption issue in iOS 7.1.1

Recent releases of Apple's iOS platform, including the latest iOS 7.1.1 update, contain a bug that prevents email attachments saved on the device from being properly protected with encryption, and a fix is on the way.

Security researcher Andreas Kurtz revealed last month that he had reported the flaw to Apple, and the company responded by saying it was aware of the issue. To date, the problem has not been fixed, and Apple has not offered a timetable for when it might be addressed.

Apple's statement on the issue simply said, "We're aware of the issue and are working on a fix which we will deliver in a future software update."

Using an iPhone 4, Kurtz was able to verify that the attachments could be read without any encryption or restriction after accessing the device's file system in both iOS 7.1 and iOS 7.1.1. The same vulnerability was then discovered on an iPhone 5s as well as an iPad 2 running iOS 7.0.4. The flaw was highlighted last week by ZDNet.

Apple advertises data protection on its iOS platform for devices that offer hardware encryption, which includes the iPhone 3GS and later, as well as all iPad models. Data encryption can be enabled by turning on a passcode lock on an iOS device.

Exploiting flaw requires physical access or iPhone 4-only jailbreak

Of course, this security flaw requires that a malicious hacker have physical access to the iPhone in order to read the root file system. Accessing the unencrypted attachments requires the device to be placed in "DFU" mode and accessed via SSH. That step means a malicious user would either need the device passcode or would have to perform a hardware jailbreak of the device to exploit the bug.

Apple's latest iOS 7.1.1 release is currently only possible to jailbreak on iPhone 4, according to International Business Times, which notes that "owners of newer iOS devices running iOS 7.1 and above continue to be without luck as no jailbreak has been developed for the latest version of iOS on devices such as the iPhone 5S and iPad Air."

Earlier this month, a separate SSL security flaw was discovered in both iOS and OS X. Apple worked to quickly patch the issue in subsequent updates to both platforms.

This is not very troublesome. Email is encrypted in transit via SSL, not on the disk. That's true of OS X too, except the file system is easier to navigate. And the encryption on disk wouldn't even help if the hacker had physical access and the password; he'd just have to open Mail.

This is not very troublesome. Email is encrypted in transit via SSL, not on the disk. That's true of OS X too, except the file system is easier to navigate. And the encryption on disk wouldn't even help if the hacker had physical access and the password; he'd just have to open Mail.

It reads to me that even if the device is locked, they can still gain access to the root file system to see the attachments in their unencrypted form. That means that a stolen iPhone isn't as secure as I thought it was, as I thought all my personal data was encrypted on disk.

This is not very troublesome. Email is encrypted in transit via SSL, not on the disk. That's true of OS X too, except the file system is easier to navigate. And the encryption on disk wouldn't even help if the hacker had physical access and the password; he'd just have to open Mail.

It reads to me that even if the device is locked, they can still gain access to the root file system to see the attachments in their unencrypted form. That means that a stolen iPhone isn't as secure as I thought it was, as I thought all my personal data was encrypted on disk.

That's what you are supposed to think and what Apple wants you to believe. ANYONE who invests blind trust in a device/software manufacturer with a closed proprietary OS is really too naive to be rescued. Apple, like all the rest, is interested in profit and not in security or privacy. Get your head around that.

It's more serious than you may think at first glance. For one thing, if you THINK a system (any system) is secure, you are wrong in most cases. For another, if you ASSUME your system is NOT 100% secure, then you will go into a risk-management mindset and alter your behaviour according to the sensitivity of the data concerned. FFS, it's not rocket science.

The war on hackers is no different than the war on drugs. You find one tunnel and bury it, and they build new ones or start using boats and planes to cross the border. Apple does a decent job with security, even if they do tend to take their sweet time sometimes with patches. All any company can ever do is try to make it as difficult as possible for hackers and be far more difficult to hack than the competition. As the old expression goes, you don't need to outrun a charging bear in the woods, just your friend.

ANYONE who invests blind trust in a device/software manufacturer with a closed proprietary OS is really too naive to be rescued.

It's not blind trust, it's assessment of relative risk. Open source software has more than its own share of security issues and other bugs. Just because it's "open" doesn't mean any eyes--let alone educated, discerning eyes--are looking at it. Furthermore, a closed-source provider like Apple has a greater interest in fixing problems sooner. This is how Android has become a security and support nightmare, and iOS is the more secure and stable platform.

Apple, like all the rest, is interested in profit and not in security or privacy. Get your head around that.

I can't get my head around why you think these are mutually exclusive.

OK, let's think about it a moment. Don't worry, you're not under attack.

Firstly, it's a self-evident truth that you can't have privacy without security. If you can't protect your data, you can't protect your personal data. OK so far?

Now let's focus for the moment on Apple and Security. While I think it's fair to say that the Unix foundation of OS X is a great advantage, since it incorporates a considerable degree of security by design, it is equally fair to say that Apple has been anything but open and transparent about the details of its implementation. Frankly, you just don't know enough to say it is cutting-edge security. The goto fail blunder, however, speaks volumes about the inadequacy of their code reviews and security testing. That was a fail of monumental proportions because it was so f*n obvious to anyone who can read code. There was NOTHING subtle about it. There are other examples.

Let's turn to the hype about the highly secure Touch ID implementation. It was cracked in a couple of days by simple procedures, already well known and documented before Apple released the product. Now I take the position that if a company knowingly spouts bullshit regarding security, then it is not to be trusted. And remember, it's not open source, so the real experts cannot independently review the security programming.

Finally, there is a long-standing and unchanged lack of transparency and responsiveness by Apple in responding to the user community on security issues turned up regularly by third-party analysts. By comparison, Microsoft has completely transformed its policies in this respect (no, I do NOT like MS) and is much more open and faster to respond and to patch.

Turning to privacy: I need to point out perhaps that I am a full-time European Privacy Officer in a multinational company. More I will not say about my own professional background. In Germany, Apple was forced to change its privacy policy statement in no fewer than 15 specific respects because of noncompliance with the law in this country.

Secondly, if you actually READ the Apple privacy policies you will note that they define Personal Information only as information containing DIRECT personal identifiers. This is absolutely incompatible with at least 40 years of internationally accepted definitions, which have defined personal information as any information directly or indirectly linked or linkable to natural (living) persons.

Why is this a real issue? Because Apple reserves the right, without limitation, to use information that by APPLE's definition is not personal information in any way it sees fit. So basically the Apple privacy policy is designed to protect APPLE in the US against class-action lawsuits. If APPLE were in any way serious, it would change its definition to include indirectly identifiable information. For over a generation and a half it has been increasingly simple, with increasingly sophisticated statistical and other technology, to create indirect links that completely expose and eliminate the privacy of the affected persons. Google specializes in exactly this area (among others). The NSA and other three-letter US agencies also specialize in this technology. It really is time to stop being naive about privacy vs. profit orientation. When it comes to a buck to be made, Apple will not hesitate to trick and deceive the users into a false sense of security. And for the record, Apple is by no means the only offender in this respect.

Finally, to save any stupid debate: in the entire EU the right to privacy is a constitutional right. Your personal information belongs to YOU and not to any company that may want to collect it. It's important for Americans to understand that. In Germany the Constitutional Court ruled in 1983 that there is a fundamental right to informational self-determination.

OK, let's think about it a moment. Don't worry, you're not under attack.

[…]

Does that help you get your head around the issues ??

All I see is a bunch of BS trying to defend your stance that profit and security are mutually exclusive, and that Apple doesn't do anything to protect users because it doesn't care to, despite overwhelming evidence to the contrary.

Now let's focus for the moment on Apple and Security. While I think it's fair to say that the Unix foundation of OS X is a great advantage, since it incorporates a considerable degree of security by design, it is equally fair to say that Apple has been anything but open and transparent about the details of its implementation. Frankly, you just don't know enough to say it is cutting-edge security. The goto fail blunder, however, speaks volumes about the inadequacy of their code reviews and security testing. That was a fail of monumental proportions because it was so f*n obvious to anyone who can read code. There was NOTHING subtle about it. There are other examples.

<snip>

It's a bug (like goto fail or the OpenSSL problem). I think it's a bit unfair to assume that Apple does not care about security just because it has a bug. Everybody has bugs.

I also think it's a bit of a stretch to blame this on Apple's OS being proprietary. It's unfair to say that they're not transparent on security: the strategy is here; not revealing a security bug until a fix is ready is standard industry practice (don't give the bad guys a head start, they'll reverse engineer the fix anyway but that's better than a zero-day attack). It's a stretch to say this is down to iOS being proprietary since being open-source is no better guarantee of error-free security code (obvious example being Heartbleed, currently plaguing the web because a piece of open-source code was inadequately reviewed about 2 years ago and many servers have undetectably been leaking data ever since).

I would also echo SolipsismX and asdasd in saying that Apple's profit motive is precisely consistent with providing good security. iOS has a sophisticated security architecture that is part of what makes iOS devices high quality, which underpins Apple's ability to charge a premium price and hence make more profit. The fact that this is often overlooked (by a lot of people, some who should know better) does not affect the fact that the design intent is to secure all data at rest (i.e., everything in the flash memory), just as seriously secure systems do, and to do it transparently and efficiently using hardware AES encryption (in all iOS devices since the iPhone 3GS). That currently has a vulnerability for a particular category of files; when it's fixed, the document I referenced will again describe the actual behaviour.
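
The key-hierarchy idea behind that data-at-rest design can be sketched in miniature. This is a toy illustration only: XOR stands in for AES key wrapping, and every name here (`PROTECTION_NONE`, `save_byte`, `DEVICE_KEY`, etc.) is invented for the sketch, not Apple's actual API. The structural point is what matters: a file whose key is not wrapped with the passcode-derived class key is readable to anyone with raw filesystem access, which is roughly the situation the affected mail attachments are in.

```c
#include <stdint.h>

/* Toy model of per-file data protection. XOR is a stand-in for AES;
 * the point is the key hierarchy, not the cipher. All names invented. */

enum protection_class {
    PROTECTION_NONE,     /* file key wrapped only by the device key   */
    PROTECTION_COMPLETE  /* file key also wrapped by the passcode key */
};

struct stored_file {
    enum protection_class cls;
    uint8_t wrapped_key; /* per-file key, wrapped (toy: XOR)          */
    uint8_t ciphertext;  /* one-byte "file body" (toy)                */
};

static const uint8_t DEVICE_KEY = 0x5A; /* always available in hardware */

/* "Save" one byte under a fresh per-file key. Only PROTECTION_COMPLETE
 * mixes in the passcode-derived key, so only those files need the
 * passcode to be read back. */
static struct stored_file save_byte(uint8_t plain, uint8_t file_key,
                                    uint8_t passcode_key,
                                    enum protection_class cls)
{
    struct stored_file f;
    f.cls = cls;
    f.wrapped_key = (uint8_t)(file_key ^ DEVICE_KEY);
    if (cls == PROTECTION_COMPLETE)
        f.wrapped_key ^= passcode_key;
    f.ciphertext = (uint8_t)(plain ^ file_key);
    return f;
}

/* Attacker model: raw filesystem access but no passcode (pass 0 for
 * passcode_key). PROTECTION_NONE files decrypt anyway -- which is
 * essentially the attachment bug. */
static uint8_t read_byte(const struct stored_file *f, uint8_t passcode_key)
{
    uint8_t key = (uint8_t)(f->wrapped_key ^ DEVICE_KEY);
    if (f->cls == PROTECTION_COMPLETE)
        key ^= passcode_key;
    return (uint8_t)(f->ciphertext ^ key);
}
```

Reading a `PROTECTION_NONE` file with no passcode key succeeds, while a `PROTECTION_COMPLETE` file comes back as garbage until the right passcode-derived key is supplied.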

There are downsides to both proprietary OSs and closed ecosystems. I don't think this is an example.

I like the way it says last month...they don't say which day he notified Apple. He only posted it to his blog last Friday. I'm sure 7.1.1 was already packaged up by the time Apple learned about it from him. Since it looks like a very unlikely attack, my guess is that Apple determined that it is not dangerous enough to issue a zero-day-like response. I'm sure it will be addressed in the next update. If someone were really concerned about this and their data was that sensitive, they could take temporary measures: move the attachments to a more secure location such as their personal computer, encrypt them there, and delete them from the phone. Once the issue is patched, resume normal procedures.

In any case, Apple has nothing to gain financially from ignoring this security vulnerability, in fact precisely the opposite.

OK, let's think about it a moment. Don't worry, you're not under attack.

[…]

Does that help you get your head around the issues ??

I can't be bothered to address most of your provocative-looking post. But I will address the bit about Touch ID.

I closely followed the story of the supposed cracking of Apple's fingerprint technology, and having examined the details, I would hardly call it 'cracking' at all. The conditions that the 'hackers' used were so artificial and unlikely to happen in real life that one cannot realistically extrapolate their experiment to real-world use. Also, too many variables were not released to the public, so it's impossible to verify the accuracy. But from what we do know, I can say this: the fingerprints were carefully removed from a pristine phone. In real life, it's highly unlikely that you would have a perfect fingerprint on your phone; it would be covered in smudges and smears. You would also need to know exactly which finger or thumb you were lifting. Remember, you only have limited attempts before requiring the password. You also have limited time to enable the hack before requiring the password. Finally, the video that was made never actually showed the whole process of removing the fingerprint and using the artificial print to unlock the phone; all we saw were brief glimpses, all of which could easily have been faked, not to mention all the stages in between.

I closely followed the story of the supposed cracking of Apple's fingerprint technology […] Finally, the video that was made never actually showed the whole process of removing the fingerprint and using the artificial print to unlock the phone; all we saw were brief glimpses, all of which could easily have been faked, not to mention all the stages in between.

There was a beginning to end video of the process you probably missed so I don't think it was faked. I agree with you that it's not a likely-to-occur scenario in real life either.

There was a beginning to end video of the process you probably missed so I don't think it was faked. I agree with you that it's not a likely-to-occur scenario in real life either.

That's an impressive result, but consider all the steps and equipment they needed to do it. Based on the amount of time it takes, and that most people still use a 4-digit PIN with no setting to wipe the device after 10 incorrect attempts, I bet it would be faster to compile a list of the most common PINs and PINs that apply to that person's device via simple social engineering. Even if those fail, your odds of finding the right PIN are still high. I use an alphanumeric passcode and have it set to wipe my device after 10 attempts.

It also took three tries before the fake fingerprint worked. If they hadn't put a very distinct, flat, full fingerprint on an otherwise clean iPhone display, they may not have gotten a good scan, which may have resulted in it taking more than 5 attempts, which would mean a passcode would then be required.

They also knew which fingerprint was to be used. I suppose you could figure out what area of the device one's thumb might likely be placed on, but I can tell you mine is rarely (if ever) pressed firmly and evenly down on the front glass. That said, if one really is concerned they can use their thumb knuckle or some other part of the hand that is convenient yet not the distal phalanx's pad, which is what we usually touch our phones with.

Furthermore, the notion that Touch ID was designed to be Fort Knox is a misconception. It's "security for the rest of us," meaning that with the majority of smartphone users not using a PIN, it's a great solution that adds a tremendous amount of security without the inconvenience that almost always accompanies increased security. Touch ID is a rarity in the security world for this very reason.

PS: One thing I really want with this mythical iWatch is proximity-based security. I want my iPhone, iPad and Mac to lock if the proximity to the iWatch gets too far or if the BT connection is severed. I'd also like to be able to unlock my devices with it, but that would mean the iWatch would need to be intelligent enough to know when it is removed, to sever this connection, and to have some sort of authentication to verify that the wearer should be allowed to unlock their other devices. This would mean, for example, that if you're using (i.e., unlocked) your iPhone on a subway and it gets stolen, the device would lock pretty much as soon as the thief takes off with it.

There is still the issue of a thief being able to turn it off so it can't be tracked, but at least he won't have access to your personal data. The solution I'd like to see for that is requiring a passcode/Touch ID to shut down the device. Holding down the Sleep and Home buttons for several seconds would then only cause the device to restart, not unlike how, when your iPhone runs out of juice and is then plugged in for several minutes, it reboots. This technique might require a special HW chip to tell the system to reboot, but I think it's very doable.

OK, let's think about it a moment. Don't worry, you're not under attack.

[…]

Does that help you get your head around the issues ??

All I see is a bunch of BS trying to defend your stance that profit and security are mutually exclusive, and that Apple doesn't do anything to protect users because it doesn't care to, despite overwhelming evidence to the contrary.

OK, now you're really just acting wilfully stupid. Read my comment again. Nowhere did I say that profit and security are mutually exclusive, but if you want to call it that I would tend to agree, simply because security requires effort, manpower and investment, which means more expenses, which means less profit. So they are not mutually exclusive, but opposing elements.

Secondly, I never said that Apple doesn't do anything, as you claim. I did imply that it doesn't do enough.

But your tactic of erecting straw-man arguments and putting words into my mouth is as transparent as it is infantile.

So, for the intellectually challenged you might like to learn to read carefully, to discuss rationally and to stop being an asshole.

OK, let's think about it a moment. Don't worry, you're not under attack.

[…]

Does that help you get your head around the issues ??

I can't be bothered to address most of your provocative-looking post. But I will address the bit about Touch ID.

I closely followed the story of the supposed cracking of Apple's fingerprint technology […] Finally, the video that was made never actually showed the whole process of removing the fingerprint and using the artificial print to unlock the phone; all we saw were brief glimpses, all of which could easily have been faked, not to mention all the stages in between.

I can't be bothered to address your argument either, except perhaps to comment that if the best you can do is to assume it is a fake, then you're on thin ice. But believe what you want and act accordingly.

Now let's focus for the moment on Apple and Security. […]

<snip>

It's a bug (like goto fail or the OpenSSL problem). I think it's a bit unfair to assume that Apple does not care about security just because it has a bug. Everybody has bugs.

[…]

It's not the fact that there was a bug that is the problem. The fact that the bug, which is REALLY EASY to detect by automated code tracing, made its way into a production release demonstrates that the pre-release security checking is way below what one can expect from a company with the resources that Apple has. If they were as serious about security as they pretend to be, this would never have slipped through.

OK, now you're really just acting wilfully stupid. Read my comment again. Nowhere did I say that profit and security are mutually exclusive […] So, for the intellectually challenged, you might like to learn to read carefully, to discuss rationally and to stop being an asshole.

1) Keep trying but personal attacks won't work.

2) Learn what a strawman is.

3) You left zero room for Apple or anyone to be interested in profits and wanting to improve security — something already proven — as a manner of achieving that goal. You made the two mutually exclusive with your poorly chosen words.

It's not the fact that there was a bug that is the problem. The fact that the bug, which is REALLY EASY to detect by automated code tracing, made its way into a production release demonstrates that the pre-release security checking is way below what one can expect from a company with the resources that Apple has. If they were as serious about security as they pretend to be, this would never have slipped through.

I can't disagree with that. I expect the compiler even warned the programmer about the dead code (I'm not sure I've ever used a goto in C, so I can't say), never mind testing. The regrettable truth is that much software from everyone is rushed and so ends up with silly bugs (except they become serious in cases like this). However, I expect that Apple will continue to fix security issues and that the ecosystem will remain relatively secure (my AirPort Extreme had a Heartbleed-related update the other day that I think was unheralded; that's how it needs to happen).

I assume they'd be using what Apple calls the Apple LLVM compiler (standard in Xcode), which I think is Clang. However, I've just put a goto and a label around some code, and you're quite right that there is no warning. Thanks for pointing it out; I've always thought that dead code usually represents a programming error and is easily detected during optimisation...
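
For what it's worth, the shape of the bug is easy to reproduce. This is a minimal stand-in, not the actual SecureTransport source, and the check functions are invented names: the duplicated `goto fail;` is taken unconditionally, so the final check becomes dead code and `err` is still 0 (success) when control reaches the label. Clang reportedly can flag this pattern with `-Wunreachable-code`, but that warning is not enabled by default or by `-Wall`, which would explain why a plain build stays silent.

```c
/* Minimal reproduction of the "goto fail" shape. Each flag simulates a
 * verification step passing (nonzero) or failing (zero); the names are
 * stand-ins, not the real SecureTransport checks. */
static int verify(int check1_ok, int check2_ok, int final_ok)
{
    int err = 0;

    if ((err = check1_ok ? 0 : -1) != 0)
        goto fail;
    if ((err = check2_ok ? 0 : -1) != 0)
        goto fail;
        goto fail;                      /* the stray duplicated line      */
    if ((err = final_ok ? 0 : -1) != 0) /* dead code: never evaluated     */
        goto fail;
fail:
    return err;                         /* err is still 0 at this point   */
}
```

When the first two checks pass, the stray `goto` jumps straight to `fail:` with `err` still 0, so verification "succeeds" no matter what the final check would have said, while earlier failures are still reported correctly.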