
wiredmikey writes "Users of iOS devices will find themselves with a new software update to install, thanks to a certificate validation flaw in the popular mobile OS. While Apple provides very little information when disclosing security issues, the company said that an attacker with a 'privileged network position could capture or modify data in sessions protected by SSL/TLS.' 'While this flaw itself does not allow an attacker to compromise a vulnerable device, it is still a very serious threat to the privacy of users as it can be exploited through a Man-in-the-Middle attack,' VUPEN's Chaouki Bekrar told SecurityWeek. For example, when connecting to an untrusted WiFi network, attackers could spy on user connections to websites and services that are supposed to be using encrypted communications, Bekrar said. Users should update their iOS devices to iOS 7.0.6 as soon as possible." Adds reader Trailrunner7: "The wording of the description is interesting, as it suggests that the proper certificate-validation checks were in place at some point in iOS but were later removed somehow. The effect of an exploit against this vulnerability is that an attacker with a man-in-the-middle position on the victim's network would be able to read supposedly secure communications. It's not clear when the vulnerability was introduced, but the CVE entry for the bug was reserved on Jan. 8."

How does that work? It seems that you need to get iOS 7 to get the patch. Did they back-port it to iOS 6? Or do they have some mechanism like Google does for updating older versions via their app store?

iOS 6 isn't even 18 months old yet and was their Windows Vista, so a lot of people didn't upgrade. iOS 7 isn't even 6 months old, had security problems of its own at launch, and runs like a limping dog on some very popular devices still in widespread use, so a lot of people didn't upgrade to that either.

The vulnerability here was caused by a rookie error that could easily have been found and fixed by following any one of several best practices in their software development process, and for something security-related they should have been following all of them.

Really, why would you trust a system where someone you don't know or trust is in charge of the private keys for the encryption?

Failing to secure your connections is just wilfully stupid. SSL is the best option we've got to avoid a whole range of attacks (it's far easier than the alternatives) but it does require correct implementations and careful selection of who to trust; SSL itself does not specify who to trust (and how can it? It's just a technical protocol for how to establish a secure channel over an insecure connection, typically implemented with TCP/IP).

Explicitly listing who to trust is best, but it's extremely hard to scale.

I am not saying we should run unsecured connections or that there is a single type of threat. And your post ends up sounding like a bunch of excuses as to why we have no choice but to trust something like SSL because it's the only thing we've got and we're stuck with it.

Private keys are always ONLY in the hands of the communicating parties. You never transmit your private key to anyone, if you're doing things right.

Third party Cert Authorities sign a certificate fingerprint. They establish a "trust chain" where if you trust the CA to do verification of identity, then you can trust the identity of the signed certs. The CA never sees the private key to do this trust certification.

The software Apple distributes to users is proprietary, even if part of that software is built from free software. Proprietary software is never safe for users. Its safety is for the proprietor—what the program allows the proprietor to do to the users.

Apparently memories around here are so short people can't remember when researchers showed Apple can read iMessages [slashdot.org] anytime Apple wants, and the users have no idea which messages are being read. Whether anyone at Apple reads someone's iMessages is a detail.

The point you fail to understand is that with software that respects a user's freedom, one doesn't need to wait for someone else to fix the bug for them and then hope that bug actually gets fixed when the ostensible fix is released. Users running nothing but free software have options to fix any bug and verify that fix which proprietary software disallows.

The rest of your statement is a form of false dichotomy—arguing from perfection. I never said anything was perfect.

The bug is that the CN hostname from the certificate is not verified. So it's possible to use your own website's SSL cert as a cert for any other site, and Apple devices will accept it, no questions asked. Of course, to exploit it, you'd need to modify a tool like webmitm to serve a fixed certificate.

Very, very dangerous. It seems to be a result of switching away from OpenSSL, although details are still sketchy.

I was curious about Apple deprecating OpenSSL so went looking into it.

Apparently OpenSSL doesn't keep a stable API between versions, so developers should either link it statically (ensuring eventual security problems if they ship out-of-date versions) or use something else; with dynamic linking, Apple's software updates would update OpenSSL and break apps using it.

Apple never "switched away" from openssl, they shipped their own implementation with the very first version of Mac OS X. They only packaged openssl with the system for other apps to use. I actually rewrote the XMPP encryption stuff in Adium to use the security framework instead of openssl way back in 2007, since that allowed me to use the built-in system dialogs for presenting certificates.

That's not the bug at all. It has nothing to do with hostname verification whatsoever. In fact, traditional RSA+AES CBC connections are not affected at all. Only ephemeral, or so-called "perfect forward secrecy" key exchanges are affected.

However because cipher choice and handshake choice are left to the server (picking from a set that the client advertises) in the protocol, a malicious server can always pick the vulnerable key exchange and exploit it.

?! Any decent compiler would throw up a warning for unreachable code. I know the whole "many eyes" thing is a myth, but an audit of any open source code will usually pick up trivial problems like this; this one doesn't even make sense unless Apple's coding practices are so poor that no sensible person would touch their products.

Uhm... so you don't know what's broken when you've been given a link to the source in question? But on Android you magically do?

So can you only read source when it's for Android, or are you just too stupid to realize that the source code in question here is VERY open... and fucking linked a few posts above you, for fuck's sake.

So does Clang in current versions of Xcode. No one uses GCC on Apple platforms anymore, so it's really irrelevant. We've moved on to compilers that don't suck ass. GCC remains for fanboys, but that's about it.

The person you're replying to hasn't used Xcode for a very long time apparently.

Clang bitches about any silly coding mistake pretty much, not just unreachable code due to a return or some sort of jmp over code, but also warnings about things such as not catching every case in a switch on an enum.

As does gcc with the right flags, which detect "unreachable code". That can often generate a storm of warnings that require a great deal of additional time to sort through and resolve, and can be different depending on compile time definitions. So it's not often used.

Sorry, it's not there. It's silently ignored, which is the WORST POSSIBLE SOLUTION to the problem. Ugh.

The -Wunreachable-code has been removed, because it was unstable: it
relied on the optimizer, and so different versions of gcc would warn
about different code. The compiler still accepts and ignores the
command line option so that existing Makefiles are not broken. In some
future release the option will be removed entirely.

The linked file uses braces for some single-line ifs and omits them for others... that is a really bad sign in a developer, in my opinion. It means they aren't enforcing style guidelines, and that makes code a fuckton harder to read. I can adapt to either style fairly easily, but when you mix them my mind gets wonky; it tends to find the result ambiguous too!

If you look at the actual source and think a moment, it's pretty clear and isn't entirely illogical, though I admit, it is broken.

The snippet you are replying to looks stupid standing on its own; in context it doesn't stand out nearly as much. And there is absolutely no way you can tell from that snippet whether there is any unreachable code: you have no idea from the snippet what's after those gotos or what those gotos do.

Curious. This would seem to result in a failure every time. Without reading the code further - how could auth ever succeed? Or did it ignore the failure return code and rely on the hash update results anyway?

Switching away from OpenSSL, which is widely used and has been audited across generations of releases, to homegrown crypto is a mistake on Apple's part. This is most certainly not the last security flaw in their code we will see.

"goto fail" is misleading, imho. It's more "goto finally". After cleaning up, the function returns the current value of err, which would after the unconditional "goto fail" be 0, i.e. success.

Yes. Writing code is easy (I do it every day). Writing security-related code is moderately easy if you're mathematically minded. Writing robust security-related code is fucking hard - not just because of all the implementation details, but because humans repeatedly make schoolboy errors. It's just how we are. If your c

The return code from the function is "err", which is the result of the last test that was done inside the function. The second goto is only reached if err is 0 (success), so at that point the function returns success.

Using goto for error handling in C is a well-established idiom [cert.org], and is often the best option available. The alternatives either involve deeply nested if statements, or duplicated cleanup code.

Apple's strategy is to release an OS, wait a bit, then start releasing small incremental updates that ought to have been in the initial release in the first place. I suspect they do this on purpose in order to confound the jailbreakers.

Apple's strategy is to test, test, test and then test the best they can, release multiple beta versions to developers for testing, take a very long time to release new versions, and then patch the missed bugs that show up as fast as they can. Pretty much the way any professional software house does business.

Or are you of the opinion that it's possible to release such a massive amount of code totally bug-free?

The problem is - and we've seen this before - Apple is unnecessarily reinventing the wheel in certain cases. It's the same general problem Microsoft ran into when they decided they wanted to develop their own completely in-house TCP stack earlier this millennium - you're starting from scratch, and you sometimes end up with code that exposes bugs that were patched in the time-tested tools years (or even decades) earlier. Or, as in this case, there are fewer cumulative sets of eyes reviewing your code.

1) I haven't seen GOTO statements since my GWBASIC days, and I've surely never seen this many.
2) I really like one-liners for if statements in Ruby: "do_this if x==1"
3) Two-liners for C if statements without curly braces feel wrong, are dangerous and hard to read
4) http://xkcd.com/292/ [xkcd.com]
5) GOTO 1

1) I haven't seen GOTO statements since my GWBASIC days, and I've surely never seen this many.

Then you don't write much low level C code. Error handling in C is possibly the only place where goto statements are useful, because the alternative is deeply nested if statements. Other languages solve it by using try-catch-finally blocks.