At least six months after being alerted to a possible exploit in the way users connect to its App Store, Apple has added encryption to users' connections, plugging the security hole.

Google security researcher Elie Bursztein said on his blog that he alerted Apple to the potential for attack in July of last year, with the flaw affecting users connected to the App Store, reports CNet. The vulnerability Bursztein pointed out was made possible by Apple's use of the non-encrypted HTTP protocol instead of HTTPS for certain parts of communication with the App Store.

Bursztein pointed out that, in theory, a malicious network attacker could exploit the use of HTTP to steal user passwords, force users to install a specific app instead of the one they were looking for, trick users into downloading fake app upgrades, prevent application installation, or scan the apps on a user's device.

Bursztein published a number of videos detailing how the attacks might work, as well as additional technical details on the attack methodology earlier this year.

In an Apple Web Server notifications update published on Feb. 23, Apple addressed the issue. Active content is now served over HTTPS by default. Apple acknowledged Bursztein for pointing out the issue, as well as Bernhard Brehm of Recurity Labs and Rahul Iyer of Bejoi LLC.

The App Store has periodically been the target of attacks in the past. In 2010, Apple tightened security for the online marketplace following incidences of account fraud that saw some users hit with several hundred dollars worth of erroneous charges.

In April of 2012, Apple again updated its security protocols, adding a measure requiring users to fill out security questions to be associated with their accounts. In the event that a user signs on from a new device, iTunes and the App Store now requires them to answer the questions in order to verify their identity.

Apple has been impressively responsive to security threats lately.

This is a low-risk vulnerability:

App developers can encrypt their data using an API Apple provides.
The listener must be on the same (likely public) Wi-Fi network.
The HTML would need to be prepared in advance, given that the request is transient.

Why would simply enabling SSL take so long? Not saying it was high risk, but it doesn't seem complicated either.

Implementing SSL raises potential issues that must be considered and possibly resolved, such as SSL through NAT. One must also consider that the code must remain compatible with (or at least not break) multiple versions of iOS, Mac, and PC software.

I can only speak to my experience but my experience is pretty typical. Here are some of the steps taken to resolve a defect in a software product:

1. Defect reported (security vulnerability)
2. Defect verified and reproduced
3. Defect forwarded to Defect Management Team
4. Defect Management Team needs more information (triage delayed until next week)
5. Defect Management Team forwards to iTunes team
6. iTunes team forwards to Security team
7. Security team reproduces
8. Security team evaluates and recommends a "low threat" status with "low effort"
9. Defect Management Team triages based on Security team's recommendation (now one week later) (two weeks total)
10. Eventually, the defect reaches the high-water mark (twelve weeks, or four development cycles, later) (fourteen weeks total)
11. Security team is scheduled
12. QA team is scheduled
13. Security team resolves the defect (delivered to QA in a revision three weeks later per the development cycle) (seventeen weeks total)
14. QA team tests the defect (testing is complete three weeks later per the development cycle) (twenty weeks total)
15. Revision is delivered to the iTunes NOC with a quarterly revision (twelve weeks later) (thirty-two weeks total)
16. iTunes NOC schedules the revision deployment for six weeks later (based on the low threat status)
17. iTunes NOC deploys new version of iTunes server to hundreds of servers
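The cumulative totals in those steps can be sanity-checked with a quick tally. This is just a sketch: the per-stage week counts are taken from the list above, and the final deployment in step 17 is assumed to add no extra weeks.

```python
# Tally the hypothetical defect-resolution timeline from the steps above.
# Each entry is (milestone, weeks elapsed since the previous milestone).
timeline = [
    ("triage delayed, then completed", 2),       # steps 4 and 9
    ("defect reaches the high-water mark", 12),  # step 10
    ("security team delivers the fix", 3),       # step 13
    ("QA completes testing", 3),                 # step 14
    ("quarterly revision reaches the NOC", 12),  # step 15
    ("NOC schedules deployment", 6),             # step 16
]

total = 0
for milestone, weeks in timeline:
    total += weeks
    print(f"{milestone}: {total} weeks total")
```

Running the tally lands at thirty-eight weeks from first report to deployment, which is consistent with the step-by-step totals in the list.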

Quote:

There are potential issues with implementing SSL and such potential issues must be considered and possibly resolved such as SSL through NAT.

Still, think how many transactions a site like Amazon has to deal with. Encrypted communication with a store should have been in place at all times. It's about time all internet communication was encrypted by default. I think IPv6 was supposed to help here, but I don't know why they had to make IPv6 addresses so complex. I don't think we need that many possible addresses. It really should have been easy enough to tack a 4-digit hex code onto the start of IPv4 to give 281 trillion possible addresses, e.g. fefe:198.27.0.12, with existing addresses starting with ffff:, as opposed to FE80:0000:0000:0000:0202:B3FF:FE1E:8329 -- or even just use 3 or 4 hex codes, like fefe:8bbe:9001.
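The arithmetic behind the 281 trillion figure checks out: a 4-hex-digit prefix adds 16 bits to IPv4's 32, and 2^48 is about 2.8 x 10^14. A quick check:

```python
# A 4-hex-digit prefix (16 bits) prepended to an IPv4 address (32 bits)
# yields a 48-bit address space.
prefix_bits = 16
ipv4_bits = 32
combined = 2 ** (prefix_bits + ipv4_bits)
print(combined)  # 281474976710656, i.e. ~281 trillion

# For comparison, IPv6's 128-bit space is 2**80 times larger still.
print(2 ** 128 // combined)
```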

Before the change to HTTPS, all you would need to find a password is Wireshark and a shared internet connection: run Wireshark (capturing packets in promiscuous mode) on your computer while using the App Store, then look through the protocol dump for the password.

I did this about ten years ago when I forgot my email password. (It does not work with webmail, because that is encrypted.)
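Reading a protocol dump by hand amounts to scanning raw bytes for a credential field. Here's a minimal sketch, using an entirely made-up unencrypted login payload (not an actual App Store or mail capture); with TLS, a listener would see only ciphertext instead:

```python
import re

# Hypothetical captured TCP payload from an unencrypted HTTP login.
captured_payload = (
    b"POST /login HTTP/1.1\r\n"
    b"Host: mail.example.com\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n\r\n"
    b"user=alice&password=hunter2"
)

# A passive listener can pull the credential straight out of the bytes.
match = re.search(rb"password=([^&\s]+)", captured_payload)
if match:
    print("password visible in the clear:", match.group(1).decode())
```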

Quote:

Still, think how many transactions a site like Amazon has to deal with. Encrypted communication with a store should have been in place at all times. It's about time all internet communication was encrypted by default. I think ipv6 was supposed to help here but I don't know why they had to make ipv6 addresses so complex.

I agree that SSL should be used by default, or at least for sending login credentials. Google got in some hot water a few years ago for sending authentication tokens in the clear from the contacts and calendar app on Android (and, as I recall, the commenters on AI were pretty merciless about that issue; funny how many of them seem to have changed their tune when it's Apple in the hot seat).

I'm not sure of any issue with SSL and NAT. As far as a client-side NAT router is concerned, it's just another TCP stream. Server-side, there could be issues if Apple chose to terminate SSL on a load-balancer that was doing NAT, but there are alternate configurations that wouldn't have issues.

Regarding IPv6, the addressing scheme really isn't that complex. It's essentially a 64-bit network identifier and a 64-bit host identifier (if you use the recommended /64 subnetting). The host ID was made so large to give flexibility -- it can be generated from a MAC address or bound to a cryptographic key. Further, I don't see how your proposed address format would have been easier to implement than IPv6 -- once you change the address size and textual representation you effectively have to update every device on the Internet.
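The MAC-derived host ID works roughly like this (a sketch of modified EUI-64 derivation: flip the universal/local bit of the first byte and insert ff:fe in the middle). The FE80::0202:B3FF:FE1E:8329 example quoted earlier in the thread corresponds to the MAC address 00:02:B3:1E:83:29:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a MAC address."""
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02          # flip the universal/local bit
    octets[3:3] = b"\xff\xfe"  # insert ff:fe between the OUI and NIC halves
    return ":".join(f"{octets[i] << 8 | octets[i + 1]:04x}"
                    for i in range(0, 8, 2))

print(eui64_interface_id("00:02:b3:1e:83:29"))  # 0202:b3ff:fe1e:8329
```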

Quote:

Regarding IPv6, the addressing scheme really isn't that complex. It's essentially a 64-bit network identifier and a 64-bit host identifier (if you use the recommended /64 subnetting). The host ID was made so large to give flexibility -- it can be generated from a MAC address or bound to a cryptographic key.

It's harder to deal with hex codes manually. Imagine if they changed telephone numbers to IPv6 format because we ran out of numbers; instead, they just add a few digits. IPv6 is clearly more difficult to deal with manually.

The use of NAT is also discouraged, which means everyone is encouraged to use an address that directly identifies a computer on the internet. It's worse that the address can be generated from the MAC address, as that is a privacy concern, even though steps are taken to avoid it.

It's crazy that they'd even do that - essentially hand every website on the internet a permanent, global tracking cookie.

While avoiding NAT will help with protocols like FaceTime or Back to My Mac, it means that home, business, and university computers will get a directly addressable IP, which is far easier to attack than one behind a NAT. People keep saying this is a myth, but NAT does provide a very easy layer of security: if you have file sharing enabled, nobody can reach your share using the public IP that websites see, because the router doesn't know how to forward the request. Same with network printers. With NAT, you can give a printer a unique, static, internal IP that's easy to find locally but invisible externally.

Obviously people can and will have to set up routers and firewalls to stop this happening, but NAT makes attacks harder, and if people resort to using NAT anyway, what was the point of a 128-bit address? Wouldn't a 64-bit address have sufficed in either case -- 1.8x10^19 addresses, with a 48-bit network ID and a 16-bit host ID?
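For what it's worth, the 1.8x10^19 figure for a 64-bit space is easy to verify:

```python
# A 64-bit address space, split as a 48-bit network ID and a 16-bit host ID.
networks = 2 ** 48           # ~2.8 x 10^14 networks
hosts_per_network = 2 ** 16  # 65,536 hosts on each
total = networks * hosts_per_network
print(f"{total:.1e}")  # 1.8e+19, the figure mentioned above
```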

Anyway, it's done now so I guess we just have to live with it but I think it's going to hold back adoption:

Just over 1% adoption in over three years doesn't look very good. If it was clear how to make the transition, we'd probably all have encrypted communication by default. Instead, a switch just gets flipped, and suddenly people have to deal with network addresses that can't be managed manually and need entirely new tools and firewall rules -- so the inevitable next step is that they look for ways to turn it off.

I have to disagree with much of that. I've been an IPv6 deployment lead at my employer for several years, and, frankly, it's not that hard to work with.

NAT is not a security technology. It was never designed to be. IPv6 doesn't change that. The use of EUI-64 addresses isn't really a privacy concern, and the so-called privacy addresses don't actually increase privacy while making network management significantly more complicated. The idea with "privacy" addresses is to generate a pseudo-random 64-bit host ID rather than deriving it from the MAC address. But no one really has anonymous Internet access -- your ISP (which assigns your network ID) certainly knows who you are. And the widespread use of tracking cookies and browser fingerprinting removes any benefit to pseudo-random IP addresses. It's just handwaving.

Further, many university computers already have publicly routable IPv4 addresses (I should know, I work at one). If that concerns you, then you should use actual security technologies to mitigate your risks. Or you can use an internally routable IPv6 address (called a ULA) for printers, management interfaces, etc.

IPv6 adoption is growing. Verizon's entire LTE network supports IPv6, for example. In BGP, IPv6 adoption by ASes has grown superlinearly for years.

Quote:

Originally Posted by derekmorr

I have to disagree with much of that. I've been an IPv6 deployment lead at my employer for several years, and, frankly, it's not that hard to work with.

Don't you ever have to manually type in the long addresses? Compared to IPv4, would you rate it harder, easier, or the same to deal with on a daily basis? The same question applies to developing effective firewall rules. Won't the ubiquity of dedicated IPs make DDoS and spam even worse problems, because it reduces the effectiveness of blacklists?

Quote:

Originally Posted by derekmorr

NAT is not a security technology. It was never designed to be. IPv6 doesn't change that.

Say that an average computer buyer with file sharing enabled gets an IPv6 setup and signs up to a website service and uses the same password as they do for their admin account on their computer and the server logs their IPv6 address. Then say that this database is hacked and stolen. The person with the stolen database would potentially be able to directly access that person's hard drive using that information. Behind NAT, which is the usual setup over wifi, they wouldn't.

Obviously it's insecure of people to do that in the first place but publicly routable IP addresses as standard does change this and makes things less secure, regardless of the original security being unintentional.

Quote:

Originally Posted by derekmorr

The use of EUI-64 addresses isn't really a privacy concern, and the so-called privacy addresses don't actually increase privacy while making network management significantly more complicated. The idea with "privacy" addresses is to generate a pseudo-random 64-bit host ID rather than deriving it from the MAC address. But no one really has anonymous Internet access -- your ISP (which assigns your network ID) certainly knows who you are. And the widespread use of tracking cookies and browser fingerprinting removes any benefit to pseudo-random IP addresses. It's just handwaving.

I agree that the ISP knows most of what servers are being connected to but it's unlikely that they will care as they deal with too much traffic. It's a privacy concern when that information is placed in the control of any website owner. Tracking cookies can be removed far more easily than the MAC address of a machine and they don't have to ask permission to use IP information because they aren't storing anything on your computer.

Copyright law enforcement will love being able to id hardware from an IP address though and advertisers will no longer have to jump through hoops trying to id computers over long periods of time. I don't personally think that tracking cookies are all that harmful but I can see a lot of harm in giving website owners hardware identifiers.

Quote:

Originally Posted by Marvin

Don't you ever have to manually type in the long addresses? Compared to IPv4 would you rate it harder, easier or the same to deal with on a daily basis?

Rarely. When I do, I can usually copy and paste them, or use DNS. Also, not all IPv6 addresses are long. For example, here's one: 2610:8:6800:1::35. In IPv6, if you have a run of contiguous zeros you can write it as a double colon ( :: ). The fully expanded form of that address is 2610:0008:6800:0001:0000:0000:0000:0035, but who wants to type all of that?
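Python's standard ipaddress module will show both forms of that address, which is handy when comparing compressed and fully expanded notation:

```python
import ipaddress

# Parse the compressed form and print both textual representations.
addr = ipaddress.ip_address("2610:8:6800:1::35")
print(addr.compressed)  # 2610:8:6800:1::35
print(addr.exploded)    # 2610:0008:6800:0001:0000:0000:0000:0035
```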

Quote:

Originally Posted by Marvin

Won't the ubiquity of dedicated IPs make DDoS and spam even worse problems, because it reduces the effectiveness of blacklists?

There have been some problems with v6 RBLs for spam, but there are better options for spam control than that. Honestly, I've been running native IPv6 in production on public-facing services for about 7 years and this hasn't been an issue.

Quote:

Originally Posted by Marvin

Say that an average computer buyer with file sharing enabled gets an IPv6 setup and signs up to a website service and uses the same password as they do for their admin account on their computer and the server logs their IPv6 address. Then say that this database is hacked and stolen. The person with the stolen database would potentially be able to directly access that person's hard drive using that information. Behind NAT, which is the usual setup over wifi, they wouldn't.

Only if the user isn't running a firewall or has added an exception opening their file-sharing service to the world. This is discussed in more detail in RFCs 4864, 6092, and 6204. Essentially, your home v6 router should run a firewall.

Quote:

Originally Posted by Marvin

I don't personally think that tracking cookies are all that harmful but I can see a lot of harm in giving website owners hardware identifiers.

Well, then you'll be happy to know that current versions of OS X, iOS, Android, and Ubuntu have "privacy" addresses enabled by default. Just be aware that there's no real privacy -- your ISP assigns the front 64 bits of your address. You can randomize the back 64 bits all you like; if push comes to shove, someone can get a warrant, contact your ISP, and find out who you are. Likewise, browser fingerprinting has a pretty good success rate.
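The "privacy" addresses discussed here just swap the MAC-derived lower 64 bits for a random value, in the spirit of RFC 4941 temporary addresses. A rough sketch of the idea -- the /64 prefix is a made-up documentation example, and as noted above the prefix still identifies you to your ISP:

```python
import secrets

def privacy_address(prefix: str) -> str:
    """Append a pseudo-random 64-bit interface ID to an ISP-assigned /64 prefix.

    `prefix` is the first four 16-bit groups of the address, colon-separated.
    """
    hostid = secrets.randbits(64)
    groups = [f"{(hostid >> shift) & 0xFFFF:04x}" for shift in (48, 32, 16, 0)]
    return prefix + ":" + ":".join(groups)

# Example using the documentation prefix 2001:db8::/32 (hypothetical).
print(privacy_address("2001:db8:1234:5678"))
```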

Quote:

Listener must be on the same (likely public) Wi-Fi network.
The HTML would need to be prepared in advance considering that the request is temporary.

I'm not sure that is the case. Open networks send data unencrypted, so it can be listened to; you don't need to join the network to see the packets.

Or you can potentially spoof the access point and man-in-the-middle the client. It really depends on how you want to exploit the flaw. It may be enough to just grab passwords and then buy a few apps or make some purchases at the local Apple Store, assuming you can get the account set up on a device. I guess you could also set up a malicious server to automatically send data to the client; it's trivial to process the request and send back whatever response you like.

If MacBook Pro's 17 steps are what it takes to get these things fixed, Apple should rethink how it deals with security holes. Once step 3 is reached, it's pretty clear that SSL should be enabled by default: pass it to the iTunes team to enable, test, and deploy. It makes me wonder how many of these issues go unreported and get sold to and used by the black hats.