Tag: security

If Apple can blow it, so too can the rest of us. That’s why a layered defensive approach is necessary.

When we talk about secure platforms, one name has always risen to the top: Apple. Apple’s business model for iOS has repeatedly been shown to deliver better security results than its competitors’. In fact, Apple’s security model is so good that governments feel threatened enough by it to make repeated calls for some form of back door into its phones and tablets. CEO Tim Cook has repeatedly taken the stage to defend that strong protection, and I personally have friends who take this stuff so seriously that they lose sleep over some of the design choices that are made.

And yet just this last week, we learned of a vulnerability that was as easy to exploit as typing “root” twice to gain privileged access.

Wait. What?

Ain’t no perfect.

If the best and the brightest of the industry can occasionally have a flub like this, what about the rest of us? I recently installed a single sign-on package from Ping Identity, a company whose job it is to provide secure access. This simple application, which generates cryptographic sequences of numbers to be used as one-time passwords, weighs in at over 70 megabytes and includes an entire Java runtime environment (JRE). How many bugs remain hidden in those hundreds of thousands of lines of code?
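The core function such a package performs need not be large at all: the standard TOTP algorithm (RFC 6238) fits in a handful of lines. A minimal sketch using only the Python standard library; the secret below is the published RFC 6238 test key, not a real credential:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, now: float, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over a 30-second counter, dynamically truncated."""
    key = base64.b32decode(secret_b32)
    counter = int(now // period)                 # which 30-second window we are in
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key ("12345678901234567890", base32-encoded) at T=59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))   # 287082
```

That this can be done in roughly a dozen lines is exactly what makes a 70-megabyte package worrying: every additional line is additional attack surface.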

Now enter the Internet of Things, where manufacturers of devices that have not traditionally been connected to the network have no decades of security expertise to draw on. What sort of problems lurk in each and every one of those devices?

It is simply not possible to assure perfect security, and because computers are designed by imperfect humans, all these devices are imperfect. Even devices that we believe are secure today will have vulnerabilities exposed in the future. This is one of the reasons why the network needs to play a role.

The network stands between you and attackers, even when devices have vulnerabilities. The network is best positioned to protect your devices when it knows what sort of access a device needs to operate properly. That’s the case for your washing machine, which needs to talk to very few systems. But even for your laptop, where you might want to access whatever you want, whenever you want, through whatever system you wish, informing the network makes it possible to stop the communications that you don’t want. To be sure, endpoint manufacturers should not rely solely on network protection. Devices should be built with as much protection as is practicable and affordable. The network provides an additional layer of protection.
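The shape of that decision can be sketched as a tiny per-device policy filter. Everything here, device names, hostnames, and the policy format, is invented purely for illustration; a real deployment would express policy in a standard form and enforce it at the access network:

```python
# Hypothetical per-device communication policies: a washing machine needs
# almost nothing, while a laptop is deliberately left unconstrained.
POLICIES = {
    "washing-machine": {"updates.example-appliance.com"},  # manufacturer only
    "laptop": None,                                        # None = no restriction
}

def network_allows(device: str, destination: str) -> bool:
    """Return True if the network should permit this device-to-host flow."""
    allowed = POLICIES.get(device)
    if allowed is None:          # unconstrained device, or no policy known
        return True
    return destination in allowed

print(network_allows("washing-machine", "updates.example-appliance.com"))  # True
print(network_allows("washing-machine", "evil.example.net"))               # False
```

The point of the sketch is that the network can only make the second decision, refusing the washing machine’s traffic to an unrelated host, if it has been told what the device legitimately needs.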

Endpoint manufacturers thus far have not done a good job of making use of the network for protection. That requires a serious rethink, and Apple is the poster child for why: they are the best and the brightest, and they got it wrong this time.

Pew should evolve the questions they are asking and the advice they are giving based on how the threat environment is changing. But they should keep asking.

Last year, Pew Research surveyed just over 1,000 people to get a feel for how informed they are about cybersecurity. That’s a great idea, because it tells us as a society how well consumers are able to defend themselves against common attacks. Let’s consider some ways that this survey could be evolved, and how consumers can mitigate certain common risks. Keep in mind that Pew conducted the survey in June of last year, in a fast-changing world.

Several of the questions related to phishing, Wi-Fi access points, and VPNs. VPNs have been in the news recently because of the Trump administration’s and Congress’s backtracking on privacy protections. While privacy invasion by service providers is a serious problem, accessing one’s bank at an open access point is probably considerably less so, for two reasons. First, nearly all banks use TLS to protect communications. Attempts to fake a bank site by intercepting communications will, at the very least, produce a warning that browser manufacturers have made increasingly difficult to bypass. Second, many financial institutions provide mobile apps that take some care to validate that the user is actually talking to their service. In this way, these apps mark a significant reduction in phishing risk. Yes, the implication is that using a web browser on a laptop is a slightly riskier way to access your bank than the app it likely provides, and yes, there’s a question hiding there for Pew in its survey.
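Those browser warnings come from certificate validation, which modern TLS libraries perform by default. A minimal illustration with Python’s standard library, showing the two checks that make interception at an open access point visible:

```python
import ssl

# A default context refuses connections whose certificate chain does not
# validate, and checks that the certificate matches the hostname --
# exactly the checks that make an intercepted bank site produce a warning.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: chain must validate
print(ctx.check_hostname)                     # True: name on cert must match server
```

An interceptor at an open hotspot cannot present a certificate that passes both checks for your bank’s name, which is why the risk at the coffee shop is smaller than the headlines suggest.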

Another question on the survey refers to password quality. While this is something of a problem, there are two bigger problems hiding that consumers should understand:

Reuse of passwords. Consumers often reuse passwords simply because it’s hard to remember many of them. Worse, many password managers have themselves had vulnerabilities. Why wouldn’t they be attacked? It’s like the apocryphal Willie Sutton quote about robbing banks because that’s where the money is. With numerous break-ins, such as those that occurred at Yahoo! last year*, and others that have surely gone unreported or unnoticed, reuse of passwords is a very dangerous practice.

Aggregation of trust in smart phones. As recent articles about U.S. Customs and Border Protection demanding access to smart phones demonstrate, access to many services, such as Facebook, Twitter, and email, can be gained just by gaining access to the phone. Worse, because SMS and email are often used to reset user passwords, access to the phone itself typically means easy access to most consumer services.
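On the first of these points, it is worth seeing concretely why reuse is so dangerous. A toy sketch with the Python standard library: if two sites store the same unsalted hash of a reused password, one site’s breach immediately links the user’s account at the other; per-site salts defeat that matching, but nothing except a unique password stops an attacker who learns the plaintext from simply logging in everywhere:

```python
import hashlib
import os

password = b"correct horse battery staple"   # the same password, reused at two sites

# Unsalted hashes are identical across sites, so a breach at one site
# lets an attacker match accounts at the other.
site_a = hashlib.sha256(password).hexdigest()
site_b = hashlib.sha256(password).hexdigest()
print(site_a == site_b)   # True: trivially linkable across breaches

# Salted, iterated hashes (PBKDF2) differ per site -- but an attacker
# who recovers the plaintext can still reuse it everywhere.
salt_a, salt_b = os.urandom(16), os.urandom(16)
hash_a = hashlib.pbkdf2_hmac("sha256", password, salt_a, 100_000)
hash_b = hashlib.pbkdf2_hmac("sha256", password, salt_b, 100_000)
print(hash_a == hash_b)   # False: no cross-site linkage from the hashes alone
```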

One final area that requires coverage: as the two followers of my blog are keenly aware, IoT presents a whole new class of risk that Pew has yet to address in its survey.

The risks I mention were not well understood as early as five years ago. But now they are, and they have been for at least the last several years. Pew should keep surveying, and keep informing everyone, but they should also evolve the questions they are asking and the advice they are giving.

* Those who show disdain toward Yahoo! may find they themselves live in an enormous glass house.

When Edward Snowden disclosed the NSA’s activities, many people came to realize that network systems can be misused. That had always been the case; people simply saw what was possible. What happened next was a concerted effort to protect data from what has become known as “pervasive surveillance”. This included development of a new version of HTTP that is always encrypted, and an easy way to get certificates.

However, when end nodes hide everything from the network, not only can the network not be misused by the bad guys, it can no longer be used by the good guys to either authorize appropriate communications or identify attacks. An example is spam. Your mail server sits in front of you and can reject messages when they contain malware or are just garbage. It does that by examining both the source of a message and the message itself. Similarly, anyone who has read my writing about Things knows that the network needs just a little bit of information from a device in order to stop unwanted communications.
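Rejecting mail by source, the first of those checks, is commonly done with DNS blocklists: the connecting IP address is reversed and looked up in a reputation zone. A sketch with an injectable resolver so the example runs without network access; the zone name follows the real Spamhaus convention, but the resolver here is a stand-in:

```python
def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNSBL lookup name: reverse the octets, append the zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def source_is_listed(ip: str, resolve) -> bool:
    """A listed source resolves to an address in 127.0.0.0/8; NXDOMAIN
    (modeled here as the resolver returning None) means not listed."""
    return resolve(dnsbl_query_name(ip)) is not None

# Fake resolver standing in for a real DNS lookup.
fake_dns = {"2.0.0.192.zen.spamhaus.org": "127.0.0.2"}
print(dnsbl_query_name("192.0.0.2"))                    # 2.0.0.192.zen.spamhaus.org
print(source_is_listed("192.0.0.2", fake_dns.get))      # True: reject at connect time
print(source_is_listed("198.51.100.7", fake_dns.get))   # False
```

Note that this check needs nothing from the message body at all, which is exactly the “little bit of information” argument: a small, well-chosen signal lets the network stop a great deal of abuse without inspecting content.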

I have written an Internet Draft that begins to establish a framework for when and how information should be shared, the idea being that information should be shared carefully and with a purpose, with the understanding that there are risks involved in doing so. The attacks on Twitter and on krebsonsecurity.com are preventable, but preventing them requires us to recognize that end nodes are not infallible, and they never will be. Neither, by the way, are network devices. So long as all of these systems are designed and built by humans, that will be the case. Each can help the other, in good measure, to protect the system as a whole.

It’s a common belief that Apple has gone to extraordinary lengths to protect individuals’ privacy through mechanisms such as Touch ID, but what are its limits? Today Forbes reported that a U.S. attorney was able to get a warrant for the fingerprints of everyone at a particular residence for the express purpose of unlocking iPhones.

Putting aside the shocking breadth of the warrant, suppose you want to resist granting access to an iPhone. It is not that hard for someone to force your finger onto a phone; it is quite a different matter to force a password out of your head. Apple has gone to some lengths to limit certain forms of attack. For instance, Touch ID generally will not authenticate a severed finger, nor will it authenticate a copy of a fingerprint. Also, Apple doesn’t actually store fingerprint images, but rather hashes of the information collected to enroll fingerprints. Note that if the hashing method is known, then the hash itself is sensitive.
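That last point is easy to demonstrate: when the hashing method is known and the input space is small, the hash can simply be inverted by brute force. A toy example using a 4-digit PIN rather than fingerprint data:

```python
import hashlib

def hash_pin(pin: str) -> str:
    """The known, fixed hashing method an attacker can replicate."""
    return hashlib.sha256(pin.encode()).hexdigest()

leaked = hash_pin("4729")   # the "protected" stored value

# Knowing the method, an attacker recovers the input in a single pass
# over all 10,000 possible PINs -- milliseconds of work.
recovered = next(p for p in (f"{i:04d}" for i in range(10_000))
                 if hash_pin(p) == leaked)
print(recovered)   # 4729
```

A hash is only as protective as the input is unguessable; leaking the hash of anything drawn from a small or structured space is close to leaking the thing itself.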

For those who care, the question is what lengths someone is likely to go to in order to gain access to a phone. Were someone holding a gun to my head and demanding access to my phone, unless it meant harming my family, I’d probably give them the information they wanted. Short of that, however, I might resist, at least long enough to have my day in court. If that would be your approach, then you might want to skip Touch ID, lest someone simply get rough with you to get your fingerprint. The problem is that Touch ID currently cannot be required in combination with a pass code on iPhones and iPads; either suffices. And this goes against a basic concept of two-factor authentication: combine something you know, like a pass code, with something you are, like a fingerprint.

It is not clear what capabilities Yahoo! already has, but it would not be unreasonable to expect them to be able to scan incoming messages for spam and malware, for instance. What’s more, we are all the better for this sort of capability. Consider that around 85% of all email is spam, a small amount of which contains malware, and Yahoo! users don’t see most of it. Much of that can be rejected, without Yahoo! having to look at the content, just by examining the source IP address of the device attempting to send Yahoo! mail; but in all likelihood they do look at some content, as many systems do. In fact, one of the most popular open-source systems of the early days, SpamAssassin, did just this. The challenge from a technical perspective is to implement such a mechanism without the mechanism itself presenting a large attack surface.
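SpamAssassin-style content scanning works by summing weighted rule scores and comparing the total against a threshold (5.0 is SpamAssassin’s default). A toy sketch; the rules and weights below are invented for illustration, while real deployments ship thousands of rules covering headers, body, and source checks:

```python
import re

# Hypothetical rules: (pattern, score). Invented for this example.
RULES = [
    (re.compile(r"viagra", re.I), 3.0),
    (re.compile(r"act now", re.I), 1.5),
    (re.compile(r"100% free", re.I), 2.0),
]
THRESHOLD = 5.0   # SpamAssassin's default required score

def spam_score(message: str) -> float:
    """Sum the scores of every rule that matches the message."""
    return sum(score for pattern, score in RULES if pattern.search(message))

def is_spam(message: str) -> bool:
    return spam_score(message) >= THRESHOLD

print(is_spam("VIAGRA! Act now, 100% free!"))   # True (score 6.5)
print(is_spam("Lunch at noon?"))                # False
```

The scanner necessarily reads every message, which is precisely why the mechanism itself becomes such an attractive target: whoever controls it can look for anything.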

If the government asking for certain messages sounds creepy, we have to ask what a signature is. A signature normally refers to characteristics of a communication that either identify its source or establish that it has some quality. For instance, viruses all have signatures. In this case, the claim is that terrorists communicated in a certain way such that they could be identified. According to The Times, the government demonstrated probable cause that this was true, and that the signature was “highly unique”*. That is, the signature likely matches very few actual messages that the government would see, although we don’t know how small that number really is. Yahoo! has denied having the capability to scan across all messages in its system, but beyond that not enough is public to know what it would have done. It may well not have been reasonable to search specific accounts, because one can easily create an account, and the terrorists may have had many. For the government to publicly reveal either the probable cause or the signature would be tantamount to alerting the terrorists that they are in fact under investigation, and that they can be tracked.

The risk to civil liberties is that there are no terrorists at all, and this is just a fishing expedition, or worse, persecution of some form. The FISC and its appellate courts are intended to provide some level of protection against abuse, but in all other cases the public has a view into whether such abuse is actually occurring. Many have complained about a lack of transparent oversight of the FISC, but the question is how to have that oversight without alerting The Bad Guys.

The situation gets more complex if one considers that other countries will want the same right to demand information from their mail service providers that the U.S. enjoys, as Yahoo!’s own transparency report demonstrates.

In short, we are left with a set of difficult compromises that pit the gathering of intelligence on terrorists and other criminals against the risk of government abuse. That’s not Yahoo!’s fault. This is a hard problem that requires thoughtful consideration of these trade-offs, and the timing is right to think about this: once again, the Foreign Intelligence Surveillance Act (FISA) will be up for reauthorization in Congress next year. And in this case, let’s at least consider the possibility that the government is trying to fulfill its responsibility of protecting its citizens and residents, and that Yahoo! is trying to be a good citizen, looking at each individual request on its merits and in accordance with relevant laws.

* No I don’t know the difference between “unique” and “highly unique” either.