Category Archives: information security


A few cards are not straightforward to apply to a webapp situation (some seem to assume a proprietary client). Do you recommend discarding them, or perhaps you've thought of a way to rephrase them somehow?

For example:

“An attacker can make a client unavailable or unusable but the problem goes away when the attacker stops”

I don’t have a great answer, but I’m thinking someone else might have taken it on.

For Denial of Service attacks in the Microsoft SDL bug bar, we roughly break things down into a matrix of (server, client, persistent/temporary). That doesn't seem right for web apps. Is there a better approach, and perhaps even one that can translate into some good threat cards?

Yesterday, AT&T announced Encrypted Mobile Voice. As CNet summarizes:

AT&T is using One Vault Voice to provide users with an application to control their security. The app integrates into a device’s address book and “standard operation” to give users the option to encrypt any call. AT&T said that when encryption is used, the call is protected from end to end.

AT&T Encrypted Mobile Voice is designed specifically for major companies, government agencies, and law enforcement organizations. An AT&T spokesperson said it is not available to consumers. The technology is available to users running BlackBerry devices or Windows Mobile smartphones, and it works in 190 countries.

What’s funny (sad) about this is that there are a number of software encrypted voice systems available. They include RedPhone, CryptoPhone and zFone. Some of these even work on pocket sized computers with integrated radios. But Apple and AT&T won’t let you install alternate voice applications.

A lot of people claim that these restrictions on what you can do with your device just don’t matter very much. That you can really get everything you need. But here’s a clear example of why that isn’t so. Voice encryption is a special app that you have to get permission to run.

Now, maybe you don’t care. You’re “not doing anything wrong.” Well, Hoder wasn’t doing anything wrong when he went to Israel and blogged about it in Farsi. But he’s serving 20 years in jail in Iran.

Now is the time we should be building security in. Systems that prevent you from doing so, or systems that reset themselves to some manufacturer designated default are simply untrustworthy. We should demand better, more trustworthy products or build them ourselves.

[Added: I’d meant to include a comment about Adam Thierer’s question, “The more interesting question here is how “closed” is the iPhone really?” I think the answer is, in part, here. There’s a function, voice privacy, which AT&T and three other companies think is marketable. And it doesn’t exist on the iPhone OS, which is the 2nd most prevalent phone platform out there.]

[Update 2: Robert and Rob rob me of some of my argument by pointing out that AT&T now allows you to install voice apps, but none of the encrypted voice apps that I’d consider trustworthy are available. (I exclude Skype and their proprietary & secret designs from trustworthy; it’s probably better than no crypto until you trust it, and then it’s probably not good enough to really protect you.) Maybe this is a result of the arbitrary rejections by the Apple app store, but when I look for zfone, redphone or cryptophone, I see a fast dial app and some games. When I search for crypto, it’s all password managers. So while I’m no longer sure of the reason, the result remains. The iPhone is missing trustworthy voice crypto, despite the market.]

The first sentence, “use crypto” is a simple one. It means more security requires getting away from sending strings as a way to authenticate people at a distance. This applies (obviously) to passwords, but also to SSNs, mother’s “maiden” names, your first car, and will apply to biometrics. Sending a string which represents an image of a fingerprint is no harder to fake than sending a password. Stronger authenticators will need to involve an algorithm and a key.
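To make “an algorithm and a key” concrete, here’s a minimal sketch of one such approach: a keyed challenge-response using HMAC, where the client proves it holds the key without ever transmitting a reusable string. This is an illustrative example, not anything from the post itself; all names are made up.

```python
import hashlib
import hmac
import secrets

# Shared secret, provisioned out of band (a stand-in for a real
# enrollment step; illustrative only).
KEY = secrets.token_bytes(32)

def make_challenge() -> bytes:
    """Server picks a fresh random nonce for each login attempt."""
    return secrets.token_bytes(16)

def respond(key: bytes, challenge: bytes) -> bytes:
    """Client proves possession of the key; the key itself never travels."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = respond(KEY, challenge)
assert verify(KEY, challenge, response)
# A captured response is useless against a fresh challenge,
# which is exactly what replaying a stolen password is not.
assert not verify(KEY, make_challenge(), response)
```

Unlike a password, SSN, or fingerprint image, intercepting one response gives an attacker nothing they can replay.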

The second, “not too confusing” is a little more subtle, because there are layers of confusing. There’s developer confusion as the system is implemented, adding pieces, like captchas, without a threat model. There’s user confusion as to what program popped that demand for credentials, what site they’re connecting to, or what password they’re supposed to use. There’s also confusion about what makes a good password when one site demands no fewer than 10 characters and another insists on no more. But regardless, it’s essential that a strong authentication system be understood by at least 99% of its users, and that the authentication is either mutual or resistant to replay, reflection and man-in-the-middle attacks. In this, “TOFU” is better than PKI. I prefer to call TOFU “persistence” or “key persistence.” This is in keeping with Pollan’s belief that things with names are better than things with acronyms.
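Key persistence is simple enough to sketch in a few lines. This toy version mirrors the idea behind SSH’s known_hosts file: remember a peer’s key fingerprint the first time you see it, and complain loudly if it ever changes. It’s a hedged illustration, not a real implementation; the names are mine.

```python
import hashlib

# Toy "key persistence" (TOFU) store, keyed by hostname.
known_keys: dict[str, str] = {}

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

def check_peer(host: str, public_key: bytes) -> str:
    fp = fingerprint(public_key)
    if host not in known_keys:
        known_keys[host] = fp          # trust on first use, then persist
        return "new: trusted and persisted"
    if known_keys[host] == fp:
        return "known: matches persisted key"
    return "MISMATCH: possible man-in-the-middle"

assert check_peer("example.com", b"key-A").startswith("new")
assert check_peer("example.com", b"key-A").startswith("known")
assert check_peer("example.com", b"key-B").startswith("MISMATCH")
```

The user-facing question (“this key changed, continue?”) is where the “not too confusing” requirement bites: the mechanism is trivial, explaining the mismatch is not.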

Finally, “mostly asymmetric.” There are three main building blocks in crypto. They are one way functions, symmetric and asymmetric ciphers. Asymmetric systems are those with two mathematically related keys, only one of which is kept secret. These are better because forgery attacks are harder; because only one party holds a given key. (Systems that use one way functions can also deliver this property.) There are a few reasons to avoid asymmetric ciphers, mostly having to do with the compute capabilities of really small devices like a smartcard or very power limited devices like pacemakers.
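The “two mathematically related keys, only one of which is kept secret” structure can be seen in textbook RSA with toy numbers. This is strictly illustrative: real RSA needs padding and keys thousands of bits long, and the numbers here (a standard textbook example) are breakable by hand.

```python
# Textbook RSA with tiny primes, to show the asymmetric-key structure.
# DO NOT use this for anything real: no padding, trivially small numbers.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent: anyone may know (e, n)
d = pow(e, -1, phi)            # private exponent: only one party holds d

message = 65
ciphertext = pow(message, e, n)     # anyone can encrypt to the key holder
recovered = pow(ciphertext, d, n)   # only the holder of d can decrypt
assert recovered == message

# The same keypair gives signatures: sign with the private key,
# verify with the public one. Forgery requires the secret d.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```

The signature check at the end is the “forgery is harder” property: verifying needs only the public half, while producing a valid signature needs the half that was never shared.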

So there you have it: Use crypto. Not too confusing. Mostly asymmetric.

Nature reports that Quantum Cryptography has been completely broken in “Hackers blind quantum cryptographers.” Researcher Vadim Makarov of the Norwegian University of Science and Technology

constructed an attack on a quantum cryptography system that “gave 100% knowledge of the key, with zero disturbance to the system,” as Makarov put it.

There have been other attacks on quantum cryptography, but this is the first in which there is no indication that the key has been stolen. In those attacks, the operator of the system would see the transmission error rate go up, but in Makarov’s attack, the operator sees nothing. In short, they are completely, utterly defeated. The attacker gets everything with impunity.

As usual, the quantum crypto crowd doesn’t see that a 100% loss of key with no inkling of the loss is a problem. Makarov himself said to Nature, “If you want state-of-the-art security, quantum cryptography is still the best place to go.”

Perhaps the kicker is this in Nature’s article:

Ribordy [CEO of ID Quantique] and Zavriyev [Director of R&D at MagiQ] stress that the open versions of their systems that are sold to university researchers are not the same as those sold for security purposes, which contain extra layers of protection. For instance, the fully commercial versions of IDQ’s system also use classical cryptographic techniques as a safety net, says Ribordy.

Huh? We can trust commercial versions of quantum crypto because it uses classical crypto as a safety net? That’s saying that the quantum coolness is really just icing over a VPN. Isn’t it? Am I missing something?

Now it’s time for a rant. Quantum cryptography is really, really cool technology, but the whole point of it is, well, security, and if the state of the art is that the system is breakable, then the art is in a sorry state. It’s a state of being a research toy, not a real security system.

The whole point of quantum crypto is that it isn’t even really crypto. It’s communications that can’t be eavesdropped on. It’s a magical tour-de-force of science and technology. But if it can be silently thwarted, it’s no good. If there is no way that it can be tested to be good, it’s no good. Moreover, the latter is more important than anything else.

For quantum crypto to be viable and trusted, we have to have some way to know that the boxes were designed and manufactured such that we can be confident there’s no silent quantum backdoor in the box. Without that, it has no value. You might as well just get a VPN router from the usual suspects and be done with it. If you’re really paranoid, just lay down some glass fiber and put it in a conduit.

Quantum information science as a discipline needs to start taking security seriously. It can’t just brush off a break of this magnitude, and remain credible. Come on, at least admit this is serious and has to be reflected in the manufacturing and testing. Come up with countermeasures, something.

[0] I’ve never understood why this is a comedy of errors, it seems more like a tragedy of errors to me.

Jon Callas of PGP fame wrote the following for the cryptography mail list, which I’m posting in full with his permission:

That is because a tragedy involves someone dying. Strictly speaking, a tragedy involves a Great Person who is brought to their undoing and death because of some small fatal flaw in their otherwise sterling character.

In contrast, comedies involve no one dying, but the entertaining exploits of flawed people in flawed circumstances.

PKI is not a tragedy, it’s comedy. No one dies in PKI. They may get embarrassed or lose money, but that happens in comedy. It’s the basis of many timeless comedies.

Specifically, PKI is a farce. In the same strict definition of dramatic types, a farce is a comedy in which small silly things are compounded on top of each other, over and over. The term farce itself comes from the French “to stuff” and is comedically like stuffing more and more feathers into a pillow until the thing explodes.

So farces involve ludicrous situations, buffoonery, wildly improbable/implausible situations, and crude characterizations of well-known comedic types. Farces typically also involve mistaken identity, disguises, verbal humor including sexual innuendo all in a fast-paced plot that doesn’t let up piling things on top of each other until the whole thing bursts at the seams.

PKI has figured in tragedy, most notably when Polonius asked Hamlet, “What are you signing, milord?” and he answered, “OIDs, OIDs, OIDs,” but that was considered comic relief. Farcical use of PKI is far more common.

We all know the words to Gilbert’s patter-song, “I Am the Very Model of a Certificate Authority,” and Wilde’s genius shows throughout “The Importance of Being Trusted.” Lady Bracknell’s snarky comment, “To lose one HSM, Mr. Worthing, may be regarded as a misfortune, but lose your backup smacks of carelessness,” is pretty much the basis of the WebTrust audit practice even to this day.

More to the point, not only did Cyrano issue bogus short-lived certificates to help woo Roxane, but Mozart and Da Ponte wrote an entire farcical opera on the subject of abuse of issuance, “EV Fan Tutti.” There are some who assert that he did this under the control of the Freemasons, who were then trying to gain control of the Austro-Hungarian authentication systems. These were each farcical social commentary on the identity trust policies of the day.

Mozart touched upon this again (libretto by Bretzner this time) in “The Revocation of the Seraglio,” but this was comic veneer over the discontent that the so-called Aluminum Bavariati had with the trade certifications in siding sales throughout the German states, as well as export control policies since Aluminum was an expensive strategic metal of the time. People suspected the Freemasons were behind it all yet again. Nonetheless, it was all farce.

Most of us would like to forget some of the more grotesque twentieth-century farces, like the thirties short where Moe, Larry, and Shemp start the “Daddy-O” DNS registration company and CA or the “23 Skidoo” DNA-sequencing firm as a way out of the Great Depression. But S.J. Perelman’s “Three Shares in a Boat” shows a real-world use of a threshold scheme. I don’t think anyone said it better than W.C. Fields did in “Never Give a Sucker an Even Break” and “You Can’t Cheat an Honest Man.”

I think you’ll have to agree that unlike history, which starts out as tragedy and replays itself as farce, PKI has always been farce over the centuries. It might actually end up as tragedy, but so far so good. I’m sure that if we look further, the Athenians had the same issues with it that we do today, and that Sophocles had his own farcical commentary.

The National Research Council (NRC) is undertaking a project entitled “Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy.” The project is aimed at fostering a broad, multidisciplinary examination of strategies for deterring cyberattacks on the United States and the possible utility of these strategies for the U.S. government.

To stimulate work in this area, the NRC is offering one or more monetary prizes for excellent contributed papers that address one or more of the questions of interest found in its call for papers, which can be found at http://sites.nationalacademies.org/CSTB/CSTB_056215

Abstracts of less than 500 words are due April 1, 2010. First drafts are due May 21, 2010, final drafts July 9, 2010. For more information, see the call for papers.

Via a tweet from @WeldPond, I was led to a Daily Mail article which discusses allegations that Facebook founder Mark Zuckerberg “hacked into the accounts of [Harvard] Crimson staff”. Now, I have no idea what happened or didn’t, and I will never have a FB account thanks to my concerns about their approach to privacy, but I was curious about the form of this alleged hacking.

My curiosity was rewarded:

“he allegedly examined a report of failed logins to see if any of the Crimson members had ever entered an incorrect password into TheFacebook.com.

In the instances where they had, Business Insider claimed that Zuckerberg said he tried using those incorrect passwords to access the Crimson members’ Harvard email accounts.”

A few weeks ago, I joined the SearchSecurity team (Mike Mimoso, Rob Westervelt and Eric Parizo) to discuss the top cybersecurity stories of 2009. It was fun, and part 1 is now available for a listen: part 1 (22:58); part 2 is still to come.

Boy, am I glad to know they take my privacy seriously, because otherwise, their failure to fill out fields in their certificate might really worry me.

I mean, I’m not annoyed that BNY Mellon treated my information negligently. Oh, no. I expect that. I am a little annoyed that having done so, they offered me a year of “monitoring” rather than prevention. I’m annoyed that it’s a year, when there’s no evidence that risk of harm falls after a year. And I’m annoyed that the company offering monitoring doesn’t bother to get the little things right.

[Update: This may be a broader issue of all non-EV certs being treated like this. I admit, I rarely check because I rarely care. But when I do care, I reasonably expect it to be done right.]

Over at the US Government IT Dashboard blog, Vivek Kundra (Federal CIO), Robert Carey (Navy CIO) and Vance Hitch (DOJ CIO) write:

…the evolving challenges we now face, Federal Information Security Management Act (FISMA) metrics need to be rationalized to focus on outcomes over compliance. Doing so will enable new and actionable insight into agencies’ information and network security postures, possible vulnerabilities and the ability to better protect our federal systems.
(“Moving Beyond Compliance: The Status Quo Is No Longer Acceptable”)

Once upon a time, I was uunet!harvard!bwnmr4!adam. Oh, harvard was probably enough; it was a pretty well known host in the uucp network which carried our email before smtp. I was also harvard!bwnmr4!postmaster, which meant that at the end of an era, I moved the lab from copied hosts files to DNS, when I became adam@bwnmr4.harvard…wow, there’s still a CNAME for that host. But I digress.

Really, I wanted to talk about a report, passed on by Steven Johnson and Gunnar Peterson, that Vint Cerf said that if he were re-designing the internet, he’d add more authentication.

And really, while I respect Vint a tremendous amount, I’m forced to wonder: Whatchyou talkin’ about Vint?

I hate going off based on a report on Twitter, but I don’t know what the heck a guy that smart could have meant. I mean, he knows that back in the day, people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn’t get us in too much trouble. (Hi S! Hi C!) So when he says “more authentication” does that mean inserting “uunet!harvard!bwnmr4!adam” in an IP header? Ensuring your fingerd was patched after Mr. Morris played his little stunt?

But more to the point, authentication is a cost. Setting up and managing authentication information isn’t easy, and even if it were, it certainly isn’t free. Even more expensive than managing the authentication information would be figuring out how to do it. The packet interconnect paper (“A Protocol for Packet Network Intercommunication,” Vint Cerf and Robert Kahn) was published in 1974, and says “These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate.” That was before DES (1975), before Diffie-Hellman (1976), Needham-Schroeder (1978) or RSA. I can’t see how to maintain that principle with the technology available at the time.

When setting up a new technology, low cost of entry was a competitive advantage. Doing authentication well is tremendously expensive. I might go so far as to argue that we don’t know how fantastically expensive it is, because we so rarely do it well.

Not getting hung up in easy problems like prioritization or hard ones like authentication, but simply moving packets was what made the internet work. Allowing new associations to be formed, ad-hoc, made for cheap interconnections.

So I remain confused by what he could have meant.

[Update: Vint was kind enough to respond in the comments that he meant the internet of today.]

I remember when Derek Atkins was sending mail to the cypherpunks list, looking for hosts to dedicate to cracking RSA-129. I remember when they announced that “The Magic Words are Squeamish Ossifrage,” and how it took 600 people with 1,600 machines months of work and then a Bell Labs supercomputer to work through the data. I had a fun little stroll down memory lane reading about average machines not having more than 16MB of RAM, and how they borrowed a server with two, later three, 900 MB disks. 129 decimal digits fit in 430 bits. The RSA-129 paper concludes:

We conclude that commonly-used 512-bit RSA moduli are vulnerable to any organization prepared to spend a few million dollars and to wait a few months.
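The digit-to-bit conversion above is easy to check: an n-digit decimal number is below 10**n, so it needs at most ceil(n · log2(10)) bits.

```python
import math

# Verify the "129 decimal digits fit in 430 bits" claim.
digits = 129
bits_needed = math.ceil(digits * math.log2(10))
print(bits_needed)          # 429, so 430 bits comfortably suffice
assert 10**digits < 2**430
```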

You are the nation’s new cyber czar/shogun/guru. You know you can’t force anyone to do jack, therefore you spend your time/energy trying to accomplish what three things via influence, persuasion, shame and force of will?

I think it’s a fascinating question, and posted my answer over at the New School blog.