There are so many issues raised in this paper that it could be (and might be) the topic of many blog posts. The first three paragraphs are today's topic:

Preamble. Most academic cryptographers seem to think that our field is a fun, deep, and politically neutral game—a set of puzzles involving communicating parties and notional adversaries. This vision of who we are animates a field whose work is intellectually impressive and rapidly produced, but also quite inbred and divorced from real-world concerns. Is this what cryptography should be like? Is it how we should expend the bulk of our intellectual capital?

For me, these questions came to a head with the Snowden disclosures of 2013. If cryptography’s most basic aim is to enable secure communications, how could it not be a colossal failure of our field when ordinary people lack even a modicum of communication privacy when interacting electronically? Yet I soon realized that most cryptographers didn’t see it this way. Most seemed to feel that the disclosures didn’t even implicate us cryptographers. I think that they do.

So I want to talk about the moral obligations of cryptographers, and my community as a whole. This is not a topic cryptographers routinely discuss. In this post-Snowden era, I think it needs to be.

My thoughts:

1) I would add that the Target breach, the Sony hack, and the OPM break-in might also show that crypto has been a failure. He doesn't seem to mention those, but I think they strengthen his case.

2) Might it be Security that is a colossal failure? Of course, crypto and security go together so it may be hard to disentangle whose failure it is.

3) Might it be that good crypto research has been done but is not being used: the tech-transfer problem? He later claims that this would only be relevant if crypto worked on the right problems in the first place.

4) I tend to think he's right. Rather than me telling you why I think he's right, just read his paper.

17 comments:

Any blog post this week on cryptography, privacy, and public policy is incomplete without mentioning the San Bernardino terrorist's cell phone, and the Apple/FBI case going on right now! I'll open a sub-thread. =)

My prediction: The Apple case is going to be argued up to the Supreme Court, and lead (one way or another) to the largest new legal precedents being set (in terms of the role/authority of the U.S. government in encryption/privacy technologies vs. the rights of citizens or companies) since Bernstein v. United States ( https://en.wikipedia.org/wiki/Bernstein_v._United_States ). (The Clipper Chip / PGP era controversy was socially impactful by comparison, but never went to court or set legal precedent, AFAIK.)

Current precedent is that computer code is protected speech in terms of the First Amendment, and so the government cannot restrict it as such. On the other hand, there is precedent in which the U.S. government has compelled public companies to produce new (e.g. telephone) technology in order to obey search warrants.

The reason the current case especially matters is that -- while it's currently technically feasible for Apple to access the phone -- it's clear that the 'next generation' of tech products is going to aim to be unbreakable, even from the manufacturer's end.

Should it be legal to create a totally secure device/software package (that no one can access, except the two users involved in end-to-end communication)?

I think yes. But it seems the government's argument against Apple could just as well be used to justify banning me from having/using/selling such a device, as it would be warrant-proof.

It is not the case that the FBI cannot break into the iPhone without Apple's help. They can, but it would be very expensive for the FBI. What they essentially want is for Apple to make breaking into iPhones much cheaper, so they can do so regularly, not just in exceptional cases where spending a million dollars to break into an iPhone is justifiable. If it costs a million dollars to break into an iPhone, the FBI needs a very good explanation for the expense. But if it costs just $100, they don't.

Most tech vs. national security establishment issues are not about possibility. If the NSA or FBI really wants to break into a device or monitor some particular person's activities, they can. The issue is that their tools are not scalable.

Why is the FBI asking for mass surveillance access when targeted surveillance is feasible? Why are they not legally required to consult the NSA on breaking into the phone's data, before demanding Apple write code for the government?

What Rogaway addresses is very different from what the above thoughts suggest, and is in fact about the cryptography community, and not about cryptography per se.

Put bluntly, the issue is not whether tech transfer fails, but rather why there is no tech (or very little tech) to transfer in the first place! Of course, academia should expand in whichever direction it prefers, disregarding tech transfer. Ultimately, it is very hard to measure what will lead to progress and what won't.

However, the core academic cryptography community has perhaps excessively focused on "puzzles involving communicating parties and notional adversaries" (in Rogaway's own words), while still marketing itself as providing solutions to pressing security problems in front of media, funding agencies, and other fields.

Rogaway makes a case that challenging research directions making a positive contribution do exist, and wonders why only a small fraction of the academic research community went that way.

Certainly thought provoking, but perhaps hard to appreciate without an overview of the research happening in the broader security community, and of the role theoretical cryptography plays in all of this.

"Of course, academia should expand in whichever direction it prefers, disregarding tech transfer. Ultimately, it is very hard to measure what will lead to progress and what won't."

But this isn't a matter "just" of whether tech transfer should be foremost in the minds of theoretical cryptographers.

Suppose the government is successful in its case against Apple (I take the view that this is an attempt by the government to legally justify a core part of the 'scary' mass surveillance approach, as revealed by Snowden -- thus, the connection to Rogaway's motivations RE: Snowden).

The approximate precedent (in this hypothetical case) is that it is illegal -- ultimately -- to create a "device/software package" that satisfies the standard notion of public-key encryption security, e.g. CPA-security. Somehow, theory has allowed for CPA-security to exist, but the real-world has disallowed it by a technicality (e.g. there exists a real-world actor that must always have a valid/"secret" side channel, by which to distinguish plaintexts).
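For readers who don't know the term, CPA-security (IND-CPA) is usually defined by a game along the following lines; this is a standard textbook formulation, sketched here only to make the comment above concrete:

```latex
% IND-CPA game for a public-key scheme $(\mathsf{Gen}, \mathsf{Enc}, \mathsf{Dec})$
\begin{enumerate}
  \item The challenger runs $(pk, sk) \leftarrow \mathsf{Gen}(1^n)$ and gives $pk$ to the adversary $A$.
  \item $A$ outputs two equal-length messages $m_0, m_1$.
  \item The challenger picks a random bit $b \leftarrow \{0,1\}$ and returns $c \leftarrow \mathsf{Enc}_{pk}(m_b)$.
  \item $A$ outputs a guess $b'$, and wins if $b' = b$.
\end{enumerate}
% The scheme is CPA-secure if every probabilistic polynomial-time $A$
% wins with probability at most $1/2 + \mathrm{negl}(n)$.
```

The "real-world actor with a secret side channel" in the comment above would be exactly an adversary that wins this game with non-negligible advantage.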

The national security community seems to have a very different perspective. They seem to have completely given up on security and are simply trying to develop offensive capabilities. See this CFR video: https://www.youtube.com/watch?v=Jolz8hwcUaU

I think that a good service the scientific cryptography community could do for the citizens of the already almost completely KGB-ified Western states is to release source code, bulletproof against state-of-the-art attacks, with a very simple interface:

1. generate a secure (up to state of the art) key.
2. encrypt a message with the key so that it is unbreakable (up to state of the art).
3. decrypt the message with the key.

That's it: no trying to design clever schemes for distributing the keys, no public keys, which do not work unless the KGB (sorry, NSA) is kept away physically (and electromagnetically) from the corresponding servers. Leave the key distribution issue to the end user.
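A minimal sketch of such an interface, using a one-time pad: provably unbreakable if (and only if) the key is truly random, at least as long as the message, and never reused. The function names are my own; this is an illustration of the proposed three-call interface, not a vetted library.

```python
import secrets

def generate_key(length: int) -> bytes:
    """Generate a uniformly random key of `length` bytes."""
    return secrets.token_bytes(length)

def encrypt(key: bytes, message: bytes) -> bytes:
    """XOR the message with the key. Information-theoretically secure
    only if the key is random, at least message-length, and used once."""
    if len(key) < len(message):
        raise ValueError("one-time pad key must be at least as long as the message")
    return bytes(m ^ k for m, k in zip(message, key))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Decryption is the same XOR operation."""
    return encrypt(key, ciphertext)

# Key distribution is deliberately left to the end user, as proposed above.
msg = b"attack at dawn"
key = generate_key(len(msg))
ct = encrypt(key, msg)
assert decrypt(key, ct) == msg
```

Of course, the one-time pad makes the key-distribution burden maximal, which is exactly the trade-off the commenter is proposing: perfect secrecy in exchange for pushing key exchange entirely onto the user.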

Of course, this source code needs to be maintained, again by scientists, and depends on their credibility. But I trust scientists (and the open source community) infinitely more than I trust NSA, Pentagon, and other similar institutions.

Who decides the authentic source of 'good' source code? (Who, among the scientists, decides the authoritative 'scientific source code?')

This isn't a trivial matter. NIST is supposed to do this. But sometimes, the process fails by any objective standard: https://www.schneier.com/essays/archives/2007/11/did_nsa_put_a_secret.html

Different groups of scientists will disagree about the relative security of a given algorithm/protocol/model. (Technical controversies exist.) And moreover, (computer) scientists are often not rewarded, academically, for producing and maintaining "excellent-quality" program code. Even in "applied" fields, it's often incumbent on the particular scientist to "decide" to do a good job with coding.

AH, that might be why Rogaway mentions Snowden but not those. However:
1) Did Snowden's revelations have to do with crypto?
2) What did cause the Target et al. breaches? A failure of security? Writing the combination next to the safe door?
3) Is there a real difference between Snowden and Target, aside from the fact that with Snowden it's the government spying and with Target it's someone else spying?