National Defense, Encryption, and Backdoors

In the aftermath of 2014, often dubbed the Year of the Data Breach, security issues have risen to the forefront. Far from being an isolated problem, with single incidents here and there, cybersecurity has even sparked national debates around measures needed to keep entire countries safe.

This blog post offers a technical and non-political analysis of some issues surrounding encryption and its use in fighting terrorism. It will estimate how future policy proposals could affect security based on the historical outcomes of similar policies. It will also discuss the human factor and try to predict how individuals will adjust their behavior.

Let’s begin with some possible scenarios and how each would affect the parties involved.

Scenario 1: Ban all encryption

An extreme and highly unlikely scenario, but banning encryption would constitute a huge coup for cybercriminals: personal data would be stolen at an alarming rate; identity theft would be rampant; cryptovirology data-ransom schemes would proliferate; blackmail would become a growth industry; and a massive blow would be dealt to the tech industry, as people would no longer trust its products.

Would this help stop terrorists?

It would absolutely hamper those trying to orchestrate and carry out a coordinated terrorist attack without being detected by anti-terrorism agencies. Terrorists would have to use pre-digital tools to communicate, but terrorists never really needed digital tools to commit atrocities anyway.

Scenario 2: Ban all encryption without a government-exploitable backdoor or key escrow

This radical ban would have the effect of returning the internet to a circa-1993 state, where cypherpunks played cat-and-mouse with cyber law enforcement. Printed books of source code would be widely trafficked in the USA; the books would be illegal to export, but exportation would still occur. Export controls for cryptographic software would be tightened. Laptop inspections to check for illegal software at international borders would increase. Research into steganographic techniques to fool auditors would become widespread and potentially very successful.
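To make the steganography point concrete, here is a minimal and deliberately naive sketch of least-significant-bit hiding: each byte of a cover buffer (for example, raw image data) carries one bit of the hidden message. All names here are illustrative, and real steganographic research goes far beyond this toy.

```python
def hide(cover: bytes, message: bytes) -> bytes:
    """Hide `message` in the least-significant bits of `cover`.

    Each cover byte carries one message bit (LSB-first), so the
    cover must be at least 8x the message length.
    """
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover buffer too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the low bit
    return bytes(out)


def reveal(stego: bytes, length: int) -> bytes:
    """Recover `length` bytes previously embedded by `hide`."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (stego[i * 8 + j] & 1) << j
        out.append(byte)
    return bytes(out)
```

Because only the lowest bit of each byte changes, the stego buffer is visually indistinguishable from the cover in most image formats — which is exactly what would make auditing for hidden content so difficult.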

The most interesting effect of this policy would be on researchers in security and cryptography. They’d be faced with some tough choices. They could either:

1. Leave their security/cryptography careers and
   a. Work in Signals Intelligence (SigInt) for the same government that just eliminated their jobs, or
   b. Start over in another field altogether, leaving behind years of effort and expertise.

2. Keep their current careers and
   a. Move to another country where academic security research is legal,
   b. Become a cybercriminal who develops and disseminates strong encryption software clandestinely, or
   c. Become a cybercriminal who develops/breaks crypto for rogue states and organized crime syndicates.

Even if only one percent of the crypto community chose option 2c, the technical capabilities of those bad actors would go from ‘merely good’ to ‘phenomenal’ essentially overnight.

Would this help stop terrorists?

It’s hard to say. It would certainly make it more difficult to procure and use strong encryption software, but that software exists now and one can’t exactly un-invent it. Dedicated terrorists would still continue to use strong encryption.

Scenario 3: Require tech companies to comply with surveillance laws and provide plaintext data to governments when they request it

This would likely be implemented as private-sector key escrow akin to the Clipper Chip. It could include escrow of CA signing keys to allow targeted TLS eavesdropping. A longer-term project might be building a new public-key infrastructure using identity-based encryption (IBE); key escrow is, in some sense, built into IBE. A key escrow scheme would cost companies a great deal of money: hardware, operations, sysadmins, and massive expenditures on security to minimize blowback from angry customers. If archiving all plaintext were also required, it would cost even more.
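As a rough sketch of what per-message key escrow means mechanically, consider the toy illustration below (not any real product’s design; the XOR “wrap” stands in for a real key-wrapping scheme such as AES-KW or RSA-OAEP). The defining feature is that every session key is wrapped twice: once for the recipient, and once more for the escrow agent.

```python
import hashlib
import os


def wrap(session_key: bytes, party_key: bytes) -> bytes:
    # Toy key-wrap: XOR with a hash-derived pad. A real system would
    # use an authenticated scheme such as AES-KW or RSA-OAEP.
    pad = hashlib.sha256(party_key).digest()[: len(session_key)]
    return bytes(a ^ b for a, b in zip(session_key, pad))


unwrap = wrap  # XOR with the same pad is its own inverse


def escrowed_message_keys(recipient_key: bytes, escrow_key: bytes):
    """Generate a fresh session key and wrap it for both the recipient
    and the escrow agent. The extra escrowed copy is what a mandatory
    key escrow policy adds to every single message."""
    session_key = os.urandom(16)
    wrapped = {
        "for_recipient": wrap(session_key, recipient_key),
        "for_escrow": wrap(session_key, escrow_key),
    }
    return wrapped, session_key
```

The escrowed copy is why a breach becomes so much worse under this policy: whoever obtains the escrow agent’s key can unwrap the session key for every archived message.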

Such a policy would most likely have the second-order effect of accelerating the removal of control of the Internet’s core infrastructure (ICANN/root DNS entries in particular) from the U.S. The rest of the world would not trust the U.S. government to be responsible stewards of the Internet. This would cause a balkanization of the Internet, a system whose main strength is its global reach.

Cyber-attacks against companies would become even more damaging than they already are, because leaked data could include the keys necessary to read all communications. If cybercriminals exfiltrated plaintext conversations, they could mount a massive and hugely successful blackmail campaign against the people whose private information was stolen. Outside cybercriminals wouldn’t be the only concern – malicious insiders would also pose a risk. Think of sysadmins and other privileged users who could leak data (à la Edward Snowden).

Would this help stop terrorists?

Not likely, because terrorists would be able to resort to open-source encryption tools. Terrorists and criminals could fund their actions through blackmail.

Scenario 4: Require all encryption protocols to contain a government-exploitable backdoor

This is an interesting twist on the previous scenario. It’s certainly cheaper than requiring total key escrow or bulk plaintext data storage. The cost for governments and companies would be dominated by the cost of designing and implementing the protocols. This option would probably not be ‘cheap’ in an absolute sense, and it would be time-consuming because cryptographers are a notoriously argumentative bunch. Once the upfront work was done, though, SigInt agencies would be off to the races.

One glaring problem would be as follows: depending on which part of the protocol is backdoored and how, this could make all communication less secure. Say, for example, every company were required to use only RC4 as the symmetric cipher in all transport-layer encryption. The weaknesses in RC4 aren’t exploitable only by someone who knows a piece of secret information, so any motivated party could perform the same eavesdropping as the SigInt agency. Even if the public doesn’t know about the attack when the scheme is implemented, it’s likely the same attack would be discovered independently a few years later.
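The RC4 point is easy to demonstrate. The sketch below implements the cipher and reproduces the well-known Mantin–Shamir bias: the second keystream byte is zero roughly twice as often as it should be (probability about 1/128 instead of 1/256). No secret knowledge is needed to exploit biases like this, which is what makes a “weak cipher” mandate a backdoor for everyone.

```python
import os


def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream for the given key."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)


def second_byte_zero_rate(trials: int = 20000) -> float:
    """Fraction of random keys whose second keystream byte is zero.
    A uniform generator would give ~1/256; RC4 gives ~1/128."""
    zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0
                for _ in range(trials))
    return zeros / trials
```

Running `second_byte_zero_rate()` consistently lands near 2/256 rather than 1/256 — a statistical fingerprint any eavesdropper can measure and, with enough ciphertexts of a fixed plaintext, exploit.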

Let’s instead assume the backdoor is asymmetric, i.e. exploiting it requires a piece of secret information. This would look like using TLS with the NIST-standardized Dual_EC_DRBG random number generator. The NSA can passively eavesdrop on you because it knows the secret, but someone who doesn’t know the secret can only eavesdrop by solving a mathematical problem that’s conjectured to be nearly impossible with current techniques. That last clause is an important qualifier. Attacks only get better and computers only get faster, so an attacker who figures out how to solve the problem in the future could eavesdrop on all traffic captured since the protocol first came into use. If the breaking attack were kept secret, the attacker could eavesdrop on new traffic too: stealing priceless technical schematics from the Googles of the world, weapons design documents from the Lockheed Martins of the world, sensitive stock information from the NYSE, credit card numbers, and so on.
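The shape of such an asymmetric backdoor can be illustrated with a toy analog of Dual_EC_DRBG that uses modular exponentiation in place of elliptic-curve point multiplication. All parameters below are made up for illustration, and the real generator also truncates its outputs; the structural point survives the simplification: whoever knows the secret relationship between the two public constants can recover the generator’s internal state from a single output.

```python
# Toy Dual_EC_DRBG-style generator. Modular exponentiation stands in
# for elliptic-curve scalar multiplication; parameters are illustrative.
P = 2**61 - 1          # a Mersenne prime; we work in the group Z_P*
G = 3                  # public constant 1
D = 0x1337C0FFEE       # the designer's SECRET backdoor exponent
H = pow(G, D, P)       # public constant 2 -- secretly H = G^D mod P


def drbg_step(state: int) -> tuple[int, int]:
    """One generator step: emit an output, then advance the state."""
    output = pow(G, state, P)
    next_state = pow(H, state, P)
    return output, next_state


def backdoor_predict(observed_output: int) -> int:
    """Using the secret D, recover the generator's next state from a
    single observed output and predict the following output."""
    # output = G^s, so output^D = (G^s)^D = (G^D)^s = H^s = next state
    next_state = pow(observed_output, D, P)
    return pow(G, next_state, P)
```

Anyone without D faces a discrete-logarithm-style problem; anyone with D reads the stream forever. That asymmetry is exactly what makes the secret so valuable, as discussed next.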

An asymmetric backdoor would also be vulnerable to espionage. When the backdoored protocols came online, the digital data comprising the backdoor would instantly become the most widely sought binary string in existence. In engineering terms, it would be a ‘single point of failure.’ A defector who holds this backdoor would be essentially priceless.

Would this help stop terrorists?

Terrorists who use and trust the default encryption for their communication would be vulnerable. More adept terrorists would use open-source encryption tools that have non-backdoored protocols.

Scenario 5: Require backdoors in all mobile devices

A ‘backdoor’ in this context means a method for a signals intelligence agency to gain access to a device it suspects is being used for terrorist activity. After gaining access, the agency could do a number of things at different levels of the software stack, from subversion and eavesdropping at the bare metal of the baseband cellular hardware all the way up to exfiltration of instant messages via a software hook in iMessage.

Ideally, any backdoor would be protected by some kind of authentication that only allows SigInt agencies to exploit it. This could be implemented by verifying signatures on firmware updates, for example. The agency’s public key would be hard-coded as a trusted key for updates, and backdoored firmware or other software could be distributed, with the company’s cooperation, using normal channels.
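A minimal sketch of that trusted-update check follows, using HMAC as a symmetric stand-in for real digital signatures (a production device would verify, e.g., RSA or Ed25519 signatures against hard-coded public keys; all names and key values here are illustrative):

```python
import hashlib
import hmac

# Keys baked into the device at manufacture time. With real signatures
# these would be public keys; HMAC keys are a symmetric stand-in here.
TRUSTED_UPDATE_KEYS = {
    "vendor": b"vendor-firmware-signing-key",
    "sigint_agency": b"agency-firmware-signing-key",  # the backdoor
}


def sign_firmware(image: bytes, key: bytes) -> bytes:
    """Produce an update 'signature' over a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()


def device_accepts_update(image: bytes, signature: bytes) -> bool:
    """The device installs any image signed by ANY trusted key --
    including the agency's, which is what makes this a backdoor."""
    return any(
        hmac.compare_digest(signature, sign_firmware(image, key))
        for key in TRUSTED_UPDATE_KEYS.values()
    )
```

Note that from the device’s point of view an agency-signed image is indistinguishable from a legitimate vendor update, which is precisely why the signing keys become such high-value espionage targets.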

If, for whatever reason, strong cryptographic authentication is NOT used to verify backdoor exploitation, the backdoor would need to be protected by ‘security through obscurity.’ This basically means that anybody who knows of the backdoor’s existence can exploit it.

The problems with the ‘security-through-obscurity’ approach are probably too obvious to mention, so I’ll instead focus on the setting where some authentication exists to control access to the backdoor. In this setting, every mobile device manufactured for use in the implementing country must contain the backdoor. This would make the key that signs the firmware an extremely attractive target for nation-state-level espionage.

Additionally, this would make the financial value of a defector with access to the signing key very high. Even someone who just works on developing the backdoored firmware (who doesn’t have access to the key) would become a valuable defector because of their extensive knowledge of the backdoors and their auth mechanisms.

Would this help stop terrorists?

Yes, this would help law enforcement catch very inept terrorists who – and this is a crucial point – plan their attacks using mobile devices manufactured for the country they’re trying to attack. The existence of the backdoors would also have to remain an absolute secret, because as soon as there was even a rumor of a backdoor, terrorists would switch to other kinds of communication. Worse, the increased exposure of all mobile devices to covert exploitation by motivated actors means that certain kinds of (cyber)terrorist attacks would become easier to execute.

Download Our Encryption Schemes Cheatsheet

Learn about the functionality and security trade-offs of different encryption schemes.