Signing ephemeral keys

Last Updated: 16 December 2018

In the previous episode, we saw that the Diffie-Hellman key exchange is vulnerable to a man-in-the-middle attack. One possible mitigation was for Alice and Bob to sign their short-term (session) public keys with long-term keys. This requires the presence of a public key infrastructure (PKI). Unfortunately, getting authenticated Diffie-Hellman right is anything but easy. Intuitive, naive constructions have been broken time and again. In this article, we'll explore this issue [K18].

Remember that the main issue with unauthenticated DH was that the short-term public keys \(A\) and \(B\) were not tied to the identities of Alice and Bob. Any active attacker Eve could intercept \(A\), replace it with her own \(E\), and send that to Bob. Bob, unaware of this subterfuge, would think that \(E\) is Alice's public key, and would happily generate a shared key \(k_{BE}\) — shared with Eve, not with Alice.
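This interception can be sketched in a few lines. The parameters \(p = 467\), \(g = 2\) and all private exponents below are toy values chosen only for illustration; a real deployment would use a large, standardized prime:

```python
p, g = 467, 2                 # toy DH parameters (illustration only)

a = 123; A = pow(g, a, p)     # Alice's ephemeral key pair
e = 99;  E = pow(g, e, p)     # Eve's substitute key pair

# Eve intercepts A in transit and forwards E to Bob instead
received_by_bob = E

b = 321; B = pow(g, b, p)     # Bob's ephemeral key pair
k_be_bob = pow(received_by_bob, b, p)  # Bob thinks he shares this with Alice
k_be_eve = pow(B, e, p)                # ... but Eve holds the same key

assert k_be_bob == k_be_eve            # Eve shares Bob's key
assert k_be_bob != pow(A, b, p)        # ... and it differs from the honest key
```

Nothing in the exchange ties \(A\) (or \(E\)) to an identity, which is exactly what the signatures below are meant to fix.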

A few prerequisites

We assume that a PKI is available.

Let Alice's identity be \(\mathit{I_{Alice}}\) and Bob's identity be \(\mathit{I_{Bob}}\). There's nothing fancy about identities: an identity could be a participant's main e-mail address, or any other name that uniquely identifies them.

Each participant generates a long-term public / private key pair.

Each participant asks the PKI's Certification Authority (CA) to sign their long-term public key, returning a certificate.

Let \(\operatorname{sign}(\mathit{id}, x)\) designate the signature of arbitrary message \(x\) by the long-term private key of the user with the identity \(\mathit{id}\).

Let \(\operatorname{verify}(\mathit{id}, x, \mathit{sig})\) return true iff \(\operatorname{sign}(\mathit{id}, x) = \mathit{sig}\). In practice, calling \(\operatorname{verify}(\mathit{id}, x, \mathit{sig})\) requires access by the verifier to the certificate containing the long-term public key of the participant with the identity \(\mathit{id}\). Furthermore, we assume that the verifier will check that certificate's authenticity with the CA's public key before using the public key in the verification process.

tl;dr: We assume the usual setup where Alice and Bob have long-term public / private keys associated with their identities, and that the authenticity of the association \((\mathit{id}, \mathit{pubkey})\) can be verified with the help of a Public Key Infrastructure.
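To make the \(\operatorname{sign}\) / \(\operatorname{verify}\) abstraction concrete, here is a minimal, stdlib-only sketch. It uses Lamport one-time signatures (hash-based) purely because they need nothing beyond `hashlib`; a real PKI would use RSA, ECDSA or Ed25519 with X.509 certificates. The `registry` dict stands in for the CA-certified \((\mathit{id}, \mathit{pubkey})\) association:

```python
import hashlib, secrets

def _h(b):
    return hashlib.sha256(b).digest()

def keygen():
    # Private key: 256 pairs of random preimages; public key: their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(_h(x0), _h(x1)) for (x0, x1) in sk]
    return sk, pk

def sign(sk, msg):
    # For each bit of H(msg), reveal the matching preimage. One-time use only!
    digest = _h(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(pk, msg, sig):
    digest = _h(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(_h(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

registry = {}                      # stands in for the PKI: identity -> certified pubkey
sk_alice, pk_alice = keygen()
registry["alice@example.com"] = pk_alice

sig = sign(sk_alice, b"ephemeral key A")
assert verify(registry["alice@example.com"], b"ephemeral key A", sig)
assert not verify(registry["alice@example.com"], b"ephemeral key E", sig)
```

The document's \(\operatorname{verify}(\mathit{id}, x, \mathit{sig})\) corresponds to looking the key up in `registry` first; the certificate-authenticity check against the CA is elided here.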

A first attempt

Now we're ready to modify the key exchange phase between Alice and Bob, presented here in simplified form.

The main difference from unauthenticated Diffie-Hellman is that instead of sending only \(A\), and having Bob rely on connection metadata to assume that \(A\) belongs to Alice, Alice now sends a triple \((\mathit{I_{Alice}}, A, \operatorname{sig_{Alice}}(A))\) to Bob, consisting of her identity \(\mathit{I_{Alice}}\), her short-term public key \(A\), and \(\operatorname{sig_{Alice}}(A)\), the signature of \(A\) with her long-term private key (associated with her identity \(\mathit{I_{Alice}}\)).

Bob, upon receiving this triple, will now try to verify with the help of the PKI that \(\operatorname{sig_{Alice}}(A)\) is indeed the signature of \(A\) by the user who claims to be \(\mathit{I_{Alice}}\). In practice, Bob will already have access to the certificate containing the signed long-term public key of \(\mathit{I_{Alice}}\). He may fetch it from some public directory, or get it as part of the dialog with Alice, or he may have a non-expired cached copy of it already. After verifying the authenticity of this certificate using the CA's public key, Bob will proceed to verify that \(\operatorname{sig_{Alice}}(A)\) is indeed the signature of \(A\), using the long-term public key of \(\mathit{I_{Alice}}\) contained in that certificate.
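The flow can be sketched as follows. To keep the example self-contained, signatures are modeled idealistically as a trusted set of \((\mathit{id}, \mathit{message})\) pairs — a stand-in for a real signature scheme plus PKI — and the DH parameters are toy values:

```python
# Idealized signature model: verify succeeds only for pairs actually signed.
_signed = set()

def sign(identity, message):
    _signed.add((identity, message))
    return (identity, message)           # the "signature" token

def verify(identity, message, sig):
    return sig == (identity, message) and sig in _signed

p, g = 467, 2                            # toy DH parameters (illustration only)

# Alice's side: generate an ephemeral key pair and sign the public part
a = 123
A = pow(g, a, p)
triple = ("I_Alice", A, sign("I_Alice", A))

# Bob's side: verify the signature before using A
identity, A_recv, sig = triple
assert verify(identity, A_recv, sig)     # reject the triple otherwise

b = 321
B = pow(g, b, p)
k_bob = pow(A_recv, b, p)
k_alice = pow(B, a, p)
assert k_alice == k_bob                  # both sides derive the same key
```

A tampered \(A\) would now fail verification — but as the rest of the article shows, this is far from the whole story.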

Is this scheme secure? Not really! We'll look at some issues below and in subsequent articles.

Long-term impersonation through ephemeral leakage

Recall that Alice sends the triple \((\mathit{I_{Alice}}, A, \operatorname{sig_{Alice}}(A))\) to Bob over an insecure channel. Little does she know that she just made a terrible mistake.

Enter Neal S. Albert. Neal happens to work for a well-funded state actor with huge, but not necessarily tremendous, computing and storage power. Neal, knowing the DH parameters \(p\) and \(g\), started a couple of weeks ago precomputing a large sequence of pairs \((x, g^x \pmod{p})\) for as many \(x \in {\mathbb Z}_p^{*}\) as he can afford... or rather as his supervisor condescendingly allowed him to use for his "silly, useless, tax-payers'-dollars-wasting pet project." He stores all these pairs in a huge hash table indexed by the second component, i.e. by the ephemeral public key \(g^x \pmod{p}\).
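Neal's table is nothing more than a dictionary from public values back to exponents. With a toy prime the whole group fits in memory; the point of a large, real-world \(p\) is precisely to make this table infeasible:

```python
p, g = 467, 2   # toy parameters; a real p makes this table astronomically large

# Precompute pairs (x, g^x mod p), indexed by the public value g^x mod p
table = {pow(g, x, p): x for x in range(1, p - 1)}

# Later, an intercepted ephemeral public key A is "inverted" by a single lookup
a_secret = 123
A = pow(g, a_secret, p)
recovered = table.get(A)

# recovered is a discrete log of A (it may differ from a_secret if g does not
# generate the full group, but it yields the same public key, which suffices)
assert recovered is not None
assert pow(g, recovered, p) == A
```

Any exponent with the same public key works for the attack below, since it produces the same shared secrets.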

This morning, sometime around 11:05am, Neal got an alert on his cellphone. He logged into his employer's system and quickly found out that the \(A\) in the just-intercepted triple \((\mathit{I_{Alice}}, A, \operatorname{sig_{Alice}}(A))\) actually matched a pair \((a, A)\) in his hash table. Bingo! So he called up his colleague in another department. The call went like this:

Neal: "Hey, Susan, it's me, Neal."

Susan: "Oh, hi Neal. Long time no see. What's up?"

Neal: "Remember that sneaky \(\mathit{I_{Alice}}\)?"

Susan: "Yeah, you bet. That bitch has been sending military secrets to Boris all the time..."

Neal: "She's still at it, you know?"

Susan: "I know, *sigh*"

Neal: "Why didn't you guys catch her?"

Susan: "That's pointless. Boris will easily find another informant. He's pretty good at it, you know? And we won't know who that new traitor is. It's Boris we want to catch red-handed."

Neal: "How?"

Susan: "If we could only lure him into a meeting IRL and hand him some secret documents, that would do it... We could have the Feds bust him with his fingers in the cookie jar. But he'll never trust us enough to meet us..."

Neal: "He would trust Alice..."

Susan: "But we can't impersonate her online: she's very protective of her private keys."

Susan: "I also have a special treat for you. Remember DOC-31.2.323 you've asked me to find?"

("You really have it? Wonderful. Can you attach it now?")

Susan: "No, we need to meet!"

("Okay, Alice, that's fine with me. Can't wait to see you hon.")

Neal couldn't believe his eyes.

Neal: "We've done it! But... how?"

Susan: "That's easy. Alice was really dumb sending \(A\) and \(\operatorname{sig_{Alice}}(A)\) at the same time."

Neal: "Now I get it! With the \(a\) I've just found, you could establish a new session reusing \(A\)..."

Susan: "... and wasn't it nice that Alice offered us \(\operatorname{sig_{Alice}}(A)\) on a silver platter?"

Neal: "Exactly. Had you used some other fake public key, you couldn't have signed it without Alice's long-term private key."

Susan: "I've told you we wouldn't need to raid her apartment to get that key if your \(a\) was good."

Neal: "You were right all along. Hats off, I humbly bow before your wisdom, oh Grandmistress of Deception."

Susan: (laughs heartily)

Next day, in the evening news: "Boris K., a Russian spy was arrested by the FBI in an Italian restaurant downtown. He was carrying top secret documents of a military nature. It is very likely that K. met an informant there, who mysteriously disappeared during the raid."

I know what you're thinking, and you're right: that was uber-cheesy. I couldn't keep my inner awful writer in check.

Anyway, to sum up: by sending \(A\) along with its signature \(\operatorname{sig_{Alice}}(A)\), Alice leaked data. Even though \(A\) was only ephemeral (short-term), Susan was able to impersonate Alice long-term, well beyond the session. Susan couldn't have forged a signature for a new short-term public key of her own making (as in the classic man-in-the-middle attack), but by simply reusing \(A\), all she had to do was also reuse the \(\operatorname{sig_{Alice}}(A)\) that Alice foolishly leaked. However, Susan wouldn't have got very far without a little help from Neal: without Alice's matching ephemeral private key \(a\), provided by Neal, Susan wouldn't have been able to compute \(k_{AB_2}\), the shared secret key matching \(A\) and Boris' new \(B_2\).
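The replay itself boils down to a few lines. With the recovered \(a\), Susan resends Alice's triple verbatim — signature included — and derives the new session key with Boris (toy parameters again):

```python
p, g = 467, 2                 # toy DH parameters (illustration only)

a = 123                       # Alice's ephemeral private key, recovered by Neal
A = pow(g, a, p)              # Alice's leaked ephemeral public key
intercepted = ("I_Alice", A, "sig_Alice(A)")  # captured earlier, replayed as-is

# Boris starts a fresh session with "Alice" and sends his new ephemeral key B2
b2 = 321
B2 = pow(g, b2, p)

# The replayed signature still verifies on Boris' side, because it covers
# exactly the same A. With Neal's a, Susan derives the new shared key:
k_susan = pow(B2, a, p)
k_boris = pow(A, b2, p)
assert k_susan == k_boris     # Susan now shares k_{AB_2} with Boris, as "Alice"
```

Note that the signature never had to be forged: binding the signature to \(A\) alone, instead of to the whole session, is exactly the design flaw being exploited.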

The takeaway from this is: Ephemeral leakage should have limited damage, and never have long-term consequences.

So, is this the only issue with sending \((\mathit{I_{Alice}}, A, \operatorname{sig_{Alice}}(A))\)? No, not by a long stretch! But that is the topic of the next article.

How relevant is this scenario in real life?

In the spying thriller above, Neal managed to recover the ephemeral private key \(a\) corresponding to Alice's ephemeral public key \(g^a\) with the help of a huge precomputed table. While that is a viable scenario, provided \(p\) isn't too big, in reality Neal would rather subvert Alice's computer and fetch \(a\) right out of its memory. But if Neal has free rein over Alice's computer, nothing prevents him from also fetching her long-term private key, and then he could impersonate her until her certificate expires and she has to submit a new public key to the CA for certification. So what's the point?

If Alice were really paranoid, she would use a USB token / smart card / ... to sign things. Her long-term private key would never leave the "hardware key", since it's this hardware that does the signing. Neal could fetch \(a\) from RAM, but not Alice's long-term private key from that hardware key... This is analogous to the scenario above: \(a\) was more or less easily leaked (here, through precomputation), but Alice's long-term private key wasn't accessible without brute force (raiding her apartment). There are still corner cases where Neal could remotely trigger Alice's hardware key to sign things for him, but such a device would normally ask Alice, in a very obvious way, to confirm such an action. If Alice isn't the kind of computer user who clicks Yes on every prompt just to get rid of it, Neal's attempt to use that device stops right there.

So this scenario is indeed relevant in real life, especially when users protect their long-term secrets better than their ephemeral secrets.