I read about STS and its variants being insecure in the SIGMA paper, which then proposes SIGMA as a replacement. Are the SIGMA variants still considered secure, or is there some other protocol that's currently recommended for use?

4 Answers

The SIGMA paper does not describe how a "response message" for SIGMA-I would be implemented. If it were implemented as (for example) $B$ sending $\operatorname{MAC}_{K_m}(\text{"ACK"})$ to $A$, then that would not actually provide the desired peer-awareness property in the case where $B = \text{"ACK"}$. If $||$ denotes concatenation, then this could be fixed by replacing $\operatorname{MAC}_{K_m}(B)$ with $\operatorname{MAC}_{K_m}(00\,||\,B)$, replacing $\operatorname{MAC}_{K_m}(A)$ with $\operatorname{MAC}_{K_m}(10\,||\,A)$, and implementing the response message as $\operatorname{MAC}_{K_m}(110)$.

It is safe for $A$ to send an "application layer" message encrypted with $K_s$ along with its last message in the key agreement protocol, as long as one keeps in mind that at this point, $A$ can't be sure that $B$ is aware that $A$ recently initiated a key agreement protocol between them. With the fix I described, one could also specify that if $A$ does so, then $A$ will send $\operatorname{MAC}_{K_m}(01\,||\,A)$ instead of $\operatorname{MAC}_{K_m}(10\,||\,A)$.

Similarly to, but independently of, that, one could specify that if $B$ sends an application layer message along with its response message, then $B$ will send $\operatorname{MAC}_{K_m}(111)$ instead of $\operatorname{MAC}_{K_m}(110)$.

It would be fine for the parties to "change their mind" about that after MACing the default message; a party "changing their mind" after MACing the alternative message would not harm security but would cause the other party to reject. (The MACs I refer to in the previous sentence are the ones using $K_m$ as the key, not any MACs that may be involved in authenticated encryption of the application layer data.)
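The domain-separated MACs described above can be sketched with HMAC. The byte encodings of the tags (and the key/identity values) are my own illustrative choices; any prefix-free encoding of the tags works:

```python
import hashlib
import hmac

# Domain-separation tags from the fix above, encoded here as single
# distinct bytes (an illustrative choice; any prefix-free encoding works).
TAGS = {
    "id_B":     b"\x00",  # MAC_Km(00 || B)
    "id_A_app": b"\x01",  # MAC_Km(01 || A): A attaches application data
    "id_A":     b"\x02",  # MAC_Km(10 || A)
    "ack":      b"\x06",  # MAC_Km(110): plain response message
    "ack_app":  b"\x07",  # MAC_Km(111): response carries application data
}

def tagged_mac(km: bytes, tag: str, payload: bytes = b"") -> bytes:
    # Prefix the payload with its domain-separation tag before MACing,
    # so MACs computed for different purposes can never collide.
    return hmac.new(km, TAGS[tag] + payload, hashlib.sha256).digest()

km = hashlib.sha256(b"demo session MAC key").digest()  # stand-in for K_m

# The collision the fix prevents: even if B's identity string is "ACK",
# its identity MAC cannot equal the response-message MAC.
assert tagged_mac(km, "id_B", b"ACK") != tagged_mac(km, "ack")
```

The same function covers the "application data attached" variants: the sender just switches from the `id_A`/`ack` tag to `id_A_app`/`ack_app`.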

PAKE

There is password-authenticated key agreement (PAKE), either using a common reference string (such as this one) or, preferably, in the plain model (this one). The second of those papers is probably the best alternative to standard PKI authentication. (Also, unless it is "augmented", PAKE is deniable.) One can ask for the authenticated key agreement protocol to satisfy the default definition of deniability. This can be done with just PKI, although after searching I'm surprised that I wasn't able to find a good example of such a protocol. One could make such a protocol by having the key pairs be for signatures, but replacing the sending of signatures with providing concurrent statistical zero-knowledge arguments of knowledge of signatures. One would probably want to make computational assumptions and choose the signature scheme and expand the public key so that Stage 2 can be implemented with statistical WI in 3 rounds, using a 4-round protocol whose first message is part of the public key.

KRK = PKI + each party knows a valid private key for each public key associated with it. This allows significantly stronger versions of deniability than can be achieved even with the combination of PKI and a common reference string, although the paper's "construction" of the strongest one, Key Exchange with Incriminating Abort (KEIA), uses a primitive that I can't find any construction of anywhere.

I suspect that one could construct 11-round KEIA from [semantically secure PKE with perfect completeness] plus non-committing semantically secure PKE. Let $H$ be a computationally uniform (2,2)-independent hash family with a large range. The public key would have two perfect-completeness public encryption keys, a perfectly binding commitment to the bit 0, and a perfectly binding commitment to something that is not a hash $h$ in $H$ but has the same length as those hashes; the private key would have two private decryption keys, the value committed to in the second commitment, and an opening for each of those two commitments. In the rest of this paragraph, the "main encryption keys" will be the four perfect-completeness public encryption keys of the two parties that are participating in the current execution of KEIA (two held by each party), and for any party, that party "equal-encrypts" something by encrypting that thing with the main encryption keys, sampling $v$ uniformly from the codomain of $H$'s hashes, sending the ciphertexts and $v$ to the other party, and then proving to the other party with a special-sound sigma protocol that [the encryptions are of the same message] or [the first commitment in the other party's public key is actually to a 1] or [the second commitment in the other party's public key is actually to a hash $h \in H$ such that $h$ evaluated at [a structure indicating which ciphertext corresponds to which main encryption key] takes the value $v$].

To get an 11-round KEIA protocol from those assumptions, replace the three instances of DREnc with equal-encrypt, and add an arrow at the bottom from left to right for sending $n_R$. Each party must delay checking consistency of the decryptions of the ciphertexts that were sent by the other party until the corresponding sigma protocol finishes, and if the verifier (of the sigma protocol) did not accept the proof (including if it timed out), then the verifier must not indicate whether or not the plaintexts were consistent.

For that protocol, the key agreed upon has exactly as much deniability as the non-committing PKE, however much that may be. Although this would result in more rounds of communication, one could increase the protocol's offline deniability against adversaries who can violate the second part of KRK without being able to violate the PKI part, by replacing the sigma protocols with proofs that have stronger knowledge-extraction guarantees.

There are also protocols assuming that the parties are able to check equality of their "short authentication strings" in a way that does not allow the adversary to convince either of them that those strings are equal when they are not actually both present and equal. The protocols of this type that are currently recommended for use (by people other than me) are MANA IV and ZRTP. However, I have not managed to find any description of such a protocol that is clear enough to convince me that it uses only a short authentication string, as opposed to a short authenticat*ed* string or a common reference string or a random oracle.

I believe a protocol satisfying that could be constructed as follows: each party generates a signature key pair and sends the verification key to the other party; Alice commits to a random string, using the pair of verification keys as the tag, in a way that is non-malleable with respect to replacement; Bob sends a random string; Alice decommits; the short authentication string is calculated from the XOR of those two random strings; and then the parties do standard authenticated key agreement with their signature key pairs. (For these types of protocols, one should probably have the parties use the same signature key pair each time so that one can implement public-key pinning. However, if one doesn't do that, then one could use one-time signature key pairs instead of standard signature key pairs.)
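As a rough illustration of that message flow, here is a sketch with stand-ins: a plain hash commitment is used, which is *not* non-malleable with respect to replacement as the real protocol requires, and random bytes stand in for the signature verification keys:

```python
import hashlib
import os

def commit(tag: bytes, value: bytes):
    # Stand-in commitment: hash of tag || randomness || value.
    # A real instantiation needs a commitment that is non-malleable
    # with respect to replacement.
    r = os.urandom(32)
    return hashlib.sha256(tag + r + value).digest(), r

def open_ok(tag: bytes, c: bytes, r: bytes, value: bytes) -> bool:
    return hashlib.sha256(tag + r + value).digest() == c

# 1. Each party sends its signature verification key (random stand-ins).
vk_alice, vk_bob = os.urandom(32), os.urandom(32)
tag = vk_alice + vk_bob          # the pair of keys is the commitment tag

# 2. Alice commits to a random string under the key-pair tag.
r_alice = os.urandom(16)
c, opening = commit(tag, r_alice)

# 3. Bob sends his random string.
r_bob = os.urandom(16)

# 4. Alice decommits; Bob checks the opening.
assert open_ok(tag, c, opening, r_alice)

# 5. Both sides derive the short authentication string from the XOR of
#    the two random strings (here truncated to ~20 bits) and compare it
#    out of band before running authenticated key agreement with the keys.
sas = bytes(a ^ b for a, b in zip(r_alice, r_bob))[:3]
print(sas.hex())
```

The point of the ordering is that Bob's string is chosen only after Alice is committed, so neither side alone controls the SAS.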

There have been a large number of so-called authenticated key exchange (AKE) protocol proposals in the literature since the SIGMA protocols, many of which could be used to replace them; too many to list. They offer various advantages over the SIGMA protocols, ranging from stronger security properties (some interesting, some perhaps of academic interest only, but certainly more than just various forms of deniability) to increased efficiency.

I could mention a large number of later protocols, but they are hard to compare to each other and the landscape is quite confused; they end up being mostly incomparable if you look closely enough. I'm confident that at the moment, there is no objective method for comparing such protocols that everybody agrees on.

I'm afraid that the first person that mentions any of the newer ones will effectively start a flame war :) I'd say we'd have to wait until the dust settles down, but I think most people agree that HMQV is a pretty solid design.

Looks like there are disagreements/discussions regarding the security of HMQV? What are the major newer (controversial) protocols?
–
Nuoji Jul 8 '13 at 9:19

There has been a lot of debate about the exact relation of, and improvements of, HMQV over MQV, but IMO these discussions do not have significant practical impact, especially since most standards require that the group-element checks be included (which, however, reduces some of HMQV's claimed efficiency gains). These debates mostly concern the provable security claims and/or the precision of the security models.
–
user4621 Jul 8 '13 at 11:54

It is highly recommended to avoid trying to design your own protocol. If you cannot use any of the existing recognized protocols, better to just use plain signed DHE (i.e., sign the DH ephemerals with your long-term certified keys).
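A minimal sketch of that fallback, with toy stand-ins throughout: a 64-bit prime instead of a real group (e.g. an RFC 3526 MODP group or an elliptic curve), and HMAC under a long-term key standing in for a certificate-backed signature over both DH ephemerals:

```python
import hashlib
import hmac
import secrets

P = 2**64 - 59   # toy prime; real deployments use a >=2048-bit group
G = 2

def keypair():
    # Ephemeral DH key: secret exponent x and public value g^x mod p.
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def sign(lt_key: bytes, transcript: bytes) -> bytes:
    # Signature stand-in: a real design uses an actual signature
    # (e.g. Ed25519) under the party's certified long-term key.
    return hmac.new(lt_key, transcript, hashlib.sha256).digest()

lt_a, lt_b = b"A long-term key", b"B long-term key"

a, A = keypair()                 # A -> B: g^a
b, B = keypair()                 # B -> A: g^b plus Sig_B(g^a || g^b)
transcript = A.to_bytes(8, "big") + B.to_bytes(8, "big")
sig_b = sign(lt_b, transcript)
sig_a = sign(lt_a, transcript)   # A -> B: Sig_A(g^a || g^b)

# Each party verifies the peer's signature over BOTH ephemerals
# (binding the pair prevents mix-and-match of ephemerals), then
# derives the session key from the shared DH secret.
assert hmac.compare_digest(sig_b, sign(lt_b, transcript))
assert hmac.compare_digest(sig_a, sign(lt_a, transcript))
k_a = hashlib.sha256(pow(B, a, P).to_bytes(8, "big")).digest()
k_b = hashlib.sha256(pow(A, b, P).to_bytes(8, "big")).digest()
assert k_a == k_b
```

Note that even this "plain" variant must sign the full transcript of ephemerals, not just the sender's own value, which is exactly the kind of detail the recognized protocols get right for you.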