Round 3 will see significant narrowing again, based on the relative strength of the algorithms against one another, as well as the performance of the algorithms on current devices.

Here we talk about each of the candidates for asymmetric algorithms, which are used in establishing a secure connection between a client and server. Even today, asymmetric handshakes are slow, so asymmetric encryption is used for little more than passing a secret value between a client and server. Once the shared secret has been passed, the connection switches to a much faster symmetric cipher like AES or ChaCha.
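To make that hand-off concrete, here is a toy sketch of the pattern: a slow asymmetric exchange establishes a shared secret, then traffic switches to a fast symmetric cipher. Everything here is illustrative only: the Diffie-Hellman group is toy-sized, and the "symmetric cipher" is a homemade SHA-256 keystream standing in for AES or ChaCha. None of it is secure as written.

```python
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; real deployments use far larger groups
G = 3

def dh_keypair():
    # Slow asymmetric step: generate a private exponent and a public value.
    priv = secrets.randbelow(P - 2) + 2
    pub = pow(G, priv, P)
    return priv, pub

def shared_key(priv, peer_pub):
    # Both sides derive the same 32-byte symmetric key from the DH secret.
    secret = pow(peer_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()

def xor_stream(key, data):
    # Toy stand-in for a fast symmetric cipher: SHA-256 counter keystream.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# Handshake: only the public values cross the wire.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
key_a = shared_key(a_priv, b_pub)
key_b = shared_key(b_priv, a_pub)
assert key_a == key_b   # both sides now hold the same symmetric key

msg = b"hello over the fast symmetric channel"
assert xor_stream(key_b, xor_stream(key_a, msg)) == msg
```

The expensive modular exponentiations happen once per connection; everything after that is cheap hashing, which is exactly why the switch-over is worth the trouble.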

So without further ado, here are the candidates and the broadest strokes on what they are all about: (This covers submissions A–L; expect more next week!)

BIG QUAKE

BIG QUAKE was eliminated in round 1.

BIKE – Bit Flipping Key Encapsulation

BIKE is actually three separate quasi-cyclic algorithms that are similar but have important small differences. All three are improvements on previous work (Niederreiter and McEliece). These algorithms require ephemeral data that must be thrown away at the end of each session, giving the algorithm forward secrecy.

One of the interesting properties of BIKE is that its encrypted data has a recognizable signature, meaning that equipment could be designed to detect BIKE encryption traveling through networks and throttle or block it. While BIKE would work well for things like HTTPS, it is less suited to privacy networks (unless, of course, it became widely deployed, at which point it could be used for things like Domain Fronting in privacy networks).

CFPKM

CFPKM was eliminated in round 1.

Classic McEliece

Classic McEliece is largely based on one of the oldest cryptosystems designed for computers. It was originally developed in 1978 by Robert McEliece, whose design has withstood 40 years of public scrutiny. This design is largely unchanged from the original, with only the parameters scaled up to withstand the advances of modern computing. The system combines large systems of equations with a method of inserting random errors into the data (which are then corrected during decryption).
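The "insert errors, correct them on decryption" idea can be shown with a deliberately trivial error-correcting code. Real McEliece uses binary Goppa codes and hides the code's structure inside a scrambled public key; none of that is modeled in this sketch, which uses a 5x repetition code purely to show errors being added and then corrected.

```python
# Toy error-correction sketch: NOT McEliece, just the core idea of
# deliberately flipping bits and recovering the message anyway.

REPEAT = 5  # each bit is sent 5 times, so up to 2 flips per group are correctable

def encode(bits):
    return [b for b in bits for _ in range(REPEAT)]

def add_errors(codeword, positions):
    noisy = list(codeword)
    for pos in positions:
        noisy[pos] ^= 1   # deliberately flip bits, as McEliece does at encryption
    return noisy

def decode(codeword):
    # Majority vote within each group of REPEAT bits corrects the errors.
    return [1 if sum(codeword[i:i + REPEAT]) > REPEAT // 2 else 0
            for i in range(0, len(codeword), REPEAT)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = add_errors(encode(msg), [0, 1, 7, 12, 13, 21])  # <= 2 errors per group
assert decode(noisy) == msg   # errors vanish on decryption
```

In the real system the errors are what makes eavesdropping hard: only the holder of the private key knows the code's hidden structure and can run the efficient decoding step.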

McEliece also has the benefit of being computationally faster than RSA, but the complex systems of equations make for a large handshake (300+ KB), which is why it is rarely used today. As network speeds increase over time, this becomes less and less of an issue; in the era of 56K modems, it mattered far more.

The largest weakness that has been discovered for McEliece is structural decoding, but it can only be applied if the initial parameters for McEliece are not chosen carefully. Problematic parameters usually arise in modified versions of McEliece that attempt to make the public key sizes smaller. (Example) The modern implementation accounts for this.

CRYSTALS-KYBER

KYBER has some notable improvements over the classic LWE construction, namely using the same algorithm for the distribution as for the noise that is introduced, and using a square instead of a rectangular matrix. It is further improved by using polynomial rings (the Module-LWE construction) rather than integers to simplify some of the internal math operations.
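The polynomial-ring arithmetic that Module-LWE schemes like KYBER build on can be sketched in a few lines. This uses a toy degree and modulus (KYBER itself works in degree 256 with a much larger modulus, and uses NTT-based fast multiplication rather than the schoolbook loop shown here); the only real feature preserved is the reduction rule x^n = -1.

```python
# Toy arithmetic in the ring Z_q[x]/(x^n + 1) -- parameters are illustrative.

N = 4        # toy ring degree (KYBER uses n = 256)
Q = 17       # toy coefficient modulus

def ring_mul(a, b):
    # Schoolbook multiply of two degree-<N polynomials, wrapping x^N to -1
    # and keeping every coefficient reduced mod Q.
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:
                res[k - N] = (res[k - N] - ai * bj) % Q  # wrap-around flips sign
    return res

# (1 + x) * (1 + x) = 1 + 2x + x^2 in the ring
assert ring_mul([1, 1, 0, 0], [1, 1, 0, 0]) == [1, 2, 1, 0]
# x^3 * x = x^4 = -1 mod (x^4 + 1): constant term becomes Q - 1
assert ring_mul([0, 0, 0, 1], [0, 1, 0, 0]) == [Q - 1, 0, 0, 0]
```

Replacing big integer matrices with these compact ring elements is what keeps Module-LWE keys and ciphertexts relatively small.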

Interestingly, the KYBER paper recommends combining KYBER with something like ECDH to gain both the classical computing benefits as well as the quantum resistance benefits of KYBER. This, however, would add a significant computing penalty and further increase the size of the key exchange. It isn’t clear whether this recommendation carries over to a final implementation of KYBER or if it is a temporary recommendation based on prudence and the nature of the ongoing research and development of the algorithm.

The algorithm designers have focused on simplicity and staying within the bounds of current well-researched design. This means that they have forgone optimizations in favor of a (presumably) safer design. The overall design is small and simple, requiring less than 300 lines of code, meaning that implementations are easy to integrate into a project and easy to review for security. Additionally, the design is constant-time out of the box, which is a leg up on other NIST candidates that will need to design a constant-time implementation if their projects continue to advance. Constant-time implementation is important because it prevents timing-based side-channel attacks.
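A small example shows why constant-time code matters. The naive comparison below returns as soon as a byte differs, so its running time leaks how many leading bytes of a guess were correct, letting an attacker recover a secret byte by byte; Python's `hmac.compare_digest` deliberately does the same amount of work regardless of where the mismatch is.

```python
import hmac

def naive_equal(a, b):
    # Variable-time comparison: DO NOT use on secrets.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False    # early exit -- timing depends on the data
    return True

secret = b"\x07" * 32
wrong_at_start = b"\x00" + b"\x07" * 31
wrong_at_end = b"\x07" * 31 + b"\x00"

# naive_equal rejects the first guess after 1 byte but the second after 32,
# and that difference is measurable over enough trials.
assert naive_equal(secret, wrong_at_start) is False
assert naive_equal(secret, wrong_at_end) is False

# hmac.compare_digest performs the same work either way.
assert hmac.compare_digest(secret, secret) is True
assert hmac.compare_digest(secret, wrong_at_end) is False
```

A cryptosystem whose reference design is constant-time from the start spares every downstream implementer from rediscovering this class of bug.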

It also features two different sets of implementations, one that is faster on AES accelerated hardware, and one that is faster on hardware that lacks acceleration.

Interestingly, the project only targets AES-128 and AES-192 equivalent strengths, opting not to create a mode that can claim AES-256 equivalent strength against both classical and quantum attacks.

GeMSS – Great Multivariate Short Signature

GeMSS was eliminated in round 1.

Giophantus

Giophantus was eliminated in round 1.

Gravity-SPHINCS

Gravity-SPHINCS was eliminated in round 1.

Gui

Gui was eliminated in round 1.

HILA5

HILA5 was eliminated in round 1, but components of it may carry over to the merged Round5 project (which combines HILA5 and Round2).

HiMQ-3

HiMQ-3 was eliminated in round 1.

HK17

HK17 was withdrawn from the competition by the team.

HQC – Hamming Quasi-Cyclic

HQC is a quasi-cyclic algorithm similar in nature to BIKE. It introduces noise (errors) into the scheme that is corrected during decryption. Its performance metrics and security proofs closely resemble BIKE's, and like BIKE its traffic does not look like random data when deployed.

HQC requires ephemeral data that must be thrown away at the end of each session, giving the algorithm forward secrecy.

In its strongest mode, it (reportedly) provides approximately AES-256 equivalent strength with key sizes of about 18KB, making the size of the full key exchange much smaller than a lattice-based handshake. However, it is possible with this handshake scheme for decryption to fail, causing the entire process to be repeated. This means doubled round trips between the client and server, doubled bandwidth usage, and doubled computation. Repeat failures could be expensive.
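The retry cost is easy to model. This hypothetical sketch simulates a key exchange whose decapsulation fails with some probability, forcing a full redo; the failure rates used are made up for illustration and are not HQC's actual figures (real schemes target astronomically small failure probabilities).

```python
import random

# Hypothetical cost model: each failed handshake forces a complete retry,
# so attempts follow a geometric distribution with mean 1 / (1 - p).

def expected_handshakes(failure_rate, trials=100_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        attempts = 1
        while rng.random() < failure_rate:   # failure -> redo the whole exchange
            attempts += 1
        total += attempts
    return total / trials

# With no failures, exactly one handshake per connection:
assert expected_handshakes(0.0) == 1.0
# An (unrealistically high) 50% failure rate doubles the expected round
# trips, bandwidth, and computation:
assert abs(expected_handshakes(0.5) - 2.0) < 0.05
```

The same model shows why designers push decryption-failure rates so low: the expected cost blows up as the failure probability grows, and every retry costs both endpoints.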

Also, it may be hard to prevent denial-of-service attacks that work by intentionally initiating handshakes with too many errors. Because legitimate handshakes can also fail, an automated system cannot simply block IPs that send “bad” handshakes; it would have to keep processing them on the assumption that retries are normal, until a sufficient number of bad ones outs the sender as a bad actor. A botnet could exploit this to overwhelm the CPU of a host employing this algorithm with deliberately bad requests. This is something that all algorithms that introduce noise must face, not just HQC.

Derek is a cryptographer, security expert and privacy activist. He has twelve years of security experience and six years of experience designing and implementing privacy systems. He founded the Open Source Technology Improvement Fund (OSTIF) which focuses on creating and improving open-source security solutions through auditing, bug bounties, and resource gathering and management.