NIST did something clever this time: Round 3 was separated into two groups, Finalists and Alternative Candidates.
Finalists are algorithms that NIST (and the majority of the cryptographers involved in NIST's decision-making) believe are production-ready and worth standardizing immediately.
Meanwhile, Alternative Candidates are algorithms that, for one reason or another, NIST believes need a few more years of cryptanalysis and research before it can feel confident about them.
I’m sure everyone in cryptography has their own opinions about the Round 3 selection, but I’d like to analyze the Round 3 finalists from a very opinionated, real-world perspective.
I’ll look at the alternative candidates in a future post.
I will be evaluating Key Encapsulation algorithms based on real-world operational requirements for public-key encryption and key encapsulation, rather than theoretical notions of security; namely:
- Feasibility to integrate with existing transport-layer protocols (TLS, Noise, etc.)
- Support for offline public-key encryption (a.k.a. “sealing”) APIs in application-layer cryptography.
This means anyone can encrypt with a public key that is kept online, but decryption requires a secret key that is kept offline and only accessed when needed.
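To make that concrete, here is a minimal sketch of the shape of a sealing API. The `keygen`/`encapsulate`/`decapsulate` functions below are a hash-based toy stand-in for a real KEM (my invention for illustration; they provide no security, post-quantum or otherwise), and the XOR keystream stands in for a proper AEAD:

```python
import hashlib
import secrets

# Toy stand-in for a KEM -- NOT a real (or secure) scheme; it only
# illustrates the shape of an offline "sealing" API.
def keygen():
    sk = secrets.token_bytes(32)               # kept offline
    pk = hashlib.sha256(b"pk" + sk).digest()   # published online
    return pk, sk

def encapsulate(pk):
    ct = secrets.token_bytes(32)
    ss = hashlib.sha256(pk + ct).digest()      # shared secret
    return ct, ss

def decapsulate(sk, ct):
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ct).digest()

def seal(pk, message):
    # Anyone holding the public key can seal, at any time...
    ct, ss = encapsulate(pk)
    keystream = hashlib.sha256(b"stream" + ss).digest()
    return ct, bytes(m ^ k for m, k in zip(message, keystream))

def unseal(sk, ct, boxed):
    # ...but unsealing requires the offline secret key.
    ss = decapsulate(sk, ct)
    keystream = hashlib.sha256(b"stream" + ss).digest()
    return bytes(c ^ k for c, k in zip(boxed, keystream))
```

(For brevity, messages here are capped at the 32-byte keystream; a real sealing API would encrypt the payload with an AEAD keyed from the KEM's shared secret.)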
I will be evaluating Digital Signature algorithms based on the following criteria:
- Statelessness: Can the algorithm be safely used in a virtual machine that gets rolled back?
- Bandwidth: Are signatures reasonably small?
Some people might object to these criteria, but this post is ultimately my opinion, and those are the criteria I care about most.
Post-Quantum Key Encapsulation Round 3 Finalists
Classic McEliece is based on error-correcting codes.
Classic McEliece has large public keys (ranging from 261120 bytes to 1357824 bytes in the Round 2 submission) but short ciphertexts (128 to 240 bytes). This makes it difficult to use Classic McEliece in protocols like TLS, since even the smallest public key would span roughly 180 TCP packets.
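Back-of-the-envelope math for the public-key transmission cost, assuming a typical 1460-byte TCP payload (1500-byte MTU minus IPv4 and TCP headers; real overhead varies):

```python
# Rough packet count to ship a Classic McEliece public key over TCP.
MSS = 1460              # typical TCP payload bytes per packet (assumption)
smallest_pk = 261_120   # bytes, smallest Round 2 parameter set
largest_pk = 1_357_824  # bytes, largest Round 2 parameter set

packets_small = -(-smallest_pk // MSS)  # ceiling division
packets_large = -(-largest_pk // MSS)
print(packets_small, packets_large)     # → 179 931
```

The 240-byte ciphertexts, by contrast, fit comfortably in a single packet.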
Classic McEliece does not have decryption failures, making it suitable for offline public-key encryption. However, the enormous public key may hinder adoption here too.
CRYSTALS (Cryptographic Suite for Algebraic Lattices) encompasses two proposals: Kyber (key encapsulation) and Dilithium (signatures).
Kyber public keys range from 800 bytes (for Kyber-512) to 1568 bytes (for Kyber-1024) and ciphertexts range from 736 bytes to 1568 bytes, which means that Kyber can be easily used in protocols like TLS.
Kyber decryptions have a failure probability of about 1 in 2^160. While this is unlikely to ever occur in the real world, any nonzero failure probability baked into the algorithm means that, strictly speaking, a symmetric key encrypted with Kyber can fail to decrypt. This makes it difficult to recommend Kyber with confidence for offline-decryption protocols, since there is no opportunity to recover from the (however improbable) error.
(This isn’t an argument that 2^-160 is too high. Any nonzero probability baked into any algorithm is still nonzero, and therefore irks me.)
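To put 2^-160 in perspective, here's a rough (and entirely hypothetical) calculation of how long a fleet performing a billion decapsulations per second would run before expecting a single failure:

```python
# Expected time until one 2^-160 decryption failure, at a (hypothetical)
# sustained rate of one billion operations per second.
ops_per_second = 10**9
seconds_per_year = 365 * 24 * 3600          # ~3.15e7
expected_ops_until_failure = 2**160

years = expected_ops_until_failure / (ops_per_second * seconds_per_year)
print(f"{years:.2e} years")                 # roughly 4.6e31 years
```

That dwarfs the age of the universe many times over; my objection is philosophical, not practical.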
NTRU is another lattice-based scheme, and one of the oldest post-quantum lattice cryptosystems ever developed. (It’s so old that the patents have all expired.)
Like Kyber, NTRU offers small public keys (699 bytes for ntruhps2048509 on the low end, 1230 bytes for ntruhps4096821 on the high end), making NTRU suitable for protocols like TLS that require small-ish public keys.
Like Kyber, NTRU has a decryption failure rate below 1 in 2^100 for all parameter sets. (The precise failure rates are not given in the NTRU specification document.) Strictly speaking, this may never be a practical concern, but it is still technically possible to encrypt something with NTRU that cannot be decrypted. That makes it difficult to recommend NTRU for this use case.
Update (2020-10-06): Paulo Barreto emailed me with a correction: NTRU’s submission for Round 2 boasts zero decryption failures.
SABER is another lattice-based key encapsulation algorithm. SABER's security is based on a different problem than Kyber and NTRU. However, SABER gets bonus points for calling its lightweight mode LightSABER and using a lightsaber in its website's design.
SABER offers small public keys (672 to 1312 bytes) and ciphertexts (736 to 1472 bytes). TLS should have no problem with SABER.
As with the other lattice-based proposals, the decryption failure probability is nonzero (2^-120 to 2^-165 for the SABER variants).
Round 3 Key Encapsulation Finalist Summary
If your only goal for post-quantum cryptography is to design primitives that can be used in online modes like TLS, then any of the lattice-based proposals (Kyber, NTRU, SABER) is a great choice.
For offline public-key encryption, all three of those proposals carry a nonzero risk of decryption failure, which can only really be mitigated by encrypting the same session key multiple times and storing multiple ciphertexts. (Update: See below.) This failure risk is already negligible (cosmic rays are more likely), but offline protocols have no recovery mechanism (whereas online protocols can just re-initiate a key exchange and call it a day).
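For illustration, the redundancy math, under the assumption that each ciphertext's failure is independent (key-correlated failures would weaken this):

```python
from fractions import Fraction

# If each ciphertext fails independently with probability p, storing n
# independently-produced ciphertexts of the same session key fails only
# when ALL of them do: p**n.  (Independence is an assumption; in lattice
# KEMs, failures partly depend on the keypair as well.)
p = Fraction(1, 2**160)  # Kyber-style failure bound
for n in (1, 2, 3):
    combined = p**n
    # -log2 of the combined failure probability:
    print(n, combined.denominator.bit_length() - 1)  # → 160, 320, 480
```

So two or three redundant ciphertexts drive an already-negligible risk further into absurdity, at the cost of storing extra ciphertexts forever.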
One might argue that the probability is negligible (in fact, most cryptographers agree that a failure probability below 1 in 2^120 isn't worth thinking about, since it will never happen in the real world). However, I contend that a nonzero risk disqualifies these algorithms from sealing APIs in general-purpose cryptography libraries: the developers of such a library cannot perform the decryption-failure risk calculus on behalf of all of its users.
That is to say: In my humble opinion, in order to safely implement offline decryption, the algorithmic failure probability must be zero. Any nonzero algorithmic failure rate makes me grumpy (even if there’s no real-world, practical risk).
You might be tempted to just use Classic McEliece for sealing APIs, but since the public keys are enormous, I don't see this being an option for constrained environments (e.g., embedded devices).
In short, I think the NIST PQC Round 3 finalists completely missed a use case that’s important to me, but apparently not important to NIST. (Offline decryption with a long-lived keypair.) Here’s hoping one or more of the non-lattice key encapsulation algorithms reaches maturity so we’re not caught in a catch-22.
Update (2020-10-06): I was mistaken about NTRU on two fronts:
- I had overlooked NTRU HRSS701 (as Bo-Yin Yang points out in the comments section).
- Round 2 of the NTRU NIST submission has a failure probability of zero (as Paulo Barreto informed me via email).
Given that NTRU is therefore suitable for offline public-key encryption (or, more to the point, offline HPKE), my above complaints about missed use-cases are therefore invalid. (However, for the sake of transparency, I’m only retracting them, not erasing them.)
Post-Quantum Digital Signature Round 3 Finalists
Dilithium is the signature counterpart to Kyber.
Dilithium is stateless. Public keys range between 1184 and 1760 bytes. Signatures range between 2044 bytes and 3366 bytes.
Nothing to complain about here.
FALCON is difficult to implement, but it offers great performance and compact public keys (897 to 1793 bytes) and small signatures (~657 to ~1273 bytes). According to the Falcon website, Falcon-512 is about five times as fast as 2048-bit RSA.
Did I mention it’s stateless?
Rainbow is not a lattice-based scheme, in case you were getting sick of those. Instead, it's based on multivariate cryptography.
Rainbow public keys are somewhat large (149 KB to 1705.5 KB, although you can get 58.1 KB to 491.9 KB if you use compressed public keys). However, it boasts short signatures (64 bytes to 204 bytes).
Unlike the other schemes, the Rainbow specification doesn't spend much time introducing the scheme in words. Instead, the paper dives straight into the algorithm details without explaining the "why".
Round 3 Digital Signature Finalist Summary
If you’re going to implement Kyber, it makes sense to also use Dilithium for signatures. Otherwise, FALCON’s performance is hard to beat.
Rainbow is interesting but requires large public keys and needs better documentation.
The Round 3 key encapsulation finalists are great for online encryption, but I don't feel all warm and fuzzy about algorithms that possess an inherent decryption failure probability, which several of them do. Classic McEliece's enormous public keys make it less attractive than the lattice-based candidates.
The Round 3 digital signature finalists are solid. I have a slight preference for FALCON but Dilithium looks great. I don’t know enough about Rainbow to evaluate it.
If I had to make any prediction about the PQC selection, it would be (in order of most-to-least likely to be selected, assuming anyone at NIST shares my priorities):
- Key encapsulation
  - Classic McEliece
- Digital signatures
(Excluding the alternative candidates, which merit a deeper analysis.)
Header art by Riley.