A Cryptography Engineer’s Perspective on Quantum Computing Timelines
33 points by fkooman
I think this is a well-articulated point, and it's pretty damn important for our industry. I don't have an opinion to offer but I look forward to reading lobste.rs commentary about this.
I spent the evening looking into quantum computing timelines as a non-expert in quantum computing. Here is what I’ve learned:
We currently have machines with ~1,000–1,500 physical qubits at error rates around 10⁻³, and Google’s algorithm requires ~500,000 physical qubits operating coherently together with surface code error correction, yoked qubit storage, magic state cultivation producing ~500K T states per second, and reaction-limited execution at 10μs cycle times — none of which has been demonstrated beyond small-scale proof-of-concept experiments.
Scaling from where we are to where this needs to be isn’t a matter of incremental improvement along a Moore’s Law curve; it requires solving qualitatively new engineering problems in qubit fabrication yield, correlated error suppression across a massive chip (or multi-chip interconnects that don’t exist yet), cryogenic wiring and control electronics for half a million qubits, real-time classical decoding at the required throughput, and sustained coherence of a “primed” quantum state across minutes of wall-clock time — any one of which could prove to be a multi-year bottleneck, and all of which must be solved simultaneously.
Given the above, I just don’t see how we’re going to get to a cryptographically relevant quantum computer by 2030, especially given that we need a ~350× increase in physical qubit count with simultaneously tighter error correlations, an entirely new cryogenic control and wiring architecture to address half a million qubits, real-time decoding infrastructure that doesn’t exist yet, magic state distillation factories operating at industrial throughput, and multi-minute coherent idle times for primed states — and historically, solving even one of these at scale has taken the field the better part of a decade.
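To make the gap concrete, here's the back-of-the-envelope arithmetic behind that ~350× figure, using the ballpark numbers quoted above (not precise measurements):

```python
# Rough arithmetic on the scale-up gap between today's hardware and the
# ~500K-physical-qubit factoring estimate. All figures are the ballpark
# numbers from the thread above, not precise measurements.
physical_needed = 500_000            # Gidney-style factoring estimate
today_low, today_high = 1_000, 1_500 # current machines, roughly

scale_best = physical_needed // today_high   # optimistic: start from 1,500
scale_worst = physical_needed // today_low   # pessimistic: start from 1,000
print(f"scale-up needed: ~{scale_best}x to ~{scale_worst}x")
```

So "~350×" is roughly the middle of a 333×–500× range, and that's before accounting for the simultaneously tighter error rates.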
“Doesn’t the NSA lie to break our encryption?” No, the NSA has never intentionally jeopardized US national security with a non-NOBUS backdoor
I believe the author when they say ML-KEM and ML-DSA are trustworthy. And I believe the NSA does good things sometimes! But there's a little too much hedging in that statement—maybe they're just being precise, but to my layman's eyes I feel like it doesn't support the author's argument very well.
I can't find the source, but I remember reading an article once that maintained that backward compatibility with legacy crypto while transitioning to PQ (AKA, "crypto-agility") is a bad idea, since it threatens everybody with protocol downgrade attacks. I think of that claim as uncontroversial, but I'd be interested in other perspectives, if there are any.
Depends on the context. If it's a migration path ahead of the old protocol being completely busted, it's fine. In many cases, it's the only realistic path forward. You just need to be prepared to sunset support for the legacy protocol before it's too late (on both ends), which means you really need a head start.
Even after the prophesied quantum-calypse, there might be ways to do reasonably secure negotiation. For example, maybe you have a non-vulnerable way to obtain some assertions from an external trust anchor such as a CA. Or maybe you preload some information for up-to-date clients, or cache it on first use (similar to SSH known_hosts or browser HSTS state).
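For what it's worth, the known_hosts-style idea is simple enough to sketch. Here's a minimal trust-on-first-use pin store in Python (the class name and file format are made up for illustration; a real implementation would also need key rotation, atomic writes, and so on):

```python
import json
import os

class PinStore:
    """Minimal trust-on-first-use (TOFU) pin store, in the spirit of SSH's
    known_hosts: remember a peer's key fingerprint the first time we see
    it, and refuse to proceed if it later changes. Illustrative only."""

    def __init__(self, path: str):
        self.path = path
        self.pins = {}
        if os.path.exists(path):
            with open(path) as f:
                self.pins = json.load(f)

    def check(self, host: str, fingerprint: str) -> bool:
        """Pin on first use; afterwards, only accept the pinned value."""
        if host not in self.pins:
            self.pins[host] = fingerprint        # first use: trust and record
            with open(self.path, "w") as f:
                json.dump(self.pins, f)
            return True
        return self.pins[host] == fingerprint    # later uses: must match
```

The crucial property is that the attacker has to be present on the very first connection; after that, tampering shows up as a mismatch rather than a silent downgrade.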
Finally, some folks might be more concerned about mass surveillance than about their ISP / the government / the bad guys actively tampering with their traffic, in which case, you could argue that even a vulnerable negotiation scheme is worth something. Although that's a tenuous argument, because all it takes is being on the same public wifi as the bad guy.
If you control all the clients of a protocol, then cryptographic agility doesn't gain you anything, and you can move to hybrid now.
If you don't control the clients, then some measure of "agility" is required in order to migrate them and maintain availability. But that has broad meaning. Agility has been used to allow pluggable algorithms and parameters in a protocol so that clients can negotiate them at connection time. Being as flexible as possible means that you can support many different clients over many generations. The problem is that implementors (developers or system administrators) then need to know that Foo Protocol is cryptographically secure only as long as you don't use the X or Y ciphers, or the Q parameter of the Z cipher. This can lead to insecure misconfigurations (for sysadmins) or downgrade vulnerabilities (for devs).
I think that, generally, modern cryptographic designs prefer to specify a single, small set of algorithms and their public parameters per protocol. If you need different algorithms, specify a different protocol version. This reduces the amount of algorithm negotiation and complexity that leads to vulnerabilities.
https://soatok.blog/2022/08/20/cryptographic-agility-and-superior-alternatives/ is a good reference for this.
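To illustrate the versioned alternative: instead of negotiating individual algorithms, each protocol version pins one complete suite, and the only thing peers negotiate is the version itself. A rough Python sketch (the suite contents are hypothetical examples, not a recommendation):

```python
# Each protocol version pins exactly one complete suite, so there is no
# per-algorithm negotiation to get wrong. Suite contents are illustrative.
SUITES = {
    1: {"kex": "X25519", "sig": "Ed25519", "aead": "ChaCha20-Poly1305"},
    2: {"kex": "X25519+ML-KEM-768", "sig": "Ed25519",
        "aead": "ChaCha20-Poly1305"},
}

def negotiate(client_versions, server_versions):
    """Pick the highest version both sides support, or fail closed."""
    common = set(client_versions) & set(server_versions) & set(SUITES)
    if not common:
        raise ValueError("no common protocol version")
    return SUITES[max(common)]
```

Sunsetting the legacy path then means deleting version 1 from the table, rather than auditing every combination of negotiable knobs.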
Authentication is not like that, and even with draft-ietf-lamps-pq-composite-sigs-15 with its 18 composite key types nearing publication, we’d waste precious time collectively figuring out how to treat these composite keys and how to expose them to users.
Here's the list of the key types. There are so many because of the combinatorial explosion of classical signature algorithms × post-quantum signature algorithms.
But it's not clear that all those types necessarily need to be supported! e.g. people have been suggesting that RSA-2048 be avoided for a while now. If we said that the exclusive classical signature algorithm was Ed25519 (or Ed448, if it'd make people feel better to pick a different FIPS-certified signature algorithm, but with >128 bits of security), then we avoid the issue of a million different key types altogether.
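For what it's worth, the 18-type count falls out of a simple cross product, and fixing the classical side collapses it. An illustrative Python sketch (these lists are my approximation of the draft's registry, not an exact copy of it):

```python
from itertools import product

# Illustrative lists only -- not necessarily the draft's exact registry --
# to show how composite key types multiply, and how fixing the classical
# algorithm collapses the count.
pq = ["ML-DSA-44", "ML-DSA-65", "ML-DSA-87"]
classical = ["RSA-2048", "RSA-3072", "ECDSA-P256",
             "ECDSA-P384", "Ed25519", "Ed448"]

all_composites = [f"{p}+{c}" for p, c in product(pq, classical)]
print(len(all_composites))   # full cross product: 18 key types

only_ed25519 = [f"{p}+Ed25519" for p in pq]
print(len(only_ed25519))     # one classical algorithm: 3 key types
```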