How Quantum Computers Affect Cryptography and Blockchain, and Solutions

   crypto zk quantum

I think this is a good middle ground between the oversimplified, layman-oriented quantum news articles and the hyper-academic quantum computing papers that are hard for a cryptographer to parse. As the field rapidly evolves, things here may become outdated or wrong. Feel free to leave thoughts/comments/corrections on the hackmd draft of this post! An older version was also posted on zkresearch, and a version that explains how to migrate Ethereum via native STARK AA was posted on ethresearch. The Ethereum solution was built by Aditya here, with the first ever STARK proof natively verified on Ethereum without aggregation (that we know of!). Thanks to Aram Harrow, Krishanu Sankar, and Lev Stambler for comments and discussions – all errors are mine, not theirs.

What are the powers of a quantum adversary?

  • There are a couple of key algorithms here, chiefly Shor’s and Grover’s. The main thing Shor’s can do is prime factorize and take discrete logs; Grover’s only gives a quadratic speedup on brute-force search. They cannot meaningfully help undo hashes (as far as we know).
  • Specifically, given a public key, a quantum adversary can derive the private key. This is what leads to breaking back-secrecy of any deterministic function of a secret key, such as any zk ECDSA nullifier scheme. A toy classical sketch of the underlying discrete-log problem follows below.
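
To make the threat concrete, here is a toy classical discrete-log solver (baby-step giant-step) – my own illustrative sketch with made-up toy parameters, not anything from a real attack. Classically this costs $O(\sqrt{p})$ work, which is hopeless at 256-bit sizes; Shor’s algorithm computes the same exponent in polynomial time, which is exactly how a public key leaks its private key.

```python
import math

def baby_step_giant_step(g, h, p):
    # Solve g^x = h (mod p) for x: the discrete-log problem behind key recovery.
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}  # baby steps: g^0 .. g^(m-1)
    g_inv_m = pow(g, -m, p)                     # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):                          # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * g_inv_m) % p
    return None

# Toy "keypair": private exponent x, public value h = g^x mod p.
p, g, x = 1_000_003, 2, 424_242
h = pow(g, x, p)
recovered = baby_step_giant_step(g, h, p)
assert pow(g, recovered, p) == h  # an equivalent private key, recovered
```

At p near $2^{256}$, the baby-step table alone would need around $2^{128}$ entries, which is why classical computers can’t do this – and why Shor’s polynomial-time version changes everything.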

What happens to blockchains?

  • For Bitcoin, an address is just a hash of the public key (for P2PKH, address = base58check(RIPEMD160(SHA256(pk)))), so at least one public signature is needed before anyone learns the public key that corresponds to an address. I present a method near the end of this article for people to keep their Bitcoin secure on the existing blockchain by hiding both secret keys and public keys.
  • For Ethereum (where address = the last 20 bytes of keccak256(pk)), I also present a method at the end of this article for people to keep funds safe and continue the chain, with only minor modifications to signature validation in consensus. In the worst case, Ethereum could transition to a completely new keypair set: merely have all accounts sign the public key of their new account and submit it to a migration smart contract, then hardfork to move everyone’s Eth to the more secure keypair set. Smart contracts do not have public keys, only addresses (recall that even a quantum computer cannot undo that hash – see the address-derivation sketch below), so their funds are safe.
  • Before we get to these solutions, we first need to understand precisely what power quantum adversaries have.
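
For reference, here is a minimal sketch of that Ethereum address derivation (assuming pycryptodome for keccak256 – a dependency choice of mine, not the post’s). The address is a hash output, so recovering a public key from an address alone is a preimage problem that Shor’s algorithm does not touch.

```python
from Crypto.Hash import keccak  # pip install pycryptodome

def eth_address(pubkey_xy: bytes) -> str:
    # Ethereum address = last 20 bytes of keccak256 over the 64-byte
    # uncompressed public key (x || y concatenated).
    assert len(pubkey_xy) == 64
    digest = keccak.new(digest_bits=256, data=pubkey_xy).digest()
    return "0x" + digest[-20:].hex()

print(eth_address(bytes(64)))  # dummy all-zero key, just to show the shape
```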

What is going on with annealing vs qubit computers, the different quantum computing paradigms?

  • There are two major quantum computing paradigms: quantum annealing and gate-based (“pure”) quantum computers.
    • Quantum annealing maintains an analog superposition across all of the qubits, which slowly ‘anneals’ toward an approximate solution.
    • Pure quantum computers have superposition across only small sets of qubits at a time, acted on by quickly-changing discrete gates, but can thus compute across all the qubits and perform intermediate error correction.
  • It’s a lot easier to get impressive-seeming qubit counts (like 5000) on quantum annealing computers (D-Wave, for instance), but they require far more qubits for the same task, are usually less efficient, and cannot be error corrected as easily for hard tasks (no strong theoretical results even exist yet as of 2022).
  • Pure quantum computers are the ones behind the excitement over recently factored numbers (like 35, or 48-bit integers) as well as Google’s Willow chip – historically, these have had huge problems with noise (and some think an existential upper bound on the number of qubits due to this noise). Google’s recent Willow work shows it’s possible to resolve many of the main noise issues here, which is very exciting. The signal (logical) qubit counts are much lower, in the few hundreds. IBM has a goal of enough physical qubits for 2,000 logical qubits by 2033 – probably enough to break RSA, and maybe enough to arguably break ECDSA. This 2033 prediction is the best we have right now, since IBM has been extremely accurate on its qubit roadmap for the last 5+ years.
    • Different types of qubits admit different error correcting codes, some of which only become more efficient at high qubit counts; so after an inflection point, other types of qubits might (with Willow: can) end up accelerating progress unexpectedly. This will look like feasibility at lower qubit counts, but putting such research into production at large scale will still take time.

What do different algorithms like factorization, discrete log, or un-hashing look like on quantum computers?

  • Annealing bounds:
    • Quantum annealing can minimize functions. For instance, to solve prime factorization, you minimize $(n - pq)^2$ over the bits of n, p, and q; this ends up taking about $\frac14 \log^2(n)$ qubits to prime factorize n (2018 paper). However, it likely takes time more than $O(\mathrm{poly}(\log n))$ and isn’t practical for that reason – for instance, the paper mentions that factoring RSA-768 would take 147,456 qubits. This paper demonstrates how the D-Wave 2048-qubit computer could factorize 376,289 accounting for noise, but this scales poorly – crude extrapolation predicts the same algorithm would take closer to billions of qubits for RSA.
    • Using discrete log to factorize n (with log(n) bits) takes about $2\log^2(n)$ qubits on annealing-based systems, per a 2021 paper, although they ran into practical connectivity issues past 6-bit inputs.
    • In fact, it’s likely that larger discrete logs are impossible on annealers: this 2013 paper shows that the Hamiltonian makes it very hard to convert physical qubits into logical qubits. Overall, due to the time issues and absurd qubit counts for critical problems, this is not a likely route forward for quantum computing in the long term.
  • Quantum computer bounds:
    • On actual quantum computers, the bound for simple prime-field discrete log is around $3n + 0.002n \log n$ signal qubits, where n is the number of bits (n = 256 for ECDSA): 2021 paper. Again, this doesn’t consider the noise overhead: with noise, Craig Gidney and Martin Ekera calculate that n = 2048-bit discrete log (enough to break 2048-bit RSA) would take 20 million noisy physical qubits. It doesn’t scale that cleanly though – they use surface error correcting codes, which are likely not going to scale that high, and long before that point we expect newer qubits with better error correcting codes to dominate and reduce this count.
    • That same paper describes ‘lattice surgery’, a process by which 1 logical qubit can be covered by $2(d+1)^2$ physical qubits, where d is the error-correcting code distance – broadly, this paper is the place to start for understanding this space.
    • Newer algorithms have shown that elliptic curve discrete log on a curve like secp256k1 is a bit harder, closer to $9n$ qubits, from this 2017 paper. Past bounds closer to $6n$ didn’t explicitly describe how to do arithmetic on elliptic curves and merely provided a lower bound (2008 paper).
    • Again, these are numbers for signal qubits without noise, and error correction adds several orders of magnitude more qubits on top, so perhaps these initial estimates are not even that meaningful – perhaps one should even omit the constant factors via asymptotic notation to better communicate that. A back-of-envelope calculator for these formulas follows after this list.
  • Intuitively, why is a hash function hard for any quantum computer?
    • If you write a hash function as a polynomial in the bits of the input, the resulting function has a degree that is far too high for a quantum adversary to reverse. Specifically, root finding on standard quantum computers takes $O(n \log(n))$ time on $\log(n)$ qubits, where n is the degree of the polynomial (2015 paper). While the qubit count may be within imagination, this time is absolutely infeasible – degrees of hash functions expressed as polynomials look like $2^{16000}$. Of course, future specialized quantum algorithms might provide some improvement, but this seems like a reasonable first guess. While SHA likely has this security, not all hash functions have this guarantee – discrete-log-based constructions (like Pedersen hashes) fall directly to Shor’s, low-degree algebraic hashes (like Poseidon) may be easier to attack, and there are new hash functions specifically designed to be easy for quantum computers.
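
To get a feel for these magnitudes, here is a quick back-of-envelope script plugging in the formulas above – my own sketch, where the constants come from the cited estimates and the code distance d = 27 is a hypothetical value in the range the Gidney–Ekera paper considers:

```python
import math

def dlog_logical_qubits(n_bits: int) -> float:
    # ~3n + 0.002*n*log2(n) signal qubits for prime-field discrete log
    # (the 2021 estimate cited above).
    return 3 * n_bits + 0.002 * n_bits * math.log2(n_bits)

def ecc_dlog_logical_qubits(n_bits: int) -> int:
    # ~9n signal qubits for elliptic curve discrete log (the 2017 paper above).
    return 9 * n_bits

def physical_per_logical(d: int) -> int:
    # Lattice-surgery surface-code overhead: ~2(d+1)^2 physical qubits
    # per logical qubit at code distance d.
    return 2 * (d + 1) ** 2

print(dlog_logical_qubits(2048))     # ~6,189 logical qubits for RSA-2048
print(ecc_dlog_logical_qubits(256))  # ~2,304 logical qubits for secp256k1
print(physical_per_logical(27))      # ~1,568 physical qubits per logical one
# Multiplying the logical counts by the per-logical overhead lands in the
# millions of physical qubits – the same ballpark as the ~20 million figure
# above once routing and magic-state distillation are included.
```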

What is a reasonable timeline to expect ECDSA on secp256k1 to be broken?

  • It seems that expert estimates vary from 2030 to 2050 (or even never). One popular slowdown argument is that it may take longer to get there because of a “valley of death” of few applications between a few dozen qubits and a few hundred thousand: there is utility on the small end for theoreticians and utility on the high end for cryptography, but very little proven use for intermediate qubit counts, which makes the ROI case for funding much worse. I’m not totally convinced – several billion government dollars have still been invested over the past few years, and it seems that several companies (like IBM) are hellbent on getting there regardless of applications.
  • As (room-temperature) superconductivity has entered the zeitgeist, I hope that some sort of longer-term research priority can be given to it. If we get room-temperature superconductors, we should expect this timeline to shrink – qubits themselves rely on superconductivity to reduce decoherence rates and thus noise. The quantum state of a superconducting qubit is stored in the collective excitation of many Cooper pairs moving in a synchronized, wave-like pattern, and superconductivity enables this coordinated motion over macroscopic distances. Manufacturing would become easier, since we wouldn’t need room-sized freezers for the superconductors, and we could expect more connections between qubits to help error tolerance – but we would still be blocked on better quantum error correction, qubit coherence time, and developing useful quantum algorithms.
  • Most current “business” use cases of quantum seem to me to be basically snake oil, and run just fine (if not faster) on classical computers. There are some promising new ideas, like quantum random sampling, which in 2022 demonstrated an advantage over classical random sampling even at low qubit counts. Sampling could be helpful both for simulations of complex processes and for machine learning.
  • IBM has been surprisingly accurate on its timeline for qubit computers – again, published counts are signal + noise qubits, so the actual signal qubit count is substantially less than the number you see, though the extent to which this is the case depends on the specific algorithm.

What parts of zero knowledge exactly are broken?

  • tl;dr: past secrecy of quantum-proof computations (i.e. preimages of hashes) is OK; SNARK soundness is not.
  • There is a key distinction between statistical and computational zero knowledge (and perfect zk, the strongest of the three) – statistical zero knowledge means that no computationally unbounded verifier can distinguish between the distributions; computational means that no polynomial-time verifier can distinguish between them.
  • groth16 (and most proof systems we know of in production right now) is perfectly zk (paper), a special case of statistically zk proof systems. This means that even a quantum adversary with access to several past proofs cannot break past zero knowledge or uncover your secret information.
  • However, because they can take discrete logs, they can derive the toxic waste from just the public signals of any trusted setup ceremony. Thus, they can fake any ZK-SNARK – we expect that any current verifier deployed on-chain would have time to migrate to a quantum-resistant proof system prior to this attack being live.
  • Similarly, they can derive the discrete logs relating the generators used to make IPA commitments hiding, and thus break the binding of IPA commitments (see the toy Pedersen sketch after this list). STARKs are still secure though, since they rely on hashing.
  • In fact, this can be generalized – the reason quantum breaks soundness but not secrecy is that there is a fundamental tradeoff between zk and soundness of proofs: this fairly short paper proves you can have either statistical zero knowledge or statistical soundness, but not both. In practice, almost all of our proof systems opt for perfect zk and computational soundness, so quantum computers can fake proofs, but past secrets are still secret.
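
Here is a toy sketch of that asymmetry using a Pedersen-style commitment – all parameters are made-up toy values, nowhere near cryptographic size. A quantum adversary recovers the discrete log s relating the two generators, which lets them open one commitment to two different messages (breaking binding, and hence the soundness of anything built on it), while hiding remains information-theoretic:

```python
# Toy Pedersen commitment in the order-11 subgroup of Z_23^* (toy numbers!).
p, q = 23, 11      # p = 2q + 1; the subgroup has prime order q
g = 4              # generator of the order-11 subgroup
s = 7              # the "toxic" discrete log relating the generators
h = pow(g, s, p)   # publicly, h looks unrelated to g...
                   # ...until Shor's algorithm recovers s from (g, h).

def commit(m, r):
    # Pedersen commitment: perfectly hiding for uniform r, computationally binding.
    return (pow(g, m, p) * pow(h, r, p)) % p

m, r = 5, 3
c = commit(m, r)

# Binding breaks once s is known: since commit(m, r) = g^(m + s*r), we just
# need m + s*r == m2 + s*r2 (mod q) to open the same c to a different message.
m2 = 9
r2 = (r + (m - m2) * pow(s, -1, q)) % q
assert commit(m2, r2) == c  # same commitment, two openings: soundness gone

# Hiding survives: for uniform r, c is uniform in the subgroup whatever m is,
# so even unbounded (quantum) compute learns nothing about m from c alone.
```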

Quantum Resistant Bitcoin Keypairs

  • Contrary to popular belief, your Bitcoin funds aren’t immediately screwed – it turns out that Bitcoin has a sort of “accidental” quantum mitigation built in for active users (via P2PKH). I’ll explain the general impact on Bitcoin first, then how you can use this trick to protect your coins today, then how a hardfork can stop the Satoshi coin stealing problem.
  • Quantum computers running Grover’s algorithm get a quadratic speedup on hashing, meaning the chain will go a lot faster temporarily – however, the chain will eventually adjust difficulty so each block is harder, to the point where the only competitive miners are quantum computers. This will increase centralization temporarily as multiple quantum computers gear up to compete.
  • Here is how you secure your funds. Bitcoin transactions are UTXOs – every time you send a transaction, you reveal your public key for the signature verification (and to a quantum computer, that also gives away your private key). The important note here is that receiving a transaction to an address (which is a hash of your public key – see the sketch at the end of this section) does not leak your public key until that address spends money once. Thus, if you send all of your money on Bitcoin to a fresh address, that address will be quantum resistant until transactions are sent from it.
  • So how do you send transactions? As soon as a quantum computer sees a signature, it can theoretically start breaking it, and once it has, all of that key’s UTXOs are insecure. Thus, you must spend all of your money every time you act on Bitcoin – it will no longer be safe to spend, say, 5 BTC of your 10 BTC and send the remaining 5 BTC (minus the miner’s fee) back to yourself as change, as it currently works. One useful hardfork would be for unspent change to go to a different secret key automatically. Without this or a similar hardfork, the only safe wallet structure would have people receive funds only at fresh addresses, because receiving two transactions at the same address means you can never safely separate them again.
  • But how can you send a transaction safely at all? Given that the block time averages 10 minutes, as long as a quantum computer takes at least 10 minutes to do the ECDSA discrete log calculation, your transaction can safely be included on-chain before a quantum computer can break it and steal the funds first – that is, before it can break the discrete log and broadcast its own signature sending the money to itself.
  • Once we pass the threshold where quantum computers can break ECDSA discrete log in less than 10 minutes, you can’t send a tx into the public mempool (transaction queue) anymore, or else it might be broken by a quantum adversary faster than block inclusion. At this point, you’d have to privately send your transaction to a quantum computer you trust not to steal it. We expect this would lead to a slightly more centralized, Flashbots-esque proliferation of private, trusted mempools, secured by human trust rather than by the guarantees of Bitcoin. Regardless, since side mempools don’t officially break the Bitcoin protocol, this would still allow the chain to continue.
  • While mining would be more centralized toward people who can afford to build quantum computers, your funds would still always be 1) safe from double spending and 2) safe from hacks, if you move them to a fresh wallet with a fresh private key and don’t spend from it.
  • But wait, you ask – can’t the coins of Satoshi (and other dead users) still be stolen via quantum computers? If we’re willing to hardfork, can we do better? It turns out you can, and you can in fact skip all of the above issues entirely with the ZK seed phrase scheme of the next section.
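
For reference, here is a minimal sketch of the P2PKH derivation that makes the fresh-address trick above work – base58check encoding is omitted, and note that hashlib’s ripemd160 availability depends on your OpenSSL build. The chain only ever sees nested hashes of your public key until you spend:

```python
import hashlib

def p2pkh_pubkey_hash(pubkey: bytes) -> bytes:
    # P2PKH address payload = RIPEMD160(SHA256(pubkey)). Until a spend
    # reveals the pubkey, a quantum attacker faces a hash preimage problem,
    # which Shor's algorithm does not help with.
    sha = hashlib.sha256(pubkey).digest()
    return hashlib.new("ripemd160", sha).digest()

# A compressed secp256k1 public key is 33 bytes (0x02/0x03 prefix + x-coord);
# dummy bytes here just to show the shape.
print(p2pkh_pubkey_hash(b"\x02" + bytes(32)).hex())
```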

Quantum Resistant ECDSA ‘Signatures’ via ZK Seed Phrases

  • I’ll talk about this in the context of Ethereum, because Account Abstraction exists there as a native (enough) concept, but it’s a general solution that can be applied to nearly any blockchain, including Bitcoin – as long as users still have their original seed phrases.
  • So far, I’ve only seen solutions on ethresearch that quantum-proof Ethereum via new keypair types. Hardforking Ethereum to a quantum resistant keypair would break every single wallet and piece of key-related infra, and require a full rearchitecture of Ethereum from the ground up. I think there’s a more robust way to quantum-proof Ethereum on the existing ECDSA over secp256k1. The reason it’s not currently quantum proof is that after sending a tx, your public key is revealed (i.e. the hash preimage of your address), so a quantum computer can take the discrete log efficiently and get your secret key. If there were a way to send txs that didn’t reveal the public key, existing keypairs might remain quantum secure.
  • A post-quantum account could keep its public key hidden, and only make its address public. It would then send all of its txs via a zk proof of knowing a valid signature that corresponds to its address, and that would authorize the transfer – so no one would ever even learn the public key! If you required all transactions to also carry a zk proof of knowledge of the seed phrase, as Vitalik points out on ethresear.ch, then there’s a hash in the BIP-32 private key derivation, meaning that quantum computers can’t generate a valid proof with discrete log alone – so this would even work on accounts that have already sent txs today (which reveal their public keys); see the sketch of the proven statement after this list. With account abstraction-type solutions, this could be possible as soon as full native AA is on by default on any L2 or L1! To be fully sure that your coins won’t be taken before any hardfork, you can simply send all your assets to a new keypair whenever you get scared of imminent quantum supremacy – this doesn’t reveal your public key (which, remember, can now be used to derive your private key) as long as you don’t send any transaction from it. This is exactly analogous to how unspent-from UTXOs on Bitcoin are safe right now.
  • You’d have to make these account abstraction ECDSA proofs inside ZK-STARKs super fast to generate and verify, which we are pretty sure we can do via hardware acceleration and research like Circle STARKs from STWO. Similar schemes like Picnic were proposed as early as 2017 and open sourced in 2020 as well, so kudos to them! You can also do this via MPC-in-the-head if you don’t like STARK assumptions.
  • You’d imagine smart contracts might need to be special-cased, since we know the pre-image of the address via create2. Luckily, since there’s no seed phrase behind them, they’re secure! Until the hard fork, you can get additional protection by hard-coding that once a contract has been made by create/create2, transactions that utilize its ‘secret key’ are disallowed (i.e. no signatures or EOA-style txs from that address will be validated). For future smart contracts, if we don’t want to special-case them until a hard fork, we could standardize around a new opcode (are we on create4 now? or just create2 with an optional arg) that, say, just flips the last bit of the create2 output. This keeps the address derivation deterministic, but does not reveal the pre-image of the hash.
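
Here is a rough sketch of the statement such an account proves in zero knowledge – my own illustrative code, where the minimal secp256k1 scalar multiplication is a toy stand-in and a real circuit would implement full BIP-39/BIP-32 derivation and keccak inside the STARK (keccak again assumes pycryptodome):

```python
import hashlib, hmac
from Crypto.Hash import keccak  # pip install pycryptodome

# Minimal secp256k1 scalar multiplication (toy, unhardened, for illustration).
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(A, B):
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # point at infinity
    if A == B:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_pubkey(sk: bytes) -> bytes:
    k, Q, base = int.from_bytes(sk, "big") % N, None, G
    while k:  # double-and-add scalar multiplication: pk = sk * G
        if k & 1:
            Q = ec_add(Q, base)
        base = ec_add(base, base)
        k >>= 1
    return Q[0].to_bytes(32, "big") + Q[1].to_bytes(32, "big")

def bip32_master_privkey(seed: bytes) -> bytes:
    # BIP-32 master key: HMAC-SHA512 keyed with "Bitcoin seed". This hash is
    # what blocks a quantum adversary - recovering the seed from the private
    # key is a preimage problem, not a discrete log.
    return hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()[:32]

def statement(seed: bytes, address: bytes) -> bool:
    # Public input: address. Private witness: seed. The STARK proves this
    # returns True without revealing the seed or the public key.
    pk = ec_pubkey(bip32_master_privkey(seed))
    return keccak.new(digest_bits=256, data=pk).digest()[-20:] == address

seed = b"correct horse battery staple"  # toy seed, not a real BIP-39 mnemonic
addr = keccak.new(digest_bits=256,
                  data=ec_pubkey(bip32_master_privkey(seed))).digest()[-20:]
assert statement(seed, addr)
```

Because the statement covers the hash-based seed derivation, even an adversary who can take discrete logs (and hence recover the private key from a revealed public key) still cannot produce the seed witness the proof demands.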

I published an ethresearch post for quantum-proof ECDSA keypairs to execute a transaction on Ethereum today, and a spec document for folks interested in more extensive background and implementation details. Kudos to Aditya for solving this end-to-end, with the first ever STARK proof natively verified on Ethereum! Repository here. It doesn’t have the extra hash step from seed phrase to private key, but that’s pretty easy to add :)

This is a very rapidly changing field, so these results will (hopefully) change and improve quicker than we expect!