9 Minutes to Crack Bitcoin? The Boundaries and Misinterpretations of Google's Quantum White Paper

Doc: Max He @ Safeheron Lab

Key Takeaways of This Article

  • Google’s white paper significantly advances an engineering-style assessment of quantum risk, but it does not prove that CRQC is close to real-world deployment

  • A drop in resource estimates does not mean real attack capability is already in place; many engineering challenges remain unsolved

  • What the industry needs to build is not only the capability to “adopt post-quantum algorithms,” but the capability to “respond to ongoing changes in cryptography”

  • 2030–2035 is the critical reference window for reverse-planning migration readiness, not the precise point when quantum attacks will arrive

On March 30, 2026, researchers from Google Quantum AI, together with the Ethereum Foundation and Stanford University, released a major white paper [1]. This 57-page paper systematically analyzes the threat that quantum computing poses to cryptocurrencies and provides, so far, the most aggressive resource estimates: breaking the 256-bit elliptic-curve cryptography relied on by Bitcoin and Ethereum requires fewer than 500,000 physical qubits—which reduces the previous best estimate by nearly 20x.

Meanwhile, the paper expands its discussion of quantum attacks from Bitcoin to the entire cryptocurrency ecosystem, further pointing out that potential quantum attack surfaces also exist in Ethereum mechanisms such as smart contracts, staking consensus, and data availability sampling. This means the white paper is no longer merely a single question of whether a Bitcoin private key can be broken by quantum computing. Instead, it is pushing the entire industry to re-examine: when existing blockchain systems face the evolution of quantum capabilities, which security assumptions may need to be re-evaluated.

This white paper has sent clear shockwaves through the blockchain industry. The claim that “quantum computing could break Bitcoin within minutes” spread rapidly, prompting many practitioners to re-examine longstanding security assumptions. What makes it trigger such a strong reaction is not only that the resource estimates have gone down further, but also that, for the first time, it puts “whether an on-chain transaction-window attack is possible” and “whether blockchain systems can complete migration in time” onto the same discussion plane. The problem is no longer just the academic question of “can it be broken,” but the engineering and governance question of “is there still enough time to prepare.”

But behind these emotions, a more important question is worth asking: what has Google actually proven? What has it not proven? To what extent does this work change our understanding of quantum risk?

It is worth noting that the impact scope discussed in this white paper is not limited to key-exposure problems in the style of Bitcoin; it extends to broader attack surfaces across cryptocurrency systems. Still, this article focuses on how the work changes overall judgments of quantum risk, not on the detailed impact on each on-chain mechanism in turn.

1 What exactly did Google do this time?

1.1 ECDLP: The basic assumption behind blockchain security

The security of today’s mainstream cryptocurrencies is built on the elliptic-curve discrete logarithm problem (ECDLP) [2]. Taking the secp256k1 curve used by Bitcoin and Ethereum as an example [3], the core assumption is: under classical computing conditions, given a public key (a point on the elliptic curve), it is not possible to derive the corresponding private key within a feasible timeframe.

This assumption has been widely accepted over the past several decades and forms the foundational security premise of the entire blockchain system. However, Shor’s algorithm [4] shows that under an idealized quantum computing model, ECDLP can be solved efficiently, thereby undermining this security foundation in theory.
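The one-way nature of this assumption can be made concrete in a few lines of code. The sketch below is a minimal, illustrative pure-Python implementation (not constant-time, not production cryptography): it derives a secp256k1 public key from a private key via double-and-add scalar multiplication. The forward direction runs in milliseconds; inverting it classically at 256-bit scale is believed infeasible.

```python
# Minimal secp256k1 arithmetic (illustrative only, not for production use).
# Curve: y^2 = x^3 + 7 over the prime field F_P.
P = 2**256 - 2**32 - 977  # field prime
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    """Add two affine points; None represents the point at infinity."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                 # a + (-a) = infinity
    if a == b:
        s = 3 * x1 * x1 * pow(2 * y1, -1, P)        # tangent slope (curve has a = 0)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P)         # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def pubkey(priv):
    """Public key = priv * G via double-and-add: ~256 doublings, fast."""
    result, addend = None, G
    while priv:
        if priv & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        priv >>= 1
    return result
```

Computing `pubkey(k)` is easy; recovering `k` from `pubkey(k)` is exactly the ECDLP, which Shor’s algorithm solves efficiently on an idealized quantum computer but which no known classical algorithm solves in feasible time at this scale.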

1.2 Resource estimates: How much quantum computing is needed to break it

The core of Google’s work is not to propose a new attack method, but to answer again a long-standing question: if, in the future, we truly can build a quantum computer large enough, stable enough, and capable of running quantum algorithms of this kind, how many computational resources would be required to break ECDLP?

The paper constructs and optimizes quantum circuits targeting secp256k1 and provides two different optimization implementation paths: one that reduces the number of logical qubits as much as possible, and another that reduces the number of non-Clifford gates (such as Toffoli gates) as much as possible. Under a set of explicit hardware and error-correction assumptions, these circuits can be executed at a scale of fewer than 500,000 physical qubits.

Compared with earlier mainstream estimates [5][6], this result shows clear improvement in a composite metric called “spacetime volume.” More importantly, it turns what was previously a mostly theoretical discussion into a set of engineering parameters that can be compared and tracked.

1.3 “9 minutes”: Where does this number come from?

Beyond resource estimates, the paper also provides an intuitive magnitude for the attack time.

Assuming quantum gate operation time is on the order of microseconds and accounting for some execution overhead, running the full related quantum circuit roughly requires on the order of tens of minutes. Since part of the computation in the quantum algorithm can be completed before the public key appears, the computation truly related to the target public key can be compressed to about half the time, yielding the estimate of “about 9 minutes.”

This number draws broad attention because it is close to Bitcoin’s average block time of about 10 minutes. This means that under certain assumptions, an attacker could theoretically complete private-key recovery before transaction confirmation.
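A back-of-envelope calculation (our illustration, not taken from the paper) shows why a roughly 9-minute attack time matters when blocks arrive on average every 10 minutes. Modeling block arrival as a Poisson process, inter-block times are exponentially distributed, so the probability that the next block takes longer than the attack does is far from negligible:

```python
import math

MEAN_BLOCK_MIN = 10.0   # Bitcoin's average block interval
ATTACK_MIN = 9.0        # the paper's order-of-magnitude attack time

# Under a Poisson block-arrival model, inter-block times are exponential,
# so P(next block takes longer than t) = exp(-t / mean).
p_attack_fits_in_window = math.exp(-ATTACK_MIN / MEAN_BLOCK_MIN)
print(f"P(block interval > {ATTACK_MIN:.0f} min) = {p_attack_fits_in_window:.2f}")
# prints P(block interval > 9 min) = 0.41
```

The point is only that the two timescales are comparable; whether such an attack is actually feasible depends entirely on the idealized hardware premises the paper states.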

It also needs to be emphasized that this time estimate depends on a whole set of idealized premises. Its significance is more about providing a reference on the order of magnitude, not a direct reflection of real-world attack capability.

1.4 Zero-knowledge proofs: Why they don’t disclose the circuit

Another important feature of the paper is that, without disclosing the specific quantum circuits, it introduces a method of “verifiable disclosure” [7].

The research team commits to the circuit via hashing and, within a public verification procedure, checks the circuit’s behavior on a set of random inputs while also verifying its resource upper bound. The entire verification process is encapsulated as a zero-knowledge proof, enabling any third party to confirm the correctness of the related claims without accessing the circuit details.

This approach strikes a balance between “protecting attack details” and “increasing the credibility of the conclusion.” It also makes the resource estimates no longer just the researchers’ statements, but verifiable in a cryptographic sense.
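The commit-then-verify idea can be illustrated with a plain hash commitment. This is a simplified sketch of the commitment step only; the paper’s actual construction wraps verification in a zero-knowledge proof, which this toy does not attempt.

```python
import hashlib
import os

def commit(secret: bytes):
    """Commit to a secret (e.g., a serialized circuit) without revealing it."""
    salt = os.urandom(32)                            # hiding: salt blinds the secret
    digest = hashlib.sha256(salt + secret).digest()  # binding: SHA-256 collision resistance
    return digest, salt

def verify(digest: bytes, salt: bytes, claimed: bytes) -> bool:
    """Anyone holding the commitment can later check a revealed value against it."""
    return hashlib.sha256(salt + claimed).digest() == digest

circuit_description = b"<serialized quantum circuit>"   # placeholder payload
c, salt = commit(circuit_description)
assert verify(c, salt, circuit_description)             # honest reveal passes
assert not verify(c, salt, b"a different circuit")      # tampering is detected
```

A plain commit-reveal scheme still forces eventual disclosure; the zero-knowledge layer is what lets claims about the committed circuit (its behavior on random inputs, its resource upper bounds) be checked without ever revealing it.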

2 How should we understand this?

Before further understanding these results, one concept is worth clarifying first.

The paper repeatedly mentions CRQC (Cryptographically Relevant Quantum Computer). This is not a generic term for quantum computers; it refers specifically to quantum computing systems with real-world cryptanalysis capability. In other words, what the blockchain industry should truly care about is not whether quantum computing continues to advance, but when it will reach the point of breaking cryptographic problems such as ECDLP under real-world conditions.

From this perspective, the significance of Google’s work is not just showing progress in quantum computing itself, but more specifically answering a question: what scale of resources, execution capability, and time characteristics does a quantum computer need in order to pose a threat to real cryptographic systems?

Further, this question can be understood across three dimensions: the execution characteristics of the quantum computing system, the different paths that technological evolution may take, and the specific attack methods that these capabilities ultimately correspond to.

2.1 Fast clock and slow clock: There is more than one kind of quantum computer

One key perspective proposed in the paper is distinguishing among different types of quantum computing architectures.

Some platforms (such as superconducting qubits) have faster basic operation speeds and shorter error-correction cycles, enabling deep circuits to be executed within shorter time spans. Others (such as ion traps [14] or neutral atoms) operate more slowly but may have advantages in other aspects.

This difference implies that “quantum computing capability” is not a single metric. Even with quantum systems of the same scale, under different architectures, the practical attack capability against cryptographic problems may differ by orders of magnitude.

These differences in execution characteristics directly affect the way CRQC forms and its time structure: some systems are closer to completing computation within a short time window, while others are more suited to long-duration execution.

2.2 Two possible evolution paths

Based on the architectural differences above, we can further consider possible evolution paths for quantum computing capabilities.

One possibility is that quantum systems with faster execution capability reach fault tolerance first. In that case, real-time attacks on on-chain transactions (e.g., recovering the private key before a transaction is confirmed) become the main risk. Another possibility is that slower but more stable systems achieve breakthroughs first. In that case, attacks are more likely to focus on public keys exposed over long periods, such as historical addresses or keys reused more than once.

These two paths are not mutually exclusive, but they correspond to clearly different risk time structures and defensive priorities.

From this angle, the emergence of CRQC does not necessarily correspond to a single clear time point, and is more likely to manifest as a gradual process in which different capabilities become available over time.

2.3 Three types of attacks

Within the above framework, quantum attacks can be broadly divided into three categories.

The first is an “on-spend attack”: recovering the private key within the time window after a transaction enters the mempool and before it is written into a block. The second is an “at-rest attack”: targeting public keys that have long been exposed on-chain, giving the attacker far more time to compute. The third is an “on-setup attack”: targeting protocols that rely on certain public parameters, where a single quantum computation yields a backdoor that can then be reused.

The commonality among these three attack types is that they all rely on the same underlying capability—solving ECDLP within an acceptable timeframe—but their dependence on time windows and system structure differs.

Seen in terms of outcomes, these three categories are different manifestations of the same thing: the specific impacts that arise, under different system conditions and time constraints, once quantum computing capability reaches the level represented by CRQC.

3 How far is it really from real quantum attacks?

3.1 What this white paper does not prove

It needs to be emphasized that although this white paper significantly advances the engineering-style assessment of quantum risk, it does not prove that CRQC is close to real-world deployment. Nor does it prove that existing blockchain systems will face real, feasible quantum attacks in the near term.

What the paper actually does is, under a set of explicit assumptions, further compress the resource estimates required to break secp256k1, pushing what was previously a somewhat abstract discussion of risk into a position that is more suitable for engineering evaluation. What it proves is: the related problems are more specific than previously understood and are more worth continuously tracking. But it does not prove that the large-scale fault-tolerant quantum systems required to support these attacks are already right around the corner.

3.2 Resource requirements are going down, but the engineering distance is still obvious

Going one step further: the gap between “quantum algorithms can theoretically break ECDLP” and “real-world quantum capability that actually threatens cryptographic systems” is not a simple matter of engineering scale-up. What truly determines whether quantum attacks can be deployed is not only the resource-estimate numbers on paper, but the overall system capability: fault-tolerant architecture, error correction, real-time decoding, control systems, and the ability to execute deep circuits stably over long periods.

Some of these conditions do belong to engineering implementation problems. But they cannot be simply understood as: “as long as we keep investing, they will eventually and naturally be solved.” Quantum error correction and fault-tolerant computation provide scalable paths in theory. However, whether the real world can integrate these conditions into a CRQC that can be sustained and capable of threatening real cryptographic systems remains clearly uncertain.

From this perspective, a more accurate interpretation of Google’s white paper is not that it announces quantum attacks are imminent. Rather, it enables the industry for the first time to discuss this risk using more concrete engineering parameters, while also reminding us not to equate the decline in resource estimates directly with the readiness of real attack capability.

3.3 This is not a question suitable for precise year prediction

And precisely for this reason, the arrival of quantum attacks should not be understood as a single time point that can be predicted precisely. For the blockchain industry, what truly matters is not “in which year will CRQC surely appear,” but whether the related capabilities are evolving in a direction that is increasingly worth worrying about.

On the one hand, key breakthroughs could significantly change resource requirements in a short time. On the other hand, even a seemingly close technological roadmap may end up staying stuck at certain fundamental bottlenecks for a long time. This means it will be difficult to judge when real attack capability will emerge through linear extrapolation like “how many qubits this year, how many qubits next year.”

Therefore, a more robust way to approach this issue is not to bet on a single precise year, but to acknowledge the strong uncertainty while focusing attention on the underlying signals that would truly change risk assessments.

3.4 The most worth worrying about is that warning signs may not be obvious

This also implies that the community should not expect to obtain a clear warning signal from any one “public demonstration of a quantum attack.”

Many people are used to treating public demonstrations as a sign of technical maturity—so it seems that if we have not yet seen a demonstration in the real world, the distance to actual threats must be far. But in the problem of quantum cryptanalysis, this intuition may not hold. By the time certain milestone demonstrations do appear, the relevant capability may already have been accumulated for a long time in deeper technical stages, and the defense window may already have narrowed significantly.

For the blockchain industry, this is exactly the hardest part to deal with: the changes that matter may not unfold in a clear, gradual, externally visible manner.

4 How should we judge quantum progress?

4.1 Don’t just look at the number of qubits

If Section 3 answers “roughly where we are now,” then the next question is: what should we look at in the future to judge quantum progress more accurately?

The easiest metric to spread—and also the easiest to misunderstand—is the number of qubits. It is intuitive and eye-catching, but for cryptanalysis capability, it is far from the only metric, and not even the most critical one. Simply increasing the number of physical qubits does not automatically mean the system is approaching real attack capability.

What is truly more worth focusing on is whether these qubits can be effectively organized under fault-tolerant conditions, whether they can reliably support the execution of deep circuits, and whether they form a closed loop with the algorithms and the control system. For the industry, “how many qubits” can at most indicate a change in scale, but it cannot, by itself, indicate how close a real threat is.

4.2 The three kinds of signals that truly deserve attention

If you want a relatively actionable judgment framework for quantum progress, you can focus on three categories of signals.

The first is hardware signals. What truly matters here is not only the number of physical qubits, but whether stable logical qubits are beginning to appear, whether error correction has entered a scalable stage, and whether the system can continue running under error-correction conditions.

The second is algorithm signals. This white paper from Google itself is a typical example. For the blockchain industry, more worth watching is not any single number on its own, but whether these resource estimates continue to decrease: whether the number of logical qubits declines, whether the number of critical gate operations declines, and whether the overall spacetime volume keeps converging.

The third is system signals. This is often the easiest to overlook. Even if both hardware and algorithms are improving, you still need to see whether system-level capabilities are gradually maturing—for example, the ability to execute deep circuits stably over long periods, the scalability of the control system, and whether multiple key conditions begin to be satisfied simultaneously. Real-world attack capability ultimately depends not on a single metric, but on whether these conditions can converge into a closed engineering pathway.

4.3 Public demonstrations can be a reference, but not the only signal

Many people will naturally expect some kind of “milestone moment”—for example, an experimental platform publicly demonstrates running the relevant algorithm on a small-scale curve, and then everyone treats it as the signal that risk is truly starting to appear.

Such signals certainly have reference value, but they are not suitable as the only basis for judgment. From the perspective of technological evolution, public demonstrations are often just an outcome rather than the earliest change itself. What is more important is whether the underlying conditions mentioned earlier have already been gradually met.

For the industry, a more practical approach is not to wait for a dramatic moment, but to build a habit of continuous tracking: observe whether hardware has entered a new stage, whether algorithm resources continue to be compressed, and whether system capability is moving from “dispersed improvements” to “overall formation.” Compared with “when you see a demonstration,” a better question is: before we see a demonstration, have we already understood the direction of technical progress?

5 How should the industry prepare?

5.1 This is not “a problem for now,” but we must start preparing now

From an engineering reality perspective, quantum computing does not yet have the capability to launch attacks on existing cryptocurrency systems. Whether it’s hardware scale, error control, or the ability to execute deep circuits stably over long periods, there is still a clear gap between current reality and the conditions assumed in the paper.

But that does not mean the industry can keep postponing this issue indefinitely. Compared with the past, an important change is that the relevant technical pathways have become clearer, and resource estimates are continuing to converge. For blockchain systems, what truly needs attention is not any specific time point, but whether there is already enough time and space reserved for future migration.

Upgrading cryptographic infrastructure is often not a one-time simple software replacement. It involves protocols, implementations, ecosystem coordination, asset migration, and changes in user habits. Its time scale is usually measured in years, not months or quarters. From this perspective, this is not a “this will blow up now” problem, but it is a problem that must enter planning as early as possible.

5.2 Algorithms will change, but blockchain system design doesn’t need to be overturned

What quantum computing directly undermines are specific cryptographic assumptions that blockchain systems rely on (for example, elliptic-curve-based signature schemes), not the full set of security problems that blockchain systems, as a class of security systems, are designed to address.

This means that many security mechanisms that have already been proven effective today will not lose value because of the advent of quantum computing. For the blockchain and digital asset industry, whether it’s key management, multi-party computation (MPC), hardware isolation (TEE), access control, audit mechanisms, or the overall security architecture built around account systems, transaction approval, risk control, and governance—what these solutions still target are real-world problems such as key exposure, single-point failures, internal risks, and operational mistakes. These problems will not disappear as the underlying cryptographic primitives change.

So a more reasonable understanding is not “in the quantum era, the entire blockchain security system must be rebuilt from scratch,” but: what needs to be upgraded first is the underlying cryptographic components; what needs to be preserved and strengthened is the design principles that blockchain systems have already formed in key protection, permission layering, risk isolation, and governance control. What is truly important is not just replacing one signature algorithm, but enabling the whole system to carry out cryptographic migration of this kind.

5.3 From “which algorithm to pick” to “whether we can migrate smoothly”

Current post-quantum cryptography has entered the standardization and engineering deployment phase. The first batch of PQC standards led by NIST was formally released in 2024 [12], but different schemes still differ significantly in performance, signature size, implementation complexity, and security assumptions. Engineering practice and industry adoption pathways are also continuing to evolve.

Against this backdrop, the more important issue is changing from “betting on a specific algorithm too early” to whether the system has the ability to migrate smoothly.

This capability spans several layers: whether new signature schemes can be introduced without impacting business continuity; whether the system can support a period of hybrid mode; and whether it can still adjust and remain compatible as standards and engineering practices continue to evolve.
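A hybrid mode can be sketched as “verify both, accept only if both pass.” The snippet below is a structural illustration using stand-in keyed-MAC “schemes” (not real ECDSA or ML-DSA); the point is the AND-composition, which remains secure as long as at least one component scheme still holds.

```python
import hashlib
import hmac

def make_scheme(key: bytes):
    """Stand-in signature scheme: HMAC-SHA256 as a sign/verify pair.
    In a real system these would be e.g. ECDSA (classical) and
    ML-DSA (post-quantum); HMAC is used here only to keep the sketch runnable."""
    def sign(msg: bytes) -> bytes:
        return hmac.new(key, msg, hashlib.sha256).digest()
    def verify(msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(sign(msg), sig)
    return sign, verify

classical_sign, classical_verify = make_scheme(b"classical-key")
pq_sign, pq_verify = make_scheme(b"post-quantum-key")

def hybrid_sign(msg: bytes):
    return classical_sign(msg), pq_sign(msg)

def hybrid_verify(msg: bytes, sigs) -> bool:
    # AND-composition: forging a hybrid signature requires breaking BOTH schemes.
    return classical_verify(msg, sigs[0]) and pq_verify(msg, sigs[1])

tx = b"transfer 1 BTC"
sigs = hybrid_sign(tx)
assert hybrid_verify(tx, sigs)
assert not hybrid_verify(b"transfer 100 BTC", sigs)
```

The trade-off is larger signatures and double verification cost; in exchange, the transition period has no single point of cryptographic failure, which is exactly the property a smooth migration needs.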

In the long run, what the blockchain industry truly needs to build is not only the capability to “adopt post-quantum algorithms,” but the capability to “respond to ongoing changes in cryptography.” The former is a single migration cycle, while the latter is a long-term sustainable system design.

6 Conclusion: An important technical signal

From today’s engineering reality, quantum computing is still not sufficient to pose a real-world threat to existing cryptocurrency systems. Whether it’s hardware scale, error control, or the fault-tolerant capability needed to execute deep circuits stably over long periods, there is still a clear gap from the conditions assumed in the paper. In other words, CRQC is not a technology that will naturally land “once time is up.” Its implementation still depends on a series of engineering challenges that have not yet been fully overcome.

At the same time, this problem is no longer suitable to be treated as an abstract discussion of the distant future. In March 2026, Google explicitly set its post-quantum migration timeline to 2029 [8]; the UK NCSC provided 2028, 2031, 2035 as three key migration milestones [9]. The G7 Cyber Expert Group’s roadmap for the financial system does not set a regulatory deadline, but it also treats 2035 as a reference target for overall migration and recommends that key systems prioritize completing migration in 2030–2032 [10].

Meanwhile, it’s also necessary to avoid over-interpretation. Based on the mainstream public information available today, even relatively aggressive public judgments more often shift the risk window forward to around 2030, rather than forming a consensus conclusion that “CRQC will be clearly deployed before 2030.” Global Risk Institute’s 2025 expert survey shows that within the next 10 years, CRQC appearing is “quite possible (28%–49%)”; only within the next 15 years does it enter the “likely (51%–70%)” range [11].

Therefore, the most important significance of Google’s white paper is not that it declares that quantum attacks have arrived. Instead, it makes this issue concrete for the first time: it can be discussed, it can be assessed, and it must begin to be prepared for. For the blockchain and digital asset industry, 2030–2035 is a critical window that deserves serious attention and leaves room for migration. It may not correspond to the exact year when quantum attacks truly arrive, but it very likely determines whether the industry will still have enough room to respond calmly by then.
