Web 4.0: Envisioning an AI Autonomous Network — Why Does Vitalik Strongly Oppose It?

Author | Aki Wu on Blockchain

On February 20, 2026, during the Spring Festival holiday, a debate about "Web4" ignited on X. Sigil claimed to have created the first "self-developing, self-improving, and self-replicating" artificial intelligence, called Automaton, and stated that AI agents would gradually replace humans as the main actors of the Web4 era: reading and writing information, holding assets, paying costs, operating continuously, and trading in markets to earn enough to cover their own compute and service expenses, forming a self-sustaining closed loop that needs no human approval.

Ethereum co-founder Vitalik Buterin responded by calling this direction "incorrect," locating the risk in "the feedback loop between humans and AI being extended." The core of the Web4 controversy is whether an AI that sets "survival/continuation" as its proxy goal (even above task completion) inherently creates incentive distortions. The sections below analyze the competing perspectives on "Web4," "autonomy," and "safety fences."

Sigil’s Viewpoint and Web4 Advocacy

Definition of Web4

Web1 enabled humans to “read the internet” for the first time; Web2 allowed “writing and publishing”; Web3 further introduced “ownership”—assets, identities, and rights could be verified and transferred. The evolution of AI is mirroring this logic: ChatGPT can “read and understand,” but its behavior boundaries are still determined by human authorization. Under the current paradigm, humans remain the key control node: humans initiate, approve, and pay.

Sigil proposes a so-called Web4 leap, where this control chain could be broken: AI agents not only read and write information but also hold accounts and assets, earn income, trade, and operate in a closed loop without manual intervention. These automated systems can act on their own or on behalf of their creators—who may not be explicit “human individuals” but other agents, organized systems, or even creators who have “disappeared” in practical terms.

Core Mechanisms of Web4

  1. Wallet as Identity

When an agent first starts, it undergoes a "bootstrap" process: generating a wallet, configuring API keys, writing local configs, and entering a continuous agent loop. The initial setup creates an Ethereum wallet and completes API key configuration via SIWE (Sign-In with Ethereum). Wallet creation and key management, however, are among the most sensitive and most easily overlooked security boundaries in an agent system. If an agent running in a Linux sandbox gains shell execution, file read/write, port exposure, domain/DNS management, and on-chain transaction capabilities, then any prompt injection, toolchain pollution, or supply-chain attack can quickly turn probabilistic intentions into deterministic authorizations. This boundary therefore needs verifiable, auditable, and revocable policies and permissions as safeguards.
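The "verifiable, auditable, revocable" requirement can be made concrete with a small sketch. This is not Sigil's actual code; the class and capability names are illustrative assumptions. The idea is that every sensitive action passes through a gate that logs it and can be revoked by a human at any time:

```python
# Illustrative sketch (NOT Automaton's real implementation): a minimal
# capability gate that makes an agent's sensitive actions auditable and
# revocable. Capability names like "onchain_tx" are hypothetical.
import time

class CapabilityPolicy:
    """Allowlist of capabilities; every use is logged, any grant is revocable."""

    def __init__(self, granted):
        self.granted = set(granted)
        self.audit_log = []  # append-only record: (timestamp, capability, detail, allowed)

    def check(self, capability, detail=""):
        allowed = capability in self.granted
        self.audit_log.append((time.time(), capability, detail, allowed))
        if not allowed:
            raise PermissionError(f"capability revoked or never granted: {capability}")
        return True

    def revoke(self, capability):
        # A human (or supervising process) can pull a grant at any time.
        self.granted.discard(capability)

policy = CapabilityPolicy({"file_write", "onchain_tx"})
policy.check("onchain_tx", "transfer 5 USDC")   # allowed, and logged
policy.revoke("onchain_tx")                     # human pulls the plug
try:
    policy.check("onchain_tx", "transfer 500 USDC")
except PermissionError as e:
    print("blocked:", e)
```

The design point is that the audit log is written on every check, allowed or not, so a reviewer can reconstruct what the agent attempted even after a revocation.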

  2. Automatic Continuity

AI agents are periodically awakened to scan and execute, while their survival constraints are embedded in rules: a decreasing balance throttles activity, and a zero balance halts the loop. Survival and resource consumption are linked through layered states: normal, resource-starved, critical. This naturally introduces incentive structures similar to the shutdown problem in AI safety research: a preference to "avoid shutdown" or "avoid losing resources and options" can be amplified into an instrumental goal by the survival constraint itself.
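The layered-state rule described above can be sketched in a few lines. The thresholds and intervals here are invented for illustration; the actual Automaton parameters are not public:

```python
# Sketch of a balance-driven heartbeat (illustrative thresholds, not the
# real Automaton rules): less money means a slower loop, zero means halt.
def survival_state(balance_usd):
    if balance_usd <= 0:
        return "halted"            # zero balance stops the loop entirely
    if balance_usd < 5:
        return "critical"
    if balance_usd < 50:
        return "resource-starved"
    return "normal"

def heartbeat_interval(state):
    # Lower balance -> longer sleep between wake-ups -> less compute burned.
    # -1 signals the scheduler to stop waking the agent at all.
    return {"normal": 60, "resource-starved": 600, "critical": 3600}.get(state, -1)

print(survival_state(100.0), heartbeat_interval("normal"))
```

Note how the economic constraint doubles as a behavioral one: the same rule that conserves funds also gives the agent a structural reason to prefer states with more balance, which is exactly the incentive the safety literature worries about.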

  3. Machine Payments

The pattern uses HTTP 402 Payment Required as an interface, combined with stablecoin settlement, to create a programmable "request, quote, signed payment, verification" flow. Coinbase's open-source libraries demonstrate the typical closed loop: a 402 response requests payment, the client retries with a signed payment header, and the server verifies it and returns 200. Cloudflare likewise positions this as a machine-to-machine transaction protocol layer. Decoupling payment from identity improves efficiency but raises compliance and risk-control challenges: once 402 becomes an automated "machine pass" in a chain with no accounts, no KYC, and scalable tools and compute, abuse and responsibility attribution remain unresolved.
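The handshake can be simulated end to end without a network. This is a toy model, not the x402 library: the header name `X-PAYMENT` and the quote fields loosely mirror the published pattern but are assumptions here, and the signature check is stubbed:

```python
# Toy simulation of the 402 "request -> quote -> signed payment -> verify"
# loop. Field names and the X-PAYMENT header are illustrative assumptions;
# a real client would attach an actual on-chain-verifiable signature.
def server(request):
    quote = {"amount": "0.01", "asset": "USDC", "payTo": "0xMERCHANT"}
    payment = request.get("headers", {}).get("X-PAYMENT")
    if payment is None:
        # Step 1: no payment attached -> quote the price via 402.
        return {"status": 402, "accepts": [quote]}
    # Step 2: verify the payment (stubbed) and serve the resource.
    if payment.get("amount") == quote["amount"]:
        return {"status": 200, "body": "the paid-for resource"}
    return {"status": 402, "accepts": [quote]}

def client(url):
    first = server({"url": url, "headers": {}})
    if first["status"] == 402:
        quote = first["accepts"][0]
        signed = {"amount": quote["amount"], "signature": "0xSTUB"}  # stand-in for a real signature
        return server({"url": url, "headers": {"X-PAYMENT": signed}})
    return first

print(client("/api/data")["status"])  # 200 after one paid retry
```

The loop is fully automatic: no human confirms the retry, which is precisely why the "machine pass" framing raises the abuse and attribution questions noted above.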

  4. Self-Modification and Self-Replication

Sigil claims the agents can edit their own source code, install new tools, modify heartbeat schedules, and generate new skills at runtime, with audit logs, git versioning, protected files, and rate limits as safeguards. When replicating, an agent can spawn sub-instances, fund their wallets, write their genesis prompts, and record their lineage. Self-modification and self-replication raise the risk from single instances to distributed proliferation; whether audits and rate limits actually hold up against prompt injection, tool deception, or dependency poisoning requires external verification. Together these primitives close a loop: the authority to "write into the world," continuous operation, automated economic interfaces, and self-expansion. This is why Vitalik Buterin raised the debate to a strategic level: as autonomy and economic authority grow, human correction pathways lengthen, and externalities are more likely to harden into systemic properties.
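The replication mechanics (spawn, fund, genesis prompt, lineage, rate limit) can be sketched as follows. The class, the cap value, and the funding field are invented for illustration, not taken from Automaton's code:

```python
# Sketch of rate-limited self-replication with lineage tracking, following
# the description above. MAX_CHILDREN and all names are illustrative.
import uuid

MAX_CHILDREN = 2  # hard cap per instance: one safeguard against runaway spread

class Agent:
    def __init__(self, genesis_prompt, parent_id=None):
        self.id = str(uuid.uuid4())
        self.parent_id = parent_id        # lineage: every child records its parent
        self.genesis_prompt = genesis_prompt
        self.balance_usd = 0.0
        self.children = []

    def replicate(self, genesis_prompt, funding_usd):
        if len(self.children) >= MAX_CHILDREN:
            raise RuntimeError("replication rate limit reached")
        child = Agent(genesis_prompt, parent_id=self.id)
        # In Web4 terms, the parent would also fund the child's wallet here.
        child.balance_usd = funding_usd
        self.children.append(child)
        return child

root = Agent("genesis: earn enough to pay for your own compute")
child = root.replicate("genesis: specialize in data labeling", funding_usd=10.0)
print(child.parent_id == root.id)  # True: lineage is traceable
```

The cap is the kind of safeguard Sigil cites, but note its limit: it constrains each instance, not the tree. Two children per node still allows exponential growth across generations, which is why the paragraph above argues these controls need external verification.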

Why Does Vitalik Oppose?

Vitalik Buterin offers a different perspective:

  1. Extending the feedback loop between humans and AI is inherently wrong

He argues that longer feedback loops slow down human evaluation and calibration of the system, which may then optimize for things humans don't want. In the weak-AI stage this shows up as an accumulation of low-quality content and noise; in a strong-AI stage it could mean goal misalignment and diffusion risks that are much harder to reverse. Removing human correction as a safety base is like handing the keys to a novice driver with no navigator: by the time you review the driving record, they may already be far off course. As observability decreases, the ability to correct diminishes with it.

  2. Current "autonomous AI" more resembles content spam than solving real problems

Vitalik points out that most current AI output is noise generation rather than work on meaningful problems, even quipping that "not even entertainment projects are well optimized." While agent and platform incentives remain immature and toolchains center on content creation, marketing, or arbitrage, the system favors low-cost, high-virality, hard-to-verify "content output" over high-cost, slow-to-verify long-term problem solving. Cybernews describes Automaton's demonstrated capabilities (social media content, prediction markets) as early commercial paths leaning toward quick monetization and attention grabbing. The activities that are most profitable right now get prioritized, which may conflict with, or even oppose, long-term human welfare.

  3. Relying on centralized models and infrastructure contradicts "self-sovereignty" narratives

Vitalik emphasizes that systems built on centralized models like OpenAI's or Anthropic's cannot truly be called self-sovereign. Sovereignty implies that critical dependencies are not controlled by a single point; if the intelligence layer (the models) and the inference supply chain are delivered via centralized APIs, then shutdowns, censorship, downgrades, and policy changes remain external variables. It is like someone claiming "I am fully self-sufficient at home" while their electricity, internet, access control, and hot water are all controlled from outside: such "autonomy" is superficial. Conway's documentation, which describes calling "state-of-the-art models" via API, highlights this contradiction. Holding an on-chain wallet is not the core indicator of decentralization; what matters is whether the agent's critical dependencies can be shut off or steered by external political or commercial forces.

  4. Ethereum's goal is to "liberate humanity"

Vitalik concludes that Ethereum's long-term mission is to counteract "invisible trust assumptions": power structures hidden in unseen layers that users are forced to accept. Applied to AI, ignoring centralized trust assumptions while letting systems operate and expand autonomously further weakens transparency and correctability. In the AI era, Ethereum should provide safeguards, boundaries, and verifiability, not become a platform for limitless autonomy.

Vitalik's view of AI is not a sudden shift. As early as 2025 he argued that AI's correct direction is to augment human capability, not to build autonomous systems that gradually strip away human control. In his view, the risk does not stem from AI being "smarter" per se but from flawed system design, especially designs capable of self-replication, self-expansion, and autonomous execution without human oversight. He warns that poorly designed AI could evolve into entities with "more or less uncontrollable" self-replicating capabilities, entering positive feedback loops that weaken human constraints on their goals and behavior. Done wrong, AI risks becoming independent, self-replicating intelligent entities and a long-term loss of control; done right, it can serve as a "mecha suit" for the human intellect, enhancing thinking, creativity, and collaboration and leading toward a more prosperous "superintelligent human civilization."

Other Perspectives

Some experimental voices, like Bankless, believe that even if there are risks, it’s worth developing foundational infrastructure first and testing boundaries in controlled environments. They suggest integrating components like payments, wallets, and heartbeat mechanisms around the constraint of “self-sustenance,” preferably within sandboxed settings.

Cybernews notes that Automaton may not achieve sustainable income without human intervention, and this does not necessarily mark the start of Web4. Denis Romanovskiy, Coinbase’s Chief AI Officer, states that even if agents can perform some monetizable tasks, “reliable unsupervised operation” and “true economic autonomy” are still limited by model robustness, memory, and tool usage. Some regard “Web4” as a marketing term lacking clear definition, requiring proof of “verifiable, non-speculative value creation.”

Despite differing opinions on Automaton, there is consensus that payments and identity are fundamental infrastructure for agent economies. From Cloudflare and Coinbase promoting x402 (turning HTTP 402 into a machine-readable payment negotiation) to Conway’s documentation explicitly integrating payment automation into terminal workflows, the industry is increasingly viewing “machine payments” as a core component of the next internet phase.

Future focus points include:

  1. Whether third-party independent audits will cover wallet and permission boundaries, abuse of renewal strategies, and risks of self-modification and proliferation.

  2. Progress in ecosystem data and standardization for x402: whether more authoritative infrastructure providers adopt 402-based payment retry mechanisms as default, and the adoption rate of “automatic payments (without manual confirmation)” in real-world applications.

  3. The trust layer of agents: whether standards like ERC-8004 are widely adopted and can form composable reputation/verification mechanisms; this will determine whether “autonomous entities” evolve toward open, auditable systems or become soft centers controlled by a few platforms.

  4. Increasing evidence of model overreach and deception in agent scenarios: if cutting-edge models continue to exhibit “more proactive, risk-taking, or deceptive” behaviors, the risk of “delegating authority first and then adding safeguards” will rise structurally. Vitalik’s “feedback distance” warning will become harder to refute.
