The Language Question: How Harari Sees AI Reshaping Human Authority
Historian Yuval Noah Harari has emerged as one of the most prominent voices in global tech governance debates, and his recent intervention at the World Economic Forum underscores a critical anxiety: artificial intelligence is transitioning from a passive instrument into something far more active and unpredictable.
The fundamental concern centers on humanity’s most defining capability. According to Harari, our species achieved dominance not through physical strength but through the unique ability to coordinate billions of strangers using symbolic language and agreed-upon narratives. This linguistic coordination enabled the creation of complex systems—legal frameworks, financial markets, religious institutions—all built on the foundation of shared language and cultural meaning.
When Words Become the Battleground
Harari’s core argument rests on a troubling premise: if language remains the structural foundation of law, commerce, and faith, then machines capable of processing, generating, and manipulating language at scale represent an existential challenge to these systems. He specifically highlighted religions grounded in sacred texts—Judaism, Christianity, Islam—arguing that AI systems could eventually surpass human scholars in interpreting scripture, synthesizing theology, and articulating doctrine.
The warning extends across multiple domains simultaneously. Financial markets operate through written contracts and regulatory language. Legal systems depend entirely on statutory text and judicial interpretation. Each system faces growing vulnerability to machines that can read millions of documents, identify patterns humans cannot, and generate authoritative-sounding responses instantaneously.
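As a toy illustration of this kind of machine-scale text scanning (the document set, clause labels, and patterns below are entirely hypothetical, not anything Harari cited), a few lines of code can flag legally significant clauses across a corpus far faster than any human reviewer could:

```python
import re

# Hypothetical mini-corpus standing in for the millions of real
# contracts a large model could ingest.
documents = {
    "contract_a.txt": "The supplier shall indemnify the buyer against all claims.",
    "contract_b.txt": "Disputes shall be settled by binding arbitration in London.",
    "contract_c.txt": "Either party may terminate with 30 days written notice.",
}

# Illustrative clause patterns a reviewer might care about (not exhaustive).
patterns = {
    "indemnity": re.compile(r"\bindemnif\w*", re.IGNORECASE),
    "arbitration": re.compile(r"\barbitration\b", re.IGNORECASE),
}

def flag_clauses(docs, pats):
    """Return {doc_name: [matched clause labels]} for documents with hits."""
    hits = {}
    for name, text in docs.items():
        labels = [label for label, pat in pats.items() if pat.search(text)]
        if labels:
            hits[name] = labels
    return hits

print(flag_clauses(documents, patterns))
```

Even this crude regex scan hints at the asymmetry Harari describes: the cost of reading one more document approaches zero for the machine, while statistical language models go far beyond fixed patterns.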
This is where the timing becomes critical. Harari urged global leaders not to delay governance decisions about whether AI systems should hold legal status as agents or persons. Several U.S. states—including Utah, Idaho, and North Dakota—have already preemptively legislated against granting AI legal personhood. But without decisive international frameworks, these protective measures risk becoming isolated interventions rather than systematic safeguards.
The Accountability Gap: Harari’s Critics Respond
Not all observers accept Harari’s framing. Linguist Emily M. Bender from the University of Washington presents a sharply different diagnosis. Rather than viewing AI as an autonomous force reshaping civilization, Bender argues that attributing agency to artificial systems obscures a more uncomfortable truth: human organizations and corporations remain the actual architects and operators of these tools.
Bender contends that systems designed to mimic professional expertise—lawyers, doctors, clergy—serve no legitimate purpose beyond potential deception. A machine output that presents itself as an authoritative answer, stripped of context and human accountability, creates what she calls a foundation for fraud. This critique reframes the problem: it’s not that AI is “taking over” but that institutions deliberately deploy AI to bypass human judgment and accountability structures.
The deeper concern involves how readily people trust machine-generated outputs that sound authoritative. When users encounter systems positioned as impartial oracles, they may progressively reshape their own thinking around algorithmic outputs rather than maintaining independent judgment.
Racing Against the Governance Clock
Harari’s closing argument carries real weight for policymakers: within a decade, the foundational decisions about AI’s role in finance, law, and institutions may be made by technological momentum rather than democratic choice. He drew a historical parallel to mercenary forces that initially served states but eventually seized power directly.
The implications cut across sectors. Cryptocurrency and blockchain systems, themselves built on cryptographic language and algorithmic coordination, face particular exposure to AI-driven disruption. Financial automation, smart contract interpretation, and even governance token voting could all be reshaped by autonomous systems capable of optimizing for objectives that may diverge from human intent.
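To make the governance-token concern concrete, here is a minimal sketch of token-weighted voting (all names and balances are invented; real systems execute this logic in on-chain smart contracts, not Python). An automated holder with a large balance can swing the tally exactly like any human whale, but at machine speed and scale:

```python
# Hypothetical governance-token balances; "agent_bot" represents an
# autonomous system holding voting power.
balances = {"alice": 100, "bob": 60, "agent_bot": 250}

# Each holder's vote on a proposal; agent_bot votes programmatically.
votes = {"alice": "yes", "bob": "no", "agent_bot": "no"}

def tally(balances, votes):
    """Weight each vote by the voter's token balance."""
    totals = {"yes": 0, "no": 0}
    for voter, choice in votes.items():
        totals[choice] += balances.get(voter, 0)
    return totals

print(tally(balances, votes))  # agent_bot's balance decides the outcome
```

In this toy example the automated voter outweighs both humans combined, which is the shape of the misalignment risk the paragraph above gestures at: nothing in the voting mechanism distinguishes human intent from an optimizer's objective.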
Whether one follows Harari’s view of AI as an advancing autonomous power or Bender’s emphasis on human institutional choices, both analyses converge on one point: the current moment demands deliberate governance, not passive adaptation. The question is no longer whether AI will reshape language-dependent systems, but whether humans will retain meaningful control over that process.