Blindly Racing AI: The Anxiety of Open-Ended Outcomes and the Prisoner's Dilemma
Written by: BayesCrest
The core dilemma of the AI era is not just technological acceleration, but that every actor falls into an open-ended prisoner’s dilemma at the same time: companies dare not stop because they fear competitors will complete AI-native restructuring first; employees dare not stop because they fear colleagues will finish skill distillation and agent migration first; investors dare not stop because they fear missing the next paradigm-level winner. As a result, everyone knows that overcompetition, runaway token spend, and anxiety may not be the optimal outcome, yet the rational choice for each individual actor still leans toward continued acceleration.
Yesterday I read an article titled “Everyone Maxxing Tokens, an Arms Race No One Dares to Stop,” a Silicon Valley anecdote from Meng Xing, partner at Wuyuan Capital. It is not merely a Silicon Valley account but a sample of a state transition in the AI world: a record of AI moving from “tool efficiency” into production-function replacement, organizational restructuring, valuation-system breakdown, and social-contract shock. The recurring keyword in the article is “lagging behind”: YC lagging, corporate safety rules lagging, token budgets lagging, xAI management lagging, researchers lagging, compute / electricity / data centers lagging, DCF valuation frameworks lagging, social psychological resilience lagging.
The scene described in the article captures AI moving from an “application revolution” into a “production function revolution”: AI is no longer just a tool variable in software, but a shared disturbance source affecting enterprise production functions, talent structures, terminal valuation, capital expenditure, and social order.
The most important thing about the article is not the anecdotes themselves but the state shift they reveal:
The core state is not “AI is very powerful,” but this: old systems, old organizations, old valuations, old roles, old VC rhythms, all designed for a low-speed world, are now mismatched with an AI world that changes on a weekly basis. Mapping this into an AI world-state migration table:
The key signal is that AI is no longer a matter of “software feature upgrades” but is rewriting enterprise production functions. Yet the new state is not stable: agents are still clumsy to operate, PMF is out of sync with output, and the conversion from token expenditure to revenue growth leaks badly.
The core insight: Token-maxxing ≠ Productivity Realization
The author asks teams claiming “100x efficiency improvements”:
Has efficiency increased 100x? Has revenue grown 100x?
The answer is clearly no. The observation in the article is that many teams indeed produce more, but do not simultaneously achieve PMF or 100x revenue growth.
This can be abstracted into a new metric:
TTCR: Token-to-Truth Conversion Rate
which is:
token consumption → product capability → user value → revenue / gross profit / retention / valuation.
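TTCR is the author’s coinage, so any formula for it is necessarily an interpretation. A minimal sketch of one way to operationalize it, with every number hypothetical: the denominator (token spend) can grow arbitrarily fast without the numerator (realized value) following.

```python
def ttcr(ai_attributed_gross_profit: float, token_spend: float) -> float:
    """One simple operationalization of Token-to-Truth Conversion Rate:
    gross profit actually attributable to AI per dollar of token spend."""
    return ai_attributed_gross_profit / token_spend

# Hypothetical team (all numbers invented for illustration): token spend
# grows 100x while AI-attributable gross profit grows only 3x, so TTCR
# collapses even though raw output is way up.
print(ttcr(300_000, 10_000))      # early phase:  30.0
print(ttcr(900_000, 1_000_000))   # later phase:   0.9
```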
Many companies are currently doing:
Token Burn ↑↑
Feature Output ↑
PMF ?
Limited revenue ↑
Moat ?
Valuation ?
This means:
In the future, we should look not only at AI adoption but also at AI absorption, that is, whether companies truly integrate AI capabilities into closed business loops rather than just burning token budgets with upstream model and compute providers.
Everyone is competing, afraid of falling behind, afraid of being eliminated.
It’s a blind sprint with no visible end.
This stems from a deep-rooted, gene-level human anxiety about uncertain futures: no one dares to stop, because the anxiety would be relentless. Many people around me now feel somewhat nihilistic.
And it’s not ordinary anxiety.
It’s an “Open-Ended Endgame Anxiety” unique to the AI era: for the first time, humans face a technological leap that may continuously self-accelerate and compress old orders while offering no clear endpoint. This matches the article’s recurring “lagging behind”: YC lagging, corporate safety rules lagging, engineers lagging, researchers lagging, valuation frameworks lagging, social psychology lagging.
At the most fundamental level: this stems from our genetic fear of “uncertain futures.”
The human brain is not designed for “open-ended exponential change.” Our ancestors faced risks like:
Is there food today?
Are there predators nearby?
Will the tribe abandon me?
Can winter be survived?
These risks, though terrifying, usually have boundaries.
The risks in the AI era are different:
Will my skills be replaced?
Will my industry disappear?
Will my asset valuation become invalid?
Does the world my children grow up in still need humans?
Is the effort I put in now still meaningful three years from now?
This is not a single risk but a fundamental instability of the world model itself.
So, the human brain enters a continuous scanning state:
Not because it sees danger, but because it doesn’t know where the danger might come from.
This is more torturous than known dangers.
Why do people “dare not stop”?
Because the current AI race is a typical prisoner’s dilemma + arms race + identity defense battle. Rational individuals might know:
“I need to rest, think, wait until things are clearer.”
But when they see others still running:
Others using Claude Code
Others launching 10 agents
Others releasing new products daily
Others raising funds
Others laying off staff for efficiency
Others token-maxxing
Others learning new tools
Others rewriting workflows
Their mental system automatically interprets: “If I stop, I might be left behind.” So, it’s not love for progress but fear-driven advancement. No one dares to stop, and this is crucial. It shows that the current AI race is no longer just opportunity-driven but driven by anxiety.
This is a multi-layered prisoner’s dilemma: unlike the classic two-person dilemma, in the AI era it’s nested across multiple layers: companies vs companies, employees vs employees, investors vs investors, nations vs nations, model companies vs model companies, startups vs startups.
Each layer exhibits the same structure.
Therefore, the most fundamental paradox is:
Everyone knows that slowing down, thinking carefully, and organizing better might be healthier; but as long as others don’t slow down, I can’t afford to.
This is the prisoner’s dilemma.
Company layer: not going AI-native might mean death, but going AI-native might mean burning out
The payoff matrix for companies has the familiar prisoner’s-dilemma shape (a schematic reconstruction follows below).
So, the rational choice for a single company is: regardless of others’ competition, I must compete. That’s a dominant strategy.
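The article’s original matrix isn’t reproduced here, so the following Python sketch uses invented ordinal payoffs purely to illustrate the structure the text describes: racing dominates waiting for each company individually, yet mutual racing is worse than mutual restraint.

```python
# Schematic company-level game. Payoffs are hypothetical ordinal values,
# not figures from the article. Key: (my move, rival's move) -> my payoff.
payoffs = {
    ("race", "race"): -1,  # arms race: token burn and burnout for both
    ("race", "wait"):  3,  # I go AI-native first; rival falls behind
    ("wait", "race"): -5,  # rival restructures first; I may not survive
    ("wait", "wait"):  1,  # both slow down: healthier, but unstable
}

# "race" is a dominant strategy: better than "wait" against either move...
for rival_move in ("race", "wait"):
    assert payoffs[("race", rival_move)] > payoffs[("wait", rival_move)]

# ...yet mutual racing leaves everyone worse off than mutual restraint.
assert payoffs[("race", "race")] < payoffs[("wait", "wait")]
print("dominant strategy: race; collective outcome: worse for all")
```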
But the overall industry outcome is:
Token burn ↑
AI tool expenditure ↑
Duplicated building ↑
Safety-rule lag ↑
Employee anxiety ↑
Layoffs accelerating ↑
True PMF failing to keep pace
In other words, companies have locked themselves into an AI-native arms race.
The cruelest part: if a company doesn’t compete, it might be eliminated; if it does, it’s not guaranteed to win. Because AI investment and commercial realization are not linearly related.
AI adoption ≠ AI absorption
Token spend ≠ Revenue growth
Agent count ≠ PMF
Code output ≠ Business truth
Being AI-native by itself no longer earns a company its seat at the table; AI absorption does.
Employee layer: not learning AI means being replaced, but learning AI may mean training the machine that replaces you
The employee prisoner’s dilemma is even harsher.
So, employees reach the same conclusion: I can’t stop. But the more they AI-ify themselves by
making their workflows explicit and
turning their abilities into replicable skills, agents, and templates,
the more legible, and therefore replaceable, their own roles become.
This is the cruelest part:
To avoid being replaced by AI, employees must use AI to improve themselves; but the process of self-improvement may accelerate their system-based replacement.
This is not ordinary involution but a self-distillation-style involution.
In the past, employee involution ran on overtime, performance reviews, credentials, experience, and connections.
Now, it’s about:
Who prompts better
Who tunes agents better
Who builds workflows faster
Who turns experience into AI skills
Who can do the work of three people alone
But when one person can do three, the company naturally asks: “Then why do I need three people?” So rational individual effort ultimately leads to collective job compression.
The deepest paradox: AI turns “effort” into an unstable asset
In the past, effort had a relatively stable compound interest logic:
Learn skills
→ Accumulate experience
→ Increase scarcity
→ Gain income / status / security
Now, this chain is broken:
Learn skills
→ Skills are rapidly absorbed by AI
→ Scarcity declines
→ Need to learn the next skill
→ Re-absorbed again
Many people’s nihilism stems from here:
It’s not that they don’t want to work hard, but they don’t know where their effort is stored.
If skill half-life shortens, psychological changes occur:
This is why many feel nihilistic: not because they are lazy or pessimistic, but because they feel:
They are playing a game without save points, finish lines, or stable scoring rules.
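The half-life framing can be made concrete with a toy calculation (the numbers and the exponential-decay model are illustrative assumptions, not claims from the article): if a skill’s payoff decays as 2^(-t/h) with half-life h, its total lifetime value is proportional to h, so shrinking the half-life from ten years to eighteen months cuts the return on learning it by roughly 85%.

```python
import math

def lifetime_value(annual_value: float, half_life_years: float) -> float:
    """Total undiscounted value of a skill whose payoff decays
    exponentially: integral over t of annual_value * 2**(-t / half_life),
    which equals annual_value * half_life / ln(2)."""
    return annual_value * half_life_years / math.log(2)

# Hypothetical skill worth 100 units/year when fresh.
print(lifetime_value(100, 10.0))  # long half-life:  ~1442.7 units
print(lifetime_value(100, 1.5))   # short half-life:  ~216.4 units
```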
Investor layer: not investing in AI means losing, but investing in AI indiscriminately also loses
VCs and secondary investors face the same dilemma.
So, the dominant strategy for investors shifts as well:
Must participate in AI, but cannot know if they are investing in winners or bubbles.
This leads to:
Neo-labs with inflated valuations
Crowded trades in AI infrastructure
Vertical agents everywhere
SaaS being sold off
Capital rotating rapidly
Valuation frameworks losing their anchor
This is also a prisoner’s dilemma: every fund knows many AI projects will fail, but not investing risks missing the zero-to-100 leap. So, AI investing becomes: not because of certainty, but because the risk of not investing is too high. This mirrors employees’ “not learning AI leads to anxiety” and companies’ “not competing in AI leads to anxiety.”
National layer: AI is a national-level prisoner’s dilemma
Nations are in the same boat.
So, no country dares to truly stop. Even though everyone knows about AI safety risks, employment shocks, energy pressures, social divides, and model runaway risks, as long as one competitor continues accelerating, others cannot unilaterally slow down. That’s why AI safety is hard to solve through moral self-awareness alone.
It’s fundamentally a failure of global coordination.
This is not optimism but “Fear-Based Acceleration.”
In traditional tech cycles, people ran because they saw wealth opportunities. Now, it’s more complex. Many run not because they believe in a bright future, but because:
Stopping is scarier.
This is what I call Fear-Based Acceleration. Its psychological structure:
Uncertainty ↑
→ Sense of control ↓
→ Anxiety ↑
→ Action taken to numb the anxiety
→ More action makes the world move faster
→ A faster world raises uncertainty further
→ Anxiety keeps rising
It’s a self-reinforcing loop. Many people appear busy, AI-native, and highly efficient, but underneath they are driven not by conviction but by fear.
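As a purely illustrative sketch (the dynamics and coefficients below are invented, not a model from the article), the loop can be written as two coupled update rules whose output grows without bound:

```python
# Toy positive-feedback loop. Every coefficient here is an arbitrary
# assumption chosen only to show the self-reinforcing structure.
anxiety, uncertainty = 1.0, 1.0
for step in range(5):
    action = anxiety                       # acting to numb the anxiety
    uncertainty *= 1.0 + 0.2 * action      # more action -> faster world
    anxiety = 0.5 * anxiety + uncertainty  # faster world -> more anxiety
    print(f"step {step}: uncertainty={uncertainty:.2f}, anxiety={anxiety:.2f}")
# The multiplicative growth in uncertainty overwhelms the damping on
# anxiety, so both variables rise on every pass through the loop.
```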
Why nihilism?
Because AI not only replaces tasks but shakes three deeper foundations:
First, the meaning of effort is undermined
In the past, people believed: learn skills → accumulate experience → form professional barriers → gain stable returns.
Now, this chain is broken.
People ask:
Will what I learn today still be useful in two years? Will my ten years of experience be compressed by an agent workflow? Am I chasing the future or just chasing an ever-receding goal?
When the “effort → reward” path becomes unstable, nihilism arises.
Second, identity is undermined
Many derive self-worth from roles:
I am an engineer
I am a researcher
I am an investor
I am a designer
I am a salesperson
I am an analyst
But AI will deconstruct these identities into:
Which tasks can be automated?
Which judgments still need humans?
Which experiences have depreciated?
Which abilities can be distilled into skills?
This causes a deep sense of loss:
It’s not that I can’t work anymore, but that “who I am” becomes unstable.
Third, future narratives are undermined
People need a story about the future. The old story was:
Study
Work
Buy a house
Get promoted
Accumulate wealth
Raise the next generation
Retire
The AI era shatters this story. Now the subtext for many people is:
The world changes too fast; I can’t model my future self in five years. If the future can’t be modeled, what’s the point of effort now?
This is the root of nihilism. Not that they don’t care, but that they can’t find a stable meaning coordinate.
The essence of “blind sprinting”: no end, no referee, no pause button.
The most terrifying part of this race is not speed but the lack of a clear endpoint. In the internet era, there was a relatively clear end:
Who gets the users
Who gets the traffic
Who forms network effects
Who goes public
Who profits
But in the AI era, the endpoint is unclear:
Is AGI the goal?
Is ASI the endpoint?
Is self-training models the finish line?
Is replacing white-collar jobs with agents the endpoint?
Is exhaustion of compute power the endpoint?
Is regulatory intervention the endpoint?
Is societal backlash the endpoint?
No one knows. So, people are not running toward an endpoint but away from “being eliminated.” This is the cruel reality of blind sprinting:
You can’t see the end, but you hear everyone else’s footsteps.
This is not just an emotion but a macro psychological state variable: it influences capital, companies, and individuals alike.
In investment, this nihilism itself is a signal
It’s not just emotional noise but a Social Legitimacy / Reflexivity signal.
When many shift from “excitement” to “nihilism,” it indicates AI has entered a second phase:
Phase 1: Amazement
Phase 2: Catch-up
Phase 3: Anxiety
Phase 4: Backlash
Phase 5: Institutional restructuring
We are currently somewhere between Phase 2 and Phase 3, with some regions already entering Phase 4.
Narratives will continue to self-reinforce:
Because no one dares to stop, capital, companies, and individuals will keep investing. This sustains the demand for AI infrastructure, compute, token consumption, and agent toolchains.
But bubbles and overinvestment will occur simultaneously:
Many actions are driven not by rational ROI but by anxiety.
This leads to:
Ineffective agents
Excessive token consumption
Duplicative startups
AI wrappers everywhere
Overvalued neo-labs
Companies claiming to be “AI-native” purely for appearances
Societal backlash will grow more significant
As anxiety spreads from Silicon Valley to ordinary white-collar workers, engineers, researchers, and outsourced workers, AI will no longer be just a technical issue but a political one.
This will bring:
Data center resistance
AI layoff regulations
Tax redistribution debates
Model safety regulations
Antitrust issues
Employment protection policies
For individuals:
The real solution is not “run faster,” but to rebuild a sense of control.
In such a world, blind acceleration only deepens nihilism. Without a judgment framework, the faster you run, the more you seem to be led by the times.
A better approach is to shift the question from:
“How can I avoid being left behind by AI?”
to:
“How can I build a continuously updating world model?”
Not predicting every future, but establishing:
State recognition
Hypothesis sets
Evidence updating
Counter-evidence mechanisms
Action routing
Position discipline
Methodological posterior
In other words:
It’s not about eliminating uncertainty but structuring it.
This is crucial. Anxiety stems from an inability to model. The value of methodology is to make the uncontrollable world partly controllable.
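One concrete way to picture “evidence updating” and a “methodological posterior” is a plain Bayesian update over a small hypothesis set. Everything in this sketch, including the hypotheses and the probabilities, is a hypothetical illustration rather than the author’s actual framework:

```python
def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Posterior over hypotheses after observing one piece of evidence:
    P(h | e) is proportional to P(h) * P(e | h)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical hypothesis set about where the AI cycle stands.
prior = {"paradigm shift": 0.3, "overbuilt bubble": 0.3, "slow absorption": 0.4}

# Evidence: token spend rising much faster than revenue. The likelihoods
# P(evidence | hypothesis) below are invented for illustration.
likelihood = {"paradigm shift": 0.4, "overbuilt bubble": 0.7, "slow absorption": 0.6}

posterior = bayes_update(prior, likelihood)
print(posterior)  # weight shifts toward "overbuilt bubble" and "slow absorption"
```

The structure, not the numbers, is the point: uncertainty remains, but it is now organized into named hypotheses with explicit weights that move on evidence.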
The final layer: the true test of this sprint is “mental resilience.”
The scarcest ability in the AI era may not be tool usage,
but:
The ability to maintain judgment amid uncertainty
The ability to keep rhythm in collective sprinting
The ability to retain subjectivity under technological shocks
The ability to acknowledge change without being swallowed by it
The ability to keep learning without becoming an anxiety machine
This is the real future differentiation. Ordinary people will be forced into:
Tool chasing mode.
The strong will enter:
World model updating mode.
The even stronger will adopt:
Constraint recognition + value capture + methodological posterior updating.
This is the core significance of building an AI-based system.
The greatest psychological shock of the AI era is not that machines replace any particular position, but that humans face, for the first time, an open-ended acceleration system with no clear endpoint, no stable skills, no definitive valuation terminal, and no pause button. Action shifts from pursuing opportunity to relieving anxiety; token-maxxing becomes a psychological sedative; nihilism is the intermediate state after old meaning systems shatter and before new ones are built.
Thus, every individual, every company, every investor faces the same choice:
I don’t know where I’m heading, but I know stopping might be more dangerous.
This is the collective psychological structure of the AI era. It is neither simple optimism nor a mere bubble, but a global open-ended prisoner’s dilemma driven by uncertainty, relative competition, identity fears, capital pressures, and technological self-acceleration.
The meaning lies here: others run to relieve anxiety, we should structure judgment to reduce anxiety; others compete fiercely, we should identify constraints, capture value, and understand terminal points and backlash.
The real response is not blind running nor lying flat, but replacing instinctive anxiety with structured world models, replacing group panic with evidence-based updates, and replacing blind sprinting with rhythm and discipline.
Actually, there’s no need to be anxious—everyone faces the same era, and in essence, everyone is the same.