a16z: Why the next billion AI users will access through trust networks
Author: Sakina Arsiwala, a16z; Source: a16z crypto; Translated by: Shaw, Golden Finance
YouTube’s lesson: content is a geopolitical weapon
Years ago, I served as Google’s head of international search products and then led YouTube’s international expansion, taking the product into 21 countries in just 14 months. The work wasn’t only localization: it meant building local content partnerships and finding a way through minefields of law, policy, and market access. More recently, I managed community health (trust and safety) at Twitch. Along the way, I also founded two startups.
Today, the field of artificial intelligence (AI) bears striking similarities to the early growth phase of Google and YouTube. My career has made one thing clear: globalization is not a product feature; it’s a geopolitical game. The deepest lesson is that distribution has never been a purely technical problem. Growth depends on local partners, cultural translators, and trusted community opinion leaders: the people who build a bridge between global platforms and local users.
I lived through the GEMA copyright dispute in Germany, where a standoff with a music rights agency nearly got an entire country excluded from YouTube’s pan-European rollout. I lived through the lèse-majesté arrest-warrant controversy in Thailand: as YouTube’s head of international, I faced the risk of arrest over platform content deemed insulting to the King of Thailand, and I couldn’t even transit the country. I watched Pakistan cut off nationwide internet access to block a single video. And I remember an attack on our office in India, when global algorithms collided with local religious taboos.
What we truly need to address is never just policy or infrastructure issues, but barriers of trust.
In every market, someone has to pay the costs upfront—sorting out which content is safe, acceptable, and valuable—before users are willing to participate. These costs keep accumulating, and over time they form a kind of trust tax: a small group pays first, and everyone else shares the burden.
Now the same contradiction is resurfacing in AI, though the situation is more severe, the pace of change is faster, and the impact is more visible. The U.S. federal government and Anthropic have recently fallen into a stalemate, sparking public debate; OpenAI, meanwhile, is facing increasing scrutiny due to its partnerships with the public sector. We’re witnessing a shift: user acceptance is no longer determined only by usefulness—ideological influence is becoming deeper. In this environment, trust is extremely fragile. One seemingly minor collapse of trust can trigger a large-scale, rapid exodus of users.
Google is investing even more heavily in its deep-trust strategy, leveraging the familiarity of users in the Workspace and search ecosystems to connect markets—but the global landscape is becoming increasingly fragmented. The EU’s strict regulatory red lines, China’s intense AI development race, and the growing wave of AI nationalism all keep the world on high alert.
The lesson for 2026 is clear: institutional trust and cultural acceptance are now inseparable from the product itself. Without trust as the foundation, you can’t build an intelligent operating system.
This is the sovereignty wall—structural boundaries where global AI collides with local governance. And from a product perspective, it appears in a more direct form: trust barriers.
Every expansion of every global AI system will ultimately crash into this wall. At this tipping point, user acceptance no longer depends on technical ability—it depends on whether users, institutions, and governments trust it in their own contexts.
The internet used to be borderless. AI won’t be.
The end of the explorer era
The first billion AI users were explorers and technical optimists. But the explorer era is already over. Over the past three years, we’ve been living through the era of prompt engineering and digital alchemy—people open ChatGPT, Claude, and other popular applications as if visiting a digital temple, witnessing the miracles of generative intelligence firsthand. In that era, the only metrics that mattered were model-capability benchmarks: Who topped the latest leaderboard? Who had the largest parameter count?
But as we enter 2026, the bonfire of the explorer era is fading. We’re no longer building toys for the curious; we’re turning to intelligent operating systems—those invisible, everywhere-present underlying channels that provide daily operating power for individual entrepreneurs in São Paulo, Brazil, and community healthcare workers in Jakarta, Indonesia.
These users aren’t explorers—they’re people with practical needs. They don’t want to talk to the “ghosts” inside the machine; they just want a tool that helps them overcome obstacles in real life. This is the true moment of the leap that will win the next batch of one billion users. And in these margins that haven’t been fully developed, Silicon Valley’s global API dream collides with the era’s harshest reality: barriers of sovereignty.
The core shift is this: AI mass adoption is no longer mainly a model capability problem—it’s a distribution and trust problem. Frontier labs will continue to improve model performance, but the next billion users won’t arrive because one model scored higher on a benchmark test; they will arrive because AI reaches them through the institutions, creators, and communities they already trust.
Reality in 2026: AI becomes a national infrastructure proposition
In 2026, the industry’s core challenge is no longer making models smarter, but getting models authorized. Barriers of sovereignty are the boundary where general intelligence meets national identity. Looking globally, the outlines of this barrier are already emerging: data localization requirements, national AI compute plans, and government-led model projects across places like India, the UAE, and Europe. What began as cloud-infrastructure policy is rapidly evolving into intelligence-sovereignty policy. Within this framework, countries refuse to become “data colonies,” requiring that intelligent systems serving their citizens run inside the country’s sovereign data warehouses, inherit local culture, and respect national boundaries.
When you see the CEOs of Google (Sundar Pichai), OpenAI (Sam Altman), Anthropic (Dario Amodei), and DeepMind (Demis Hassabis) sharing the stage with Indian Prime Minister Modi at India’s AI Impact Summit in 2026, what you’re seeing is the real emergence of sovereignty barriers. Modi’s proposed M.A.N.A.V. vision (moral and ethical framework, accountable governance, national sovereignty, inclusive AI, trusted systems) sends a clear signal: frontier labs that try to rush in and capture consumers directly will ultimately be squeezed out by regulation. And trust is the only currency that can cross these boundaries.
The dilemma of weakening network effects—and why it forces entirely new strategies
Unlike social platforms, where adding one more user increases value for all other users, much of AI’s value is rooted in localization. My first thousand prompts won’t directly make the system more valuable to you. Data flywheels can optimize models, but user experience is always personal rather than social. AI is a private tool that can carry emotional color, but at its core, it’s a practical tool.
This creates a structural problem: AI can’t rely on the compounding social network effects that helped the previous generation of platforms rise. In the absence of a native social graph, the industry can only fall into a high-consumption cycle—constantly chasing early users, power users, and tech elites. This strategy works in the explorer era, but it can’t scale to reach the next two billion users.
More importantly, this whole model will fail completely in the face of sovereignty barriers. Because when network effects are weak, trust doesn’t form spontaneously—it must be introduced from outside.
Transformation: from network effects to trust effects
If AI can’t rely on social network effects to drive adoption, it must rely on another force: a network of trust. This is the key shift:
From acquiring users to empowering intermediaries
YouTube can scale its expansion because it piggybacks on existing human networks of trust. AI must do the same. Instead of trying to build direct relationships with billions of users, the winning strategy should be:
empower those who already have user relationships;
leverage the trust they’ve already accumulated;
distribute intelligent capabilities through these channels.
Why it’s crucial
In a world shaped by sovereignty barriers:
distribution channels are limited;
direct-to-user models are fragile;
trust is localized, not globalized.
Without strong network effects, AI can’t achieve scale through brute force; it must penetrate through trust. AI doesn’t have network effects—it has trust effects.
Solution: the age of intermediaries is coming
How exactly did YouTube gain a foothold in international markets? It wasn’t by having a better player, nor by simply localizing interface text. The key was becoming the preferred platform for populations that already had local trust. In every market, the starting point for user acceptance isn’t YouTube itself—it’s identity anchors—individuals and communities that already hold cultural voice and influence:
Bollywood fan pages curate rare Shahrukh Khan clips for Dubai expat communities
American anime die-hard fans build an in-depth content ecosystem that mainstream media never covered
Local comedians, teachers, and video remix creators convert global content into forms that fit cultural understanding
These creators aren’t just uploading videos. They’re interpreting the internet for their audiences, acting as trust intermediaries, and building bridges between overseas platforms and local users. YouTube’s success lies in becoming the invisible infrastructure that supports these identity anchors.
The overlooked core logic: direct-to-consumer collides with sovereignty barriers
Most AI companies still operate under a direct-to-consumer mindset: build a better model → present it in a chat interface → directly acquire users.
This model works in the short term, but it’s hard to sustain. Because in high-friction markets, users won’t adopt new technology directly—they adopt technology through people they trust.
YouTube’s global expansion isn’t about persuading billions of users one by one. It’s about empowering those who have already earned audience trust. That’s what invisible infrastructure truly means: you don’t own user relationships—you provide support for them. And at scale, this model has a stronger moat.
From chat to agents: empowering trust intermediaries
This is the key shift from chat interfaces to agents. Chat is a tool for individuals, while agents provide leverage for intermediaries. If we apply the concept from Anthropic executive Amie Wowra—“build products for the most burdened people”—then in many markets, these people are precisely trust converters:
educators who adapt overseas ideas
entrepreneurs who navigate local bureaucratic systems
community leaders dealing with information overload
The path to winning is solving their trust latency—the gap between global intelligence capabilities and local practical scenarios. This requires a hands-on agent enablement support system:
For educators: agents using tools like Sora or GPT-5.2 re-create courses—replacing American football analogies with cricket, preserving the core meaning while aligning with local culture.
For individual entrepreneurs: agents don’t just interpret Singapore tax forms—they can also complete filing and submission through local APIs.
For community leaders: add contextual memory to WhatsApp—extract structured action items from ten thousand messages, preserve useful information, and maintain community norms.
The core of a viable model: solving trust latency in the “last mile”
To understand why this model can scale, you must understand trust latency. In many parts of the world, the bottleneck isn’t access to technology—it’s the time, risk, and uncertainty required to build trust. Technology diffusion doesn’t happen through advertising—it happens through endorsement.
The mistake most AI companies make is trying to centralize and pay the trust tax through branding, distribution, or product refinement—but trust can’t be scaled that way.
The fastest path is to outsource the trust tax to people who have already borne that cost—local creators, educators, and operators rooted in the community. They’ve already tested with audiences, figured out what works, what fails, and what truly matters in local contexts, and they absorb the risk on behalf of the audience.
By empowering these trust intermediaries:
User acquisition costs approach zero: distribution relies on existing trust networks;
User lifetime value increases: practical features fit local needs rather than being generalized;
Adoption accelerates: trust is inherited directly, without starting from zero.
Enterprises will gain a globalized sales force they don’t have to pay for, whose credibility, efficiency, and depth of local roots far exceed any centralized promotion strategy. You’re no longer building products for users—you’re providing leverage to the people users already trust.
This is the path of YouTube’s globalization—and the only way AI can cross sovereignty barriers.
Sovereign data warehouses: a geopolitical moat
The technical optimism advocated by Marc Andreessen ultimately doesn’t end in fighting regulation—it ends in productizing regulation. In competition with China’s DeepSeek and Kimi, victory isn’t about ignoring borders—it’s about controlling data warehouses.
What is a sovereign data warehouse? It’s a localization instance that prioritizes where the model runs, operating within a country’s digital public infrastructure (DPI) system.
Geopolitical moat: by granting digital sovereignty to countries such as India and Brazil over models, weights, and data, we fundamentally shift control arrangements. Intelligence capabilities are no longer mediated by overseas platforms; they’re governed autonomously within national borders. This isn’t a direct “lockout” of external rivals, but it greatly increases their cost of influence, reduces reliance from outside, and shrinks the risk exposure to being controlled, having data extracted, or suffering unilateral interference.
Identity anchors: tightly bind models with local culture and legal realities to build a moat that general-purpose AI can’t cross.
Feedback loops: solving extremely localized issues like Malaysia’s tax filing permissions isn’t a distraction—it’s a model accelerator. This provides cultural elasticity to the foundation model, keeping it at the global top tier of intelligence levels.
There’s a real contradiction here. AI’s vision is to achieve general intelligence, but the trend toward sovereignty is pushing the entire ecosystem toward fragmentation. If each country builds its own technology stack, we face risks of systems being incompatible with each other, safety standards varying widely, and duplicated resource development. The challenge facing frontier labs isn’t only scaling intelligence—it’s designing an architecture that enables local governance control while not weakening the advantages of global capability coordination.
Three structural shifts in the age of intermediaries
1. AI distribution will enter existing trust networks
AI won’t scale through standalone applications; instead, it will be embedded into instant messaging platforms, creator workflows, education systems, and the infrastructure for small and micro businesses—because trust has already been established in these contexts. Without strong network effects, distribution must rely on existing personal networks.
2. National-level AI infrastructure will become standard
Governments will increasingly require that key AI systems deploy localized models, build sovereign compute, or accept regulatory review. This will accelerate the rollout of sovereign data warehouse architectures.
3. The creator economy will shift to the agent economy
Creators won’t just produce content anymore—they’ll deploy agents to carry out real tasks for their communities. These agents will become extensions of trusted individuals, inheriting their credibility and transmitting intelligence capabilities through trust networks.
Of course, another kind of future is possible: an assistant that becomes absolutely dominant appears, deeply embedded in operating systems, browsers, and devices, directly establishing the connection between users and models while completely bypassing intermediaries. If that happens, the trust layer would be built directly into that assistant.
But historical experience points to a more diversified landscape. Even the most dominant platforms—from mobile operating systems to social networks—ultimately grow by leveraging ecosystems. Intelligence might be general, but trust is always localized. No matter which architecture ultimately wins, the core challenge won’t change: AI adoption is no longer mainly a model problem—it’s a distribution and trust problem.
Conclusion: niche markets are the real global markets
The biggest fallacy of the explorer era is believing that intelligence is a standardized commodity—a single global API that performs identically in a Manhattan conference room and in a village in Karnataka. Sovereignty barriers reveal a harsher truth: intelligence may be universal, but adoption is not.
National and local institutions don’t want a black-box external system. They want control, the ability to adapt to specific scenarios, and the right to shape intelligence within their own boundaries. They don’t want ready-made applications—they want underlying channels: infrastructure, safety systems, and compute resources, so citizens can build autonomously.
The growth logic of 2026 is no longer about finding a one-size-fits-all user experience; it’s about product elasticity—letting intelligence adapt to local scenarios, regulation, and culture without losing core capabilities. If we continue to chase global consumers directly, we’ll always remain just an external layer—fragile, substitutable, and destined to relive the many shocks I experienced at YouTube.
But when we shift toward empowering intermediaries, the model changes completely: moving from chat interfaces to agents, from persuading users to empowering trust intermediaries, and from fighting regulation to turning regulation into a moat.
AI scales not through models, but through trust.
The winner of the AI competition won’t be the company with the smartest model. It will be the one that can multiply local heroes—teachers, accountants, community leaders—by ten in capability. Because ultimately, intelligence is transmitted through systems, while adoption happens among people.