Pat Gelsinger, who recently stepped down from Intel's top spot, just dropped some interesting observations about the AI chip race. He's particularly bullish on what Google has achieved with Gemini 3 and its custom silicon, the TPU.
What caught his attention? The sheer scale at which these tensor processing units are being deployed. We're talking about a company that's managed to spin up massive computational infrastructure to meet surging AI workload demands. For someone who spent decades navigating the semiconductor wars, that's not casual praise.
The timing's worth noting too. As traditional GPU supply chains stay tight, alternative architectures like TPUs are grabbing more spotlight. These specialized chips aren't just backup options anymore—they're becoming primary choices for specific AI tasks, especially in training large language models.
For the crypto and Web3 space, this matters more than you'd think. Decentralized AI projects and on-chain ML applications will increasingly depend on diverse chip ecosystems. When infrastructure competition heats up, innovation tends to follow.
ContractCollector (8h ago): This time Gelsinger has really seen it clearly: TPUs are indeed different... GPUs are almost maxed out, and custom silicon is the future.