Intel Corporation is sending a signal the market is underestimating: power in AI computing architecture is flowing back from the "GPU-centric" model to the "CPU scheduling layer." On the latest earnings call, CEO Patrick Gelsinger (in line with the direction Lip-Bu Tan has emphasized in related business communications) delivered a pointed structural judgment:

**Core turning point:** For the past several years, high-performance computing has been dominated by a single narrative: **GPU = core compute accelerator = the only growth engine.** But the latest industry feedback shows this is changing: **the CPU is re-emerging as the "scheduling layer + control plane" of AI systems.**

**A structural shift in AI architecture:** A new round of AI systems is taking shape with a three-layer structure:

**GPU:** the compute execution layer, responsible for large-scale matrix math; still the core of training and inference.
**CPU:** the system scheduling layer (rising again); it allocates tasks, coordinates memory and resources, and controls workflows.
**Key change:** the CPU is no longer merely an "assistant" but the **"command center."**

**Accelerator ecosystem:** a scenario-optimization layer specialized for vertical tasks, **scheduled uniformly by the CPU.**
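The three-layer split above can be sketched as a toy control-plane program. This is a minimal, hypothetical illustration, not Intel's design: the device names (`gpu0`, `npu0`), the task kinds, and the routing table are all invented for the example. The point it shows is that the CPU-side code decides *where* work runs and coordinates the workers, while the heavy math is delegated to the execution layer.

```python
# Hypothetical sketch: CPU as control plane, dispatching work to an
# execution layer (stand-in "devices"). All names are illustrative.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str  # "matmul" -> GPU-style executor, "vision" -> vertical accelerator

def run_on_device(device: str, task: Task) -> str:
    # Execution layer: in a real system this would launch a kernel
    # on a GPU or a specialized accelerator.
    return f"{task.name} executed on {device}"

def schedule(tasks: list[Task]) -> list[str]:
    # Control plane: the CPU routes each task and coordinates the
    # workers; it does not perform the matrix math itself.
    routing = {"matmul": "gpu0", "vision": "npu0"}
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(run_on_device, routing[t.kind], t) for t in tasks]
        return [f.result() for f in futures]

results = schedule([Task("train-step", "matmul"), Task("detect", "vision")])
print(results)
```

In this framing, swapping or adding an accelerator only changes the routing table, not the execution code, which is the scheduling-layer leverage the article describes.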