Nvidia's Jensen Huang praises OpenClaw AI agent: 3 weeks of popularity surpasses 30 years of Linux development

IT Home, March 6 — Technology outlet Wccftech published a blog post yesterday (March 5) reporting that, at a Morgan Stanley conference, NVIDIA CEO Jensen Huang said Agentic AI is approaching a major historical turning point.

Jensen Huang praised the open-source software OpenClaw, calling it “the most important software release of our time.” He emphasized that while the Linux operating system took about 30 years to reach its current level of popularity, OpenClaw overtook it in just 3 weeks, becoming the most downloaded open-source software in history.

Jensen Huang compared the AI industry to a “five-layer cake” spanning energy, chips and computing infrastructure, cloud data centers, AI models, and finally the application layer, and said it is the application layer that delivers the greatest returns to hyperscale cloud providers and frontier labs. AI agents like OpenClaw, operating in highly personalized environments, can precisely replicate and take over human workloads.

According to the blog post cited by IT Home, OpenClaw became widely popular not because its underlying implementation is especially complex, but because it proved to the world that AI can directly change consumers’ lives and dramatically simplify tedious, repetitive tasks.

Jensen Huang explained that Agentic AI needs only a series of prompts to carry out tasks that would otherwise demand significant time and expertise (such as bulk web searches, image generation, and complex analysis). This has driven an approximately 1,000-fold surge in token consumption, directly creating a “computing power vacuum”: as long as Agentic AI continues to penetrate human work, existing hardware deployments, however large, will remain compute-limited.

To address this computing-power imbalance, NVIDIA is shifting the focus of its chip-architecture R&D. The earlier Hopper and Blackwell architectures centered mainly on AI training workloads; to tackle the long-context processing bottleneck introduced by Agentic AI, the upcoming Vera Rubin architecture will significantly expand on-board memory and employ platforms such as ICMS.
