Is AI that can predict the future truly smart?
Elon Musk's point, put in plain language, is this: AI that can't see the future is at best a sophisticated calculator. It may sound like a joke, but it's actually very hardcore. If an autonomous car can't predict a vehicle suddenly changing lanes, then no matter how many sensors it has, they're just decoration; if a robot can't assess the risk of its next move, then even its most graceful motions are just "blind dancing."
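The lane-change example can be made concrete with a toy sketch. Nothing here resembles a real autonomous-driving stack; the function names, the constant-velocity forecast, the 1.75 m lane half-width, and the 0.5 risk threshold are all illustrative assumptions. The point is only the structural difference: a predictive agent acts when the *forecast* crosses a risk threshold, not when the event has already happened.

```python
# Toy sketch (illustrative only, not a real driving system): predict a
# neighbor car's lateral position a short horizon ahead and act early.

def predict_cut_in(lateral_offset_m: float, lateral_velocity_mps: float,
                   horizon_s: float = 1.5) -> float:
    """Forecast the neighbor's lateral offset `horizon_s` seconds ahead
    using a constant-velocity world model; return a 0..1 cut-in risk."""
    predicted_offset = lateral_offset_m + lateral_velocity_mps * horizon_s
    lane_half_width = 1.75  # assumed half lane width, meters
    # Risk rises as the predicted position approaches our lane boundary.
    return max(0.0, min(1.0, 1.0 - predicted_offset / lane_half_width))

def choose_action(risk: float) -> str:
    # Decide on the "yellow light": before the cut-in is certain.
    return "brake" if risk > 0.5 else "cruise"

# The neighbor is still fully in its own lane (1.6 m from the boundary)
# but drifting toward us at 1.0 m/s. A purely reactive agent sees
# nothing wrong yet; the predictive one already slows down.
risk = predict_cut_in(lateral_offset_m=1.6, lateral_velocity_mps=-1.0)
print(choose_action(risk))  # → brake
```

A reactive agent in the same scene would return "cruise", because the current offset alone looks safe; only the forecast reveals the danger.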
Prediction, at its core, is about building models—models of the physical world, human behavior, and market fluctuations. The closer a model is to reality, the "smarter" it is. This also explains why large models are increasingly focusing on building "world models" rather than just generating language.
Interestingly, humans are not very accurate at predicting the future either. Economists predict recessions every year; stock investors predict bull markets every day. If prediction accuracy were the only standard for intelligence, humans would have been demoted long ago.
Perhaps what Musk wants to express is: wisdom isn't about answering questions, but about seeing problems in advance. A truly smart agent doesn't brake only when the red light is on, but makes decisions as soon as the yellow light flashes. This "lead time" is what creates a technological moat.
So the question becomes: are we pursuing "prediction accuracy" or "control over uncertainty"? The latter is the real challenge. #深度创作营