Zhejiang University research team proposes a new approach: teaching AI how the human brain understands the world

Large models keep getting bigger, and the mainstream view is that more parameters bring a model closer to the way humans think. However, a paper published by a Zhejiang University team on April 1 in Nature Communications (original link: https://www.nature.com/articles/s41467-026-71267-5) puts forward a different viewpoint. The team found that as the scale of a model (chiefly SimCLR, CLIP, and DINOv2) increases, its ability to recognize specific things does keep improving, but its ability to understand abstract concepts does not improve; in fact, it declines. As the parameter count grows from 22.06 million to 304.37 million, performance on concrete concept tasks rises from 74.94% to 85.87%, while performance on abstract concept tasks drops from 54.37% to 52.82%.

Differences between how humans and models think

When the human brain processes concepts, it first forms a set of classification relationships. A swan and an owl look different, yet humans still place both in the category of birds; moving up a level, birds and horses fall into the broader category of animals. When people encounter something new, they first ask what it resembles from past experience and roughly which category it belongs to. People keep learning new concepts, organizing their experience, and then using these relationships to recognize new things and adapt to new situations.

Models also classify, but the process by which they form categories is different: they rely mainly on patterns that recur in large-scale data. The more often a particular object appears, the easier it is for the model to recognize it. Broader categories are much harder, because the model must capture commonalities across many different objects and group them under one class. Existing models still have clear weaknesses here: as parameters increase, performance on concrete concept tasks improves, while performance on abstract concept tasks sometimes declines.

What the human brain and models have in common is that both form an internal set of classification relationships, but their emphases differ. The brain's higher-level visual areas naturally distinguish broad categories such as living organisms and non-living things; models can separate specific objects, but struggle to form these broader classifications stably. This difference makes it easier for the human brain to apply old experience to new objects, so that when we face something we have never seen before, we can classify it quickly. Models, by contrast, lean more heavily on existing knowledge, so when they encounter new objects they are more likely to get stuck on surface features. The method proposed in the paper is built around this observation: it uses brain signals to constrain the internal structure of the model, bringing it closer to the human brain's way of classifying.

Zhejiang University Team’s Solution

The team's solution is distinctive: rather than simply piling on more parameters, it supervises the model with a small amount of brain signals, recorded from brain activity while people look at images. In the paper's own words, the goal is to transfer human conceptual structures to DNNs; that is, to pass on to the model how the human brain classifies, how it generalizes, and how it groups similar concepts together.
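The paper's exact objective is not reproduced here, but a common way to transfer representational structure is to add a representational-similarity alignment term to the usual training loss. The sketch below is a minimal, hypothetical illustration of that idea; the functions `rdm` and `alignment_loss` and the loss weighting are my assumptions, not the paper's implementation.

```python
# Hedged sketch: pull a model's representational geometry toward recorded
# brain responses by matching representational dissimilarity matrices (RDMs).
import torch
import torch.nn.functional as F

def rdm(features: torch.Tensor) -> torch.Tensor:
    """Representational dissimilarity matrix: 1 - cosine similarity
    for every pair of stimuli (rows of `features`)."""
    z = F.normalize(features, dim=1)
    return 1.0 - z @ z.T

def alignment_loss(model_feats: torch.Tensor, brain_feats: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between the model's and the brain's representational
    geometry, so stimuli the brain treats as similar stay close in the model."""
    return F.mse_loss(rdm(model_feats), rdm(brain_feats))

# During fine-tuning, this term would be mixed with the usual objective:
#   loss = task_loss + lambda_align * alignment_loss(feats, brain_feats)
```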

The team ran experiments with 150 known training categories and 50 unseen test categories. The results show that as training progresses, the distance between the model's representations and the brain's representations steadily shrinks. This holds for both the seen and the unseen categories, indicating that the model is not merely memorizing individual samples but is genuinely learning a concept organization closer to the brain's.
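One plausible way to track that shrinking distance is to correlate the pairwise-distance structure of the model's embeddings with that of the brain responses, logged separately for the seen and unseen category sets. The sketch below assumes Pearson correlation on Euclidean distances; the paper's actual metric may differ.

```python
# Hedged sketch: a single scalar distance between model and brain geometry.
import torch

def representational_distance(model_feats: torch.Tensor,
                              brain_feats: torch.Tensor) -> torch.Tensor:
    """1 - Pearson correlation between the pairwise-distance vectors of the
    model and brain features; 0 would mean identical geometry."""
    x = torch.pdist(model_feats)  # condensed pairwise Euclidean distances
    y = torch.pdist(brain_feats)
    x, y = x - x.mean(), y - y.mean()
    return 1.0 - (x @ y) / (x.norm() * y.norm() + 1e-8)

# Logged per epoch for the 150 seen and 50 unseen categories separately:
# a drop on the unseen set suggests the model is acquiring the brain's
# concept organization rather than memorizing training stimuli.
```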

After this training, the model learns better from very few samples and also performs better in new situations. In a task that provides only a handful of examples but requires distinguishing abstract concepts such as living versus non-living things, the model improved by 20.5% on average, even outperforming control models with far larger parameter counts. In a further 31 sets of specialized tests, several models showed improvements of close to 10%.
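A standard way to run such a few-shot abstract-concept test is nearest-prototype classification on frozen embeddings: build one prototype per class from a handful of labeled examples, then assign each query to the closest prototype. The sketch below shows that generic pattern; it is an assumed evaluation setup, not the paper's exact protocol.

```python
# Hedged sketch: few-shot living vs. non-living probe on frozen embeddings.
import torch
import torch.nn.functional as F

def few_shot_predict(support_feats: torch.Tensor,
                     support_labels: torch.Tensor,
                     query_feats: torch.Tensor,
                     n_classes: int = 2) -> torch.Tensor:
    """Nearest-prototype classification: each class prototype is the mean
    of its few labeled support embeddings."""
    protos = torch.stack([
        support_feats[support_labels == c].mean(dim=0)
        for c in range(n_classes)
    ])
    # Cosine similarity between each query and each class prototype.
    sims = F.normalize(query_feats, dim=1) @ F.normalize(protos, dim=1).T
    return sims.argmax(dim=1)  # predicted class index per query

# Usage, e.g. with labels 0 = living, 1 = non-living and 5 support images
# per class: preds = few_shot_predict(sup_z, sup_y, qry_z)
```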

In recent years, the industry's familiar path has been ever-larger models. The Zhejiang University team chose a different direction, shifting from "bigger is better" to "structured is smarter." Scaling up is indeed useful, but it mainly boosts performance on familiar tasks. Human-like abstract understanding and transfer ability are just as crucial for AI, and achieving them will require making AI's thinking structure more like the human brain's. The value of this direction is that it draws the industry's attention back from pure scaling to the structure of cognition itself.

Neosoul and the Future

This raises a bigger possibility: the evolution of AI need not happen only during model training. Training determines how an AI organizes concepts and forms higher-quality judgment structures, but once it moves into the real world, the next layer of evolution is only just starting: how AI agents' judgments are recorded, how they are tested, and how they keep growing through real-world competition, much as humans learn and evolve. This is exactly what Neosoul is working on. Neosoul does not just have AI agents produce answers; it places them in a system that continuously predicts, verifies, reconciles, and filters, so that they optimize themselves through the cycle of prediction and outcome, keeping better structures and eliminating worse ones. The Zhejiang University team and Neosoul are pointing at the same goal: AI that is not only good at answering questions but equipped with comprehensive thinking ability, and that keeps evolving.
