On the afternoon of June 24, Dr. Ru-Yuan Zhang, Associate Researcher at the School of Psychology, Shanghai Jiao Tong University, delivered an academic lecture titled "Bridging the Brain and AI in Perceptual Learning". In his talk, Dr. Zhang shared his team’s recent innovative research at the intersection of brain cognition and artificial intelligence.

The research utilizes AI models to reverse-engineer the mechanisms of human learning—like using a precise “digital microscope” to probe the mysteries of perceptual learning in the brain. These findings aim to shed new light on mental processes and ultimately provide new pathways for the diagnosis and intervention of psychiatric disorders.
AI “Participant”: A Tireless Assistant in the Lab
Imagine recruiting a "visual expert" capable of recognizing everything in the world and asking it to complete tedious fine-grained tasks—such as judging whether a line on a screen is tilted at 43° or 45°. This is exactly what Dr. Ru-Yuan Zhang’s team set out to do. They adapted the classic image recognition neural network AlexNet and assigned it tasks akin to human psychophysics experiments.
“This AI ‘participant’ doesn’t get tired, doesn’t ask for compensation, and can work 24 hours a day,” Dr. Zhang joked. Surprisingly, once fine-tuned on these simple tasks, the pretrained model—originally designed for complex visual environments—exhibited behavioral learning curves strikingly similar to those of human participants (e.g., accuracy improving with training).
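The team's actual setup fine-tuned AlexNet on rendered stimuli; as a self-contained stand-in, the sketch below replaces the network's feature layer with 32 hypothetical orientation-tuned units and fine-tunes only a linear readout on the 43°-vs-45° judgement. All names, tuning widths, and noise levels here are illustrative assumptions, not the team's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deep network's feature layer: 32 units with
# Gaussian orientation tuning (width 10 deg) plus trial-to-trial noise.
PREF = np.linspace(0.0, 90.0, 32)   # preferred orientations, degrees

def features(theta_deg, noise_sd=0.2):
    clean = np.exp(-0.5 * ((theta_deg - PREF) / 10.0) ** 2)
    noisy = clean + rng.normal(0.0, noise_sd, size=PREF.shape)
    return np.append(noisy, 1.0)    # append a constant bias feature

def accuracy(w, n_trials=400):
    """Fraction of correct 43-deg-vs-45-deg judgements with readout w."""
    correct = 0
    for _ in range(n_trials):
        label = int(rng.random() < 0.5)          # 0 -> 43 deg, 1 -> 45 deg
        r = features(43.0 if label == 0 else 45.0)
        correct += int(int(w @ r > 0.0) == label)
    return correct / n_trials

w = np.zeros(PREF.size + 1)          # untrained readout: chance performance
acc_before = accuracy(w)

# Fine-tune only the readout, as in transfer learning on a frozen backbone,
# with an online logistic-regression update per simulated trial.
for _ in range(3000):
    label = int(rng.random() < 0.5)
    r = features(43.0 if label == 0 else 45.0)
    p = 1.0 / (1.0 + np.exp(-w @ r))
    w += 0.1 * (label - p) * r

acc_after = accuracy(w)              # a learning curve in miniature
```

The qualitative point survives the simplification: a generic front end plus a small amount of task-specific tuning yields the rising accuracy curve the lecture describes.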
But this was only the beginning. When the team looked into the internal workings of the artificial neural network, they discovered remarkable parallels with the biological brain: population decoding accuracy increased, individual “neurons” became more selective to specific orientations (with sharper tuning curves), and neural noise correlations decreased. These are hallmark phenomena previously observed in both single-unit recordings from monkeys and human brain imaging studies.
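One of those signatures, the drop in noise correlations, can be measured directly from a trials-by-units activation matrix recorded under a single repeated stimulus. A minimal sketch on simulated data (the "pre" and "post" responses here are fabricated for illustration, not recordings):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_noise_correlation(responses):
    """Average off-diagonal correlation of trial-to-trial fluctuations.

    responses: (n_trials, n_units) activations to ONE repeated stimulus,
    so any correlation left over is 'noise' correlation, not stimulus-driven.
    """
    c = np.corrcoef(responses.T)
    off_diagonal = c[~np.eye(c.shape[0], dtype=bool)]
    return off_diagonal.mean()

n_trials, n_units = 2000, 16
shared = rng.normal(size=(n_trials, 1))          # noise shared across units
private = rng.normal(size=(n_trials, n_units))   # independent per-unit noise

pre_training = shared + 0.5 * private    # strong shared noise: high correlation
post_training = 0.2 * shared + private   # mostly private noise: low correlation

rho_pre = mean_noise_correlation(pre_training)
rho_post = mean_noise_correlation(post_training)
```

The same matrix, sliced by stimulus instead of by trial, would give the tuning curves whose sharpening the lecture also mentions.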
A Shift in Understanding: Learning as “Noise Reduction” Rather Than “Signal Amplification”
The AI models not only replicated behavioral phenomena but also helped the team uncover the underlying mechanisms. For years, scientists have debated how perceptual learning enhances sensitivity in the brain—is it driven by the amplification of neural signals or by the suppression of background noise? Leveraging the high-dimensional analytical power of AI models, Dr. Zhang’s team was able to provide a clear answer for the first time.
Their findings reveal that the core driver of learning optimization lies in manifold contraction—in simple terms, the internal representations within the brain (or AI model) become more focused and compact when processing similar information. Like improving the signal-to-noise ratio on a radio, this refinement allows useful signals to stand out more clearly. In contrast, the traditionally assumed mechanism of “signal enhancement” contributed very little.

Even more unexpectedly, the team observed for the first time a phenomenon of systematic rotation within the model’s representational space—an overall geometric shift in the mode of information processing. These changes do not occur in isolation; in fact, they interact in complex ways, sometimes enhancing and sometimes counterbalancing each other.
“It’s like making soup,” Dr. Zhang explained with a vivid metaphor. “Adding salt can enhance the flavor, and adding sugar can do the same—but adding both together might ruin the taste. With AI as our ‘measuring stick’, we were able to quantify which ingredients (mechanisms) contribute most to the improvement in perceptual precision, and how they interact.”

These insights provide a unified and powerful computational framework for understanding how training improves perceptual accuracy in the human brain.
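The contrast between the two candidate mechanisms can be illustrated with a toy decomposition, under the simplifying assumption that discriminability is class-mean separation divided by within-class spread (this is not the team's actual manifold analysis, and all data below are simulated):

```python
import numpy as np

rng = np.random.default_rng(2)

def signal_and_noise(X_a, X_b):
    """Class-mean separation ('signal') and within-class spread ('noise')."""
    signal = np.linalg.norm(X_a.mean(axis=0) - X_b.mean(axis=0))
    noise = 0.5 * (X_a.std(axis=0).mean() + X_b.std(axis=0).mean())
    return signal, noise

d = 50                                    # representation dimensionality
mu_a, mu_b = np.zeros(d), np.full(d, 0.1)

# 'Pre-training': broad within-class clouds around the two class means.
pre_a = mu_a + rng.normal(0.0, 1.0, size=(4000, d))
pre_b = mu_b + rng.normal(0.0, 1.0, size=(4000, d))

# 'Post-training': SAME class means (no signal enhancement), but the clouds
# contract around them—the manifold-contraction scenario.
post_a = mu_a + rng.normal(0.0, 0.5, size=(4000, d))
post_b = mu_b + rng.normal(0.0, 0.5, size=(4000, d))

sig_pre, noise_pre = signal_and_noise(pre_a, pre_b)
sig_post, noise_post = signal_and_noise(post_a, post_b)

# Discriminability rises even though the 'signal' term barely moves.
snr_pre, snr_post = sig_pre / noise_pre, sig_post / noise_post
```

In this toy version, all of the improvement is attributable to the shrinking noise term—the "noise reduction rather than signal amplification" conclusion in miniature.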
Predicting the Future: The Brain’s Internal “Metronome”
The value of learning lies not only in perceiving the present more clearly, but also in anticipating what comes next. Dr. Zhang’s team explored how the brain internalizes patterns in the environment through predictive learning. In one auditory experiment, participants were asked to judge whether a sound would occur at regular time intervals. Interestingly, even in trials where no sound was presented, the human brain generated rhythmic neural oscillations—around 2 Hz—precisely at the expected time points.
To better understand this phenomenon, the team trained a recurrent neural network (RNN) to simulate the task. Remarkably, once trained, the RNN exhibited spontaneous 2 Hz oscillations in its internal activity—even when receiving silent input.

“This is like an internal metronome,” Dr. Zhang explained. “Learning enables the brain—or the AI model—to internalize temporal statistics from the environment, forming an inherent ability to predict future events. This may reflect one of the deeper essences of learning.”
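A two-unit linear sketch shows how recurrent weights alone can sustain such a rhythm: rotational dynamics tuned to 2 Hz keep oscillating after all input stops. This is an illustration of the principle, not the team's trained RNN:

```python
import numpy as np

DT = 0.01                       # 10 ms simulation step
FREQ = 2.0                      # target rhythm, Hz
phi = 2.0 * np.pi * FREQ * DT   # phase advance per step

# Rotational recurrent weights: the simplest dynamics whose eigenvalues
# put the network's energy at exactly FREQ Hz.
W = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

x = np.array([1.0, 0.0])        # state left behind by earlier rhythmic input
trace = []
for _ in range(100):            # 1 s with NO external input
    x = W @ x                   # recurrence alone sustains the rhythm
    trace.append(x[0])

# One full cycle every 1/FREQ = 0.5 s, i.e. every 50 steps: the internal
# 'metronome' keeps ticking at the expected time points.
```

A trained RNN discovers weights with this kind of rotational structure on its own; the hand-built matrix here just makes the mechanism explicit.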
From Games to Therapy: Enhancing the Ability to “Learn How to Learn”
The ultimate goal of this foundational research is practical application. Dr. Zhang shared findings from his advisor’s work at the University of Geneva and his team’s follow-up studies, exploring why action video games (e.g., Counter-Strike) can improve a broad range of cognitive functions such as working memory and attention.

The research suggests that the key value of such training may not lie in directly enhancing specific abilities, but in boosting a meta-learning capacity—the ability to “learn how to learn.” In other words, action games may enhance the brain’s overall efficiency in acquiring new skills.
Experimental results confirmed that gamers learned significantly faster than non-gamers when performing new perceptual learning tasks. More importantly, non-gamers who underwent 45 hours of scientifically designed game training also showed steeper learning curves afterward.

“It’s similar to how athletes tend to pick up new sports more quickly,” Dr. Zhang explained. “Gaming may enhance the brain’s general learning efficiency.” Combined with neural modulation techniques such as transcranial electrical stimulation (tES), this gamified cognitive training has already been developed into a digital therapeutic intervention—approved by the U.S. FDA for treating ADHD—and related translational research is also underway in China.
A Two-Way Exchange: AI Empowering Brain Science and Mental Health
Dr. Zhang emphasized that the intersection of brain science and AI is a two-way exchange. On one hand, AI models serve as unprecedentedly powerful tools for probing the complex mechanisms of the human brain. On the other hand, deep insights into how the brain learns—its efficiency, low energy consumption, and robustness against interference—can in turn inspire the next generation of AI algorithms, such as those addressing the challenge of catastrophic forgetting.
As an affiliated researcher at the Shanghai Mental Health Center, Dr. Zhang and his team are actively working to apply these findings to psychiatric intervention research. “The true value of interdisciplinary research,” he noted, “emerges when AI not only imitates the brain but helps us understand the mechanisms of mental disorders and develop new treatments.”

In closing, Dr. Zhang shared his vision: to ensure that discoveries made in the lab ultimately illuminate the path to recovery for people living with mental illness.