“Nature Machine Intelligence” Study: AI Language Models Can Predict How the Human Brain Responds to Visual Stimuli
Guest professor at the Cognitive Computational Neuroscience Lab, Freie Universität Berlin, uses language models similar to those behind ChatGPT
№ 130/2025 from Aug 11, 2025
Large language models (LLMs) from the field of artificial intelligence can predict how the human brain responds to visual stimuli. This is shown in a new study published in Nature Machine Intelligence by Professor Adrien Doerig (Freie Universität Berlin) together with colleagues from Osnabrück University, University of Minnesota, and Université de Montréal, titled “High-Level Visual Representations in the Human Brain Are Aligned with Large Language Models.” For the study, the team of scientists used LLMs similar to those behind ChatGPT.
When we look at the world, our brains do not just recognize objects like “a tree” or “a car” – they also grasp meaning, relationships, and context. Until recently, scientists lacked tools to capture and quantitatively investigate this high-level visual understanding. In this new study, a team led by cognitive neuroscientist Adrien Doerig, guest professor at the Cognitive Computational Neuroscience Lab, Freie Universität Berlin, used LLMs to extract “semantic fingerprints” from scene descriptions.
The researchers used these “semantic fingerprints” to model functional MRI data recorded while participants viewed everyday images depicting scenes such as “children playing Frisbee in the schoolyard” or “a dog standing on a sailing boat.” Leveraging LLM representations allowed the team to predict neural activity and to decode textual descriptions of what participants were seeing based solely on the neuroimaging measurements.
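As a rough illustration of the general idea (not the authors' actual pipeline), an LLM-based encoding model can be sketched as: embed each image's caption to get its “semantic fingerprint,” then fit a linear (ridge) regression from fingerprints to voxel responses and test prediction on held-out images. In this minimal sketch, random vectors stand in for real caption embeddings and fMRI data, and all dimensions are made up for demonstration.

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)

# Placeholder data: in a real analysis these would be LLM caption
# embeddings ("semantic fingerprints") and fMRI voxel responses
# recorded for the same images.
n_images, emb_dim, n_voxels = 200, 64, 50
embeddings = rng.standard_normal((n_images, emb_dim))
true_map = rng.standard_normal((emb_dim, n_voxels)) * 0.1
voxels = embeddings @ true_map + 0.01 * rng.standard_normal((n_images, n_voxels))

# Ridge-regression encoding model: predict voxel activity from embeddings.
train, test = slice(0, 150), slice(150, 200)
lam = 1.0  # ridge penalty (arbitrary here)
X, Y = embeddings[train], voxels[train]
W = solve(X.T @ X + lam * np.eye(emb_dim), X.T @ Y)

# Evaluate on held-out images via the mean per-voxel correlation
# between predicted and observed responses.
pred = embeddings[test] @ W
r = np.array([np.corrcoef(pred[:, v], voxels[test][:, v])[0, 1]
              for v in range(n_voxels)])
print(f"mean held-out correlation: {r.mean():.2f}")
```

Decoding runs the same mapping in the opposite direction: predict a fingerprint from brain activity, then match it against candidate caption embeddings.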
To predict the semantic fingerprints directly from the images, they also trained computer vision models. These models – guided by linguistic representations – aligned better with human brain responses than state-of-the-art image classification systems.
“Our results suggest that human visual representations mirror how modern language models represent meaning – which opens new doors for both neuroscience and AI,” says Doerig.
Further Information
Publication
The study, “High-Level Visual Representations in the Human Brain Are Aligned with Large Language Models,” was published in Nature Machine Intelligence and is available online: https://doi.org/10.1038/s42256-025-01072-0.