
The brain in the machine: How AI could help explain how we think

admin, Database Expert
April 25, 2026
6 min read
#Artificial Intelligence #Life sciences

Scientists are using large AI models to predict patterns of brain activity at scale, a development that researchers say is pushing neuroscience toward a new kind of digital imaging. The recent release of Meta’s TRIBE V2, a foundation model designed to predict neural activity across tasks, individuals and sensory inputs, brings that shift into focus. Its creators describe it as a system designed to generalize how the brain responds across contexts, drawing on datasets that link stimuli such as video, audio and language to recorded brain activity. Many researchers frame the broader goal as understanding how the brain turns perception into thought.

“AI is a paradigm shift in science in general, but in neuroscience in particular,” Jean-Rémi King, a researcher at Meta AI and one of the authors of TRIBE V2, told IBM Think in an interview.

Models like TRIBE V2 aim to learn from brain-recording data pooled across many studies, subjects and tasks at once, rather than being trained on a single experiment. King and others describe the approach as a step toward connecting findings across conditions and building more unified models of brain activity.

“Like a new imaging technology or an invasive recording technique, I see large AI models as a tool for understanding how the brain works,” Takuya Ito, a Research Scientist at IBM Research, told IBM Think in an interview.

For decades, the field of brain research has advanced through tightly controlled experiments that isolate specific variables, King said. Many researchers say that the approach produces precise insights, but also leaves the broader picture fragmented, with findings that do not always connect.

King said TRIBE V2 attempts to address that fragmentation by training across modalities and conditions, learning how patterns of brain activity shift as inputs change. The model draws on datasets of people watching films, listening to speech and processing text, and maps predicted neural responses across brain regions.

“TRIBE is the first foundation model for brain encoding, which is trained across modalities, brain areas, subjects and tasks,” Stéphane d’Ascoli, a Research Scientist at Meta AI, told IBM Think in an interview.

Training across so many different inputs and people gives the model unusual reach, King and others said. It can generate predictions for stimuli it has never encountered and, in some cases, for individuals it has never studied. However, they caution that any such prediction still requires experimental testing.

Limits remain, particularly around data. Ito pointed to the cost and difficulty of collecting brain measurements as a central constraint on the field, noting that imaging requires specialized equipment and controlled conditions.

Because direct measurements of brain activity are hard to gather at scale, researchers have looked for workarounds. One is to build predictions from data that is already easy to collect, such as demographics or basic physiological markers, and use it to estimate what a given person’s neural activity might look like.

The approach has pushed the field toward models that stretch existing datasets and produce predictions scientists can then test in the lab. Ito framed the shift as a way to speed up hypothesis generation, not as something that replaces the verification that follows.
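The workflow described above (fit a mapping from measurable features to recorded neural activity, then test its predictions on data the model has not seen) is the core of a brain encoding model. A minimal sketch on purely synthetic data, with ridge regression standing in for the real fitting procedure; every name and number here is hypothetical and not taken from TRIBE V2:

```python
import numpy as np

# Toy encoding-model sketch: predict per-voxel responses from stimulus
# features via ridge regression. All data is synthetic; real encoding
# models fit features (e.g. from an AI model) to actual fMRI recordings.
rng = np.random.default_rng(0)

n_stimuli, n_features, n_voxels = 200, 50, 10
X = rng.standard_normal((n_stimuli, n_features))       # stimulus features
true_W = rng.standard_normal((n_features, n_voxels))   # unknown feature-to-voxel map
Y = X @ true_W + 0.1 * rng.standard_normal((n_stimuli, n_voxels))  # noisy responses

# Closed-form ridge solution: W = (X^T X + lam*I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Evaluate on held-out stimuli: per-voxel correlation of predicted vs. observed
X_test = rng.standard_normal((50, n_features))
Y_test = X_test @ true_W + 0.1 * rng.standard_normal((50, n_voxels))
Y_pred = X_test @ W
corrs = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(corrs):.2f}")
```

The held-out evaluation mirrors the point Ito makes: a prediction about unseen stimuli is a hypothesis, and it still has to be checked against real measurements.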

TRIBE V2’s predictions work in part because of a finding that has emerged across the field over the past several years: the internal patterns of large AI models often correspond to measurements taken from living brains. Encoding models rely on this correspondence, and it has prompted renewed research into how much the two systems share at a structural level.

One area of overlap, researchers say, is in how information is organized. In both AI models and brains, knowledge is distributed across a network of connections rather than localized in a single region.

“Certain aspects of processing and the way concepts are stored in a networked structure bear resemblance between AI and neuroscience,” Stanislaw Wozniak, Staff Research Scientist at IBM Research’s Zurich lab, told IBM Think in an interview.

Those similarities have drawn attention to possible shared principles between artificial and biological systems. At the same time, researchers consistently warn that similar outputs do not imply similar mechanisms.

“The brain is a wet piece of matter,” Ito said. “AI systems are not.”

Differences in how the systems learn and respond to new conditions can lead to divergent behavior, particularly when faced with unfamiliar inputs, Wozniak said.
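One common way researchers quantify the correspondence between a model’s internal patterns and brain recordings is representational similarity analysis, which compares how the two systems organize the same set of stimuli rather than matching activity values directly. A toy sketch on synthetic data (all sizes and variable names are hypothetical):

```python
import numpy as np

# Toy representational-similarity sketch: correlate the pairwise
# dissimilarity structure of model features and (synthetic) brain responses.
rng = np.random.default_rng(1)

n_stimuli = 30
model_feats = rng.standard_normal((n_stimuli, 64))          # e.g. model activations
brain_resps = model_feats @ rng.standard_normal((64, 40)) \
    + 0.5 * rng.standard_normal((n_stimuli, 40))            # noisy linear readout

def rdm(acts):
    """Representational dissimilarity matrix: 1 - correlation between stimuli."""
    return 1.0 - np.corrcoef(acts)

# Compare only the upper triangles (the matrices are symmetric, diagonal is 0)
iu = np.triu_indices(n_stimuli, k=1)
similarity = np.corrcoef(rdm(model_feats)[iu], rdm(brain_resps)[iu])[0, 1]
print(f"RDM correlation: {similarity:.2f}")
```

A high correlation here means the two systems group stimuli in similar ways, which, as the researchers quoted in this piece stress, says nothing about whether they arrive at that grouping by the same mechanism.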
Models can perform well under familiar conditions but fail at small variations in a task that biological systems handle more flexibly. “The way AI models learn and operate is different than the brain, and is typically much less robust to unforeseen circumstances,” Wozniak said.

That gap has led some researchers to caution that strong predictive performance can be mistaken for deeper understanding, especially when outputs appear humanlike. “Confusing the ability of AI systems to predict with the ability to understand” remains a central misconception, Konrad Kording, a professor of neuroscience at the University of Pennsylvania, told IBM Think in an interview.

What large AI models like TRIBE V2 actually explain about the brain is a matter of sharp disagreement. To some researchers, the line between prediction and explanation is the whole point: a model can reproduce a pattern of brain activity without revealing why the brain produces it. For other researchers, the engineering achievement and the scientific one are separate things, and conflating them is the mistake.

“Large generative AI models are beautiful feats of engineering that may or may not be useful,” Karl Friston, a neuroscientist and theoretician at University College London, told IBM Think in an interview. “By construction, they cannot advance our understanding of the brain. To understand is to explain.”

Others place more weight on what the models can surface, even without full explanation, arguing that identifying patterns across complex datasets can guide further study and experimentation. “From that perspective, current large AI models are actually very helpful as tools for searching and interpreting vast amounts of neuroscientific data to discover patterns and improve our understanding,” Wozniak said.

The clinical promise is what most excites King.
He sees a near future in which neurological disorders are caught earlier, treatments are better matched to the patient and doctors rely less on the expensive imaging machines that dominate current practice. “How brain disorders are being diagnosed and taken care of today is fairly coarse,” King said.

Closer ties between AI and neuroscience may also reshape AI itself. King pointed to differences in efficiency between biological and artificial systems as an area of research interest, noting how little data and energy the brain requires compared with current models. “Humans learn to speak from a few million words of exposure, and with a 1.3 kg organ running on 20 watts,” King said.

The field remains at an early stage, with many potential applications still emerging as models improve and datasets expand, d’Ascoli said. “I think we are only starting to scratch the surface of applications,” he added.
