Summary: The brain constantly acts as a prediction machine, continuously comparing sensory information with internal predictions.
Source: Max Planck Institute
This is in line with a recent theory on how our brain works: it is a prediction machine, which continuously compares sensory information that we pick up (such as images, sounds and language) with internal predictions.
“This theoretical idea is extremely popular in neuroscience, but the existing evidence for it is often indirect and restricted to artificial situations,” says lead author Micha Heilbron.
“I would really like to understand precisely how this works and test it in different situations.”
Brain research into this phenomenon is usually done in an artificial setting, Heilbron explains. To evoke predictions, participants are asked to stare at a single pattern of moving dots for half an hour, or to listen to simple sound patterns like ‘beep beep boop, beep beep boop.’
“Studies of this kind do in fact reveal that our brain can make predictions, but not that this always happens in the complexity of everyday life as well. We are trying to take it out of the lab setting. We are studying the same type of phenomenon, how the brain deals with unexpected information, but then in natural situations that are much less predictable.”
Hemingway and Holmes
The researchers analyzed the brain activity of people listening to stories by Hemingway or about Sherlock Holmes. At the same time, they analyzed the texts of the books using computer models, so-called deep neural networks. This way, they were able to calculate, for each word, how unpredictable it was.
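The study itself used GPT-2 to score each word's predictability, but the underlying idea, assigning every word a surprisal value of -log2 P(word | context), can be illustrated with a much simpler language model. The sketch below is a toy bigram model with add-alpha smoothing, not the researchers' actual setup; corpus, function names, and the smoothing choice are illustrative assumptions.

```python
import math
from collections import Counter

def train_bigram(corpus_tokens):
    """Count unigram and bigram frequencies from a token list."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return unigrams, bigrams

def surprisal(prev_word, word, unigrams, bigrams, vocab_size, alpha=1.0):
    """Surprisal in bits: -log2 P(word | prev_word), add-alpha smoothed."""
    count = bigrams[(prev_word, word)]
    context = unigrams[prev_word]
    p = (count + alpha) / (context + alpha * vocab_size)
    return -math.log2(p)

# Toy corpus standing in for the audiobook transcripts.
corpus = "the dog chased the cat and the dog chased the ball".split()
uni, bi = train_bigram(corpus)
V = len(uni)

# "chased" after "dog" occurs in this corpus -> low surprisal;
# "ball" after "dog" never occurs -> higher surprisal.
s_expected = surprisal("dog", "chased", uni, bi, V)
s_unexpected = surprisal("dog", "ball", uni, bi, V)
print(s_expected < s_unexpected)  # True
```

In the study, these per-word surprisal values (computed by GPT-2 rather than a bigram model) were regressed against brain responses: the finding reported below is that responses scale with exactly this kind of unpredictability score.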
For each word or sound, the brain makes detailed statistical expectations and turns out to be extremely sensitive to the degree of unpredictability: the brain response is stronger whenever a word is unexpected in the context.
“By itself, this is not very surprising: after all, everyone knows that you can sometimes predict upcoming language. For example, your brain sometimes automatically ‘fills in the blank’ and mentally finishes someone else’s sentences, for instance if they start to speak very slowly, stutter or are unable to think of a word. But what we have shown here is that this happens continuously. Our brain is constantly guessing at words; the predictive machinery is always turned on.”
More than software
“In fact, our brain does something comparable to speech recognition software. Speech recognizers using artificial intelligence also constantly make predictions and let themselves be guided by their expectations, just like the autocomplete function on your phone.
“Nevertheless, we observed a big difference: brains predict not only words, but make predictions on many different levels, from abstract meaning and grammar to specific sounds.”
There is good reason for the ongoing interest from tech companies who would like to use new insights of this kind to build better language and image recognition software, for example. But these sorts of applications are not the main aim for Heilbron.
“I would really like to understand how our predictive machinery works at a fundamental level. I’m now working with the same research setup, but for visual and auditory perception, such as music.”
About this neuroscience research news
Original Research: Closed access.
“A hierarchy of linguistic predictions during natural language comprehension” by Micha Heilbron et al. PNAS.
A hierarchy of linguistic predictions during natural language comprehension
Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input.
However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions.
Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions.
First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics.
Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing.
Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.