If you’ve ever watched a movie and later decided to read the book it’s based on, you may have experienced auditory perceptual simulation (APS)—that is, “hearing” the voices of the actors in your head while reading. How similar is this mental simulation of voices to listening to actual voices, and what does that tell us about language processing?
Research conducted at the Beckman Institute and recently published in the journal Cognition provides answers to those questions. Those answers advance our understanding of how readers process language, which could lead to new methods for assisting struggling readers and second language learners.
Co-authors of the study, “Is Imagining a Voice Like Listening to It? Evidence from ERPs,” are Kiel Christianson, a professor of educational psychology and a member of Beckman’s Illinois Language and Literacy Initiative; Susan Garnsey, an associate professor emeritus of psychology; and Peiyun Zhou, who earned her Ph.D. in educational psychology at Illinois and is a former Beckman Institute Graduate Fellow.
The research included two event-related potential (ERP) experiments that examined whether and how APS differs from normal silent reading when processing ungrammatical sentences, and whether APS of native and non-native speech differentially affects native English speakers’ recognition of grammatical errors during reading. ERP, a noninvasive means of evaluating brain function, was used because it is a sensitive measure of the brain’s electrophysiological responses to language-related events.
“This is the first study to use ERP to examine how APS affects readers’ processing of sentences with grammatical errors,” Christianson said. “Participants were familiarized with native and non-native voices by listening to short recordings of speakers and then were asked to imagine the voice of one or the other speaker while reading sentences. When we looked at the waveforms, we found that they differed when imagining the non-native speaker’s voice compared to the native speaker’s voice. But when imagining the non-native speaker making grammatical errors, the waveforms were no different than when there were no errors made. We interpreted this to mean that people were more likely to ‘forgive’ grammatical errors in non-native speech.”
The results provide valuable insights into language processing. “ERP evidence demonstrated that imagining a voice during reading is very similar to listening,” Zhou said. “And that has practical applications in the classroom. For example, second language learners can activate APS of a fluent English speaker’s voice during reading or even writing to increase their awareness of grammatical errors.”
It makes sense, Christianson said, because listening is more natural. “Half the world’s languages don’t have writing systems, so everyone has developed listening skills,” he said. “And even those who have writing systems have been practicing listening much longer than reading. It’s much less of a foreign process to listen than to read.”
The research also reveals other valuable information, explained Christianson. For instance, imagining voices can have a big impact on the rate of reading. “When people imagine a faster speaker, their reading rate speeds up regardless of the accent they hear,” he said.
And what does it mean that readers “forgive” errors in non-native speech? “We learn that listeners and readers can be adaptive in short amounts of time so they are not continually derailed by unexpected atypicalities or errors and that certain errors don’t get in the way of message processing,” Christianson said.
Future research will examine how APS impacts the language processing patterns and comprehension of developing readers. While participants in the recently published study were university students, Zhou will conduct research with sixth to eighth graders next. “We will compare APS to reading aloud and will examine ways to optimize the APS paradigm to maximally improve reading speed and comprehension accuracy,” she said.