Shih an Innovator in Language Research, Teaching

Beckman Fellow Sarah Brown-Schmidt tries out the new AG500 3-D articulograph installed at linguistics researcher Chilin Shih's laboratory at the Beckman Institute. The rare electromagnetic articulometer, called EMA, is used to measure physical movements during speech production.

Chilin Shih is using cutting-edge technology and innovative teaching methods in her research into second language acquisition.

Perhaps the most common complaint of people learning to speak another language is that they know the rules and the words but still have difficulty communicating with native speakers.

Chilin Shih of the Beckman Institute is creating solutions for the problem through her research into effective methods for language acquisition. Shih, a member of the Cognitive Neuroscience group and an Associate Professor in the departments of Linguistics and East Asian Languages and Cultures, discovered a simple yet effective technique for learning to speak a second language fluently: require students to give a talk, mistakes and all, in front of a group.

That technique, perfected in a program for English speakers learning to speak Mandarin Chinese, is modeled on Toastmasters International(R), a nonprofit organization that helps people develop public speaking skills. Shih found that by giving public speeches in the language they were learning, students became more fluent in that language. The program not only benefited the students taking part but also led to research by-products such as a database for second language learning that already has an application in the form of an adaptive learning system.

Shih comes at the problem of fluency from the perspective of a linguist, but her approach and the research lines branching out from it are truly interdisciplinary. She has collaborations with researchers from speech and hearing science, psychology, computer science, and electrical and computer engineering, as well as with other linguists. Her work includes the creation of databases, computational modeling of speech, work on applications like speech synthesizers, and the use of state-of-the-art equipment like a rare articulograph she just purchased for her lab. Through all the collaborations and projects, however, Shih's research still focuses on a central theme.

"I am very interested in finding the most effective way to get a language learner to learn a new language, any aspect of it," she said. "All of us are attuned to our native language and we're very good at that from age three. Once we pass a certain age it becomes more and more difficult. It's possible to learn but it's not as natural or automatic to pick up the sounds of another language. What I'm trying to do here is to implement software using an adaptive system to help people to learn."

Shih earned a Ph.D. in linguistics from the University of California at San Diego with an emphasis on Chinese tones. What Shih didn't learn in graduate school was anything to do with speech technology, such as computational speech models or speech synthesizers, which would later play an important role in her postdoctoral career.

"Until the day I graduated it was paper and pencil," she said. "Then it took off from there; every two or three years it was like I got another Ph.D."

Shih's career path veered from business to academia to business again before settling in at the University of Illinois. After earning her Ph.D. she got a job with Bell Labs working in their renowned speech laboratory.

"When they called me to say they had a position working on a Mandarin speech synthesizer I had to ask 'what is a synthesizer?'" Shih said with a laugh. "I didn't know that word. They said 'we know how to build a synthesizer; what we don't know are Chinese tones.' I said 'I know everything you need to know about Chinese tones, that is my dissertation topic.' So I went there and it changed my life."

"Sometimes people say 'well, I'm too old to learn a language.' But we think that a combination of teaching methods will really tell us how to learn to correct accents in a more efficient way." - Chilin Shih

At Bell Labs, Shih discovered the type of interdisciplinary atmosphere she would later find at Beckman. She also found her husband, fellow Beckman researcher Richard Sproat, while working there. Shih and Sproat, an expert in language systems and computational modeling, collaborate on projects, including one for second language fluency assessment that incorporates database recordings of her students' public speeches.

"I call it the variety show, recording as they give a speech," Shih said. "Then we transcribe it, we annotate it in whatever linguistic way we can think of. Then we conduct all kinds of natural language processing: prosody analysis, word frequency analysis, word usage analysis and error pattern analysis, just to see how language learners behave differently from native speakers."
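One of the analyses Shih mentions, word frequency analysis, is simple to illustrate. The sketch below is a toy example on a made-up transcript, not the group's actual pipeline:

```python
from collections import Counter

def word_frequencies(transcript):
    """Count how often each word appears in a transcribed speech."""
    return Counter(transcript.lower().split())

# Hypothetical learner transcript; real analyses compare such counts
# against native-speaker corpora to find over- or under-used words.
learner = word_frequencies("I go to the store and I buy the book")
```

Comparing a learner's counts against a comparable native-speaker corpus is one way such a database turns "impressionistic" judgments about learner speech into measurable differences.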

Using public speaking as a teaching method for second language learning was a novel approach and recording those speeches into a database has served as a fruitful research platform.

"Current language teaching is based on impressionistic evidences of what students are doing or not doing," Shih said. "We're used to doing corpus analysis here. With corpus analysis we know that there is a big gap between people's impressions and the reality. Using a large database like this, we can actually understand exactly what happens when people are giving spontaneous speeches, rather than just reading or memorizing a textbook.

"I want (students) to pull together the grammar and vocabulary that they have learned over the previous years and to put them into coherent thoughts that will work for them to express what they want to say, rather than memorizing what the text says. That has been very successful."

The database of the students' speech recordings has helped Shih create a unique adaptive language learning system that learners can use on an iPod or computer. The software the group created adapts to each language learner's level of proficiency in order to address his or her specific needs. It focuses on the acquisition of phonemes, the differentiated sounds of a language, as the way to fluency.

Shih has found that control of intonation and timing plays an important role in acquiring phonemes and, therefore, in learning to speak a second language effectively. Shih said learning how and when to use tone correctly is an important step in becoming fluent in Chinese.

"If I am saying a short word out of context tone is essential, but in sentence context a language learner has to learn which tones are important and which tones are not important," she said. "That relates to the fluency issue; to use it appropriately is a fluency issue."

The software program reacts to the variance between a learner's speech and native speech by exaggerating tonal differences when the learner has difficulty identifying speech sounds in the second language, and decreasing the contrast as the learner approaches native speakers' ability. Shih reports that students with less than six hours of training in her group's program improved their tone recognition by eight percent on average, a much better performance than another program achieved with 40 hours of training. The program also showed the best results in controlled experiments comparing it with three other language training programs, validating the system's methods.
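The adaptive loop described above can be sketched in a few lines. This is a hypothetical illustration of the contrast-scaling idea, not Shih's actual software; the function name, step size, and bounds are all assumptions:

```python
def update_contrast(exaggeration, answered_correctly,
                    step=0.1, natural=1.0, maximum=2.0):
    """Adjust how strongly tonal contrasts are exaggerated.

    A wrong answer widens the contrast so the tone distinction is
    easier to hear; a correct answer shrinks it back toward natural
    (native-speaker) speech. The factor stays in [natural, maximum].
    """
    if answered_correctly:
        exaggeration -= step  # succeeding: move toward natural speech
    else:
        exaggeration += step  # struggling: make the contrast clearer
    return min(maximum, max(natural, exaggeration))

# Simulated session: early mistakes raise the contrast, and a run of
# correct answers brings it back down toward natural speech.
level = 1.5
for correct in [False, False, True, True, True, True]:
    level = update_contrast(level, correct)
```

The clamping matters: the system never distorts speech below the natural (native) level, and never exaggerates past the point where the cue stops sounding like language.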

Shih says successful communication between speakers at different fluency levels is an important goal in our globally interconnected world.

"You have to learn the language that is used in the community and there's no bias or discrimination as to what is a good or bad accent. You have to communicate with them no matter what," she said. "If you can talk to people, then think what happens: business will grow and technology will be transferred. But first you have to be able to talk to people."

Shih believes that communication between native and non-native speakers is a two-way street.

"When I speak English I have a lot of grammatical errors," she said. "For non-native people these errors don't bother them. But for native speakers this actually slows down the processing speed of the message that I am giving. Grammatical errors are an obstacle to communication that people have to work hard to compensate for.

"Certainly for second language learners our goal is to learn the language as well as possible. Whereas for first language listeners, maybe the job will be to educate them to accommodate more and then the communication will be much better."

Shih says her research shows that people of whatever age and background can learn to speak a second language fluently.

"It can be learned," she said. "Sometimes people say 'well I'm too old to learn a language.' But we think that a combination of teaching methods will really tell us how to learn to correct accents in a more efficient way. That's part of the goal for several research grants that we have."

Shih collaborates with several Beckman researchers on the topic of second language fluency, including Kay Bock, Brian Ross, Mark Hasegawa-Johnson, and Sproat.

"This is where we really want to pull the Beckman synergies together: combining expertise in speech technology, such as speech recognition and speech synthesis, psychology, linguistics, and education," Shih said. "We ask questions about what is happening in speakers' minds that causes them to say things in a more fluent way."

Shih's newest resource for answering those questions is a cutting-edge piece of equipment added to her lab this summer: EMA, short for electromagnetic articulometer. The Carstens AG500 3-D Articulograph is the newest model of its kind and one of very few in the world being used in a linguistics lab.

Study participants place their head inside the cube-like clear plastic device, which uses sensors placed in the mouth and six transmitter coils to digitally record movements of what are called articulators (the lower jaw, the tip and body of the tongue, and the lips) during speech production. One computer records the data while another evaluates the data and manages the system, which has capabilities that aren't possible with methods such as fMRI. With the AG500, Shih said, they are able to record physical movements at a high frame rate, render views not possible with traditional imaging methods, and acquire acoustic data of the speech movements. She expects it to get a lot of use by other researchers, as well as herself.

"We're very excited about it," Shih said. "We have many people who will use it. This is such a nice tool; it can be used to answer many different types of questions."

The EMA and the database are but two of the technological additions to Shih's research work that began with pencil and paper.

"There is a lot of technological background behind all of this work which is not obvious to people," Shih said. "We design a computer program to simulate good teaching and learning strategies; we write algorithms or condense concepts into algorithms. That's basically what it takes to get things to work. I am not a programmer but over the years I've learned a lot in an interdisciplinary environment like Bell Labs and Beckman.

"If things are happening in the atmosphere you pick it up. If you see a neighbor doing something you poke your head in and say 'show me how you are doing that,'" Shih said with a laugh. "Over the years you accumulate a tremendous amount."