If you’ve ever done pronunciation drilling with your hand cupped behind your ear like a demented kids’ TV presenter (all together now!), you’re probably of the opinion that exaggerating audio cues helps students to discriminate the sounds of a second language. But did you know that students can improve their chances of learning sounds simply by looking at a speaker’s face?
Research on how language learners use visual cues in the perception and production of sounds supports what auditory and visual sciences have known for some time: what you see has a strong impact on what you hear. This relationship is niftily illustrated through the “McGurk effect”, not the latest McDonald’s menu, but a rather compelling example of how the human sensory experience is interlinked.
More recently, with the help of fMRI scans (which basically involve sending people down a tube into a giant washing machine and having a nosy around at changes in cerebral blood flow), neuroscientists have found that the auditory cortex is active while people watch silent speech (Calvert et al. 1997). In other words, the part of the brain that usually deals with sound is affected by visual information, which helps native English speakers to identify sounds in face-to-face communication. The good news is that visual cues, it seems, can be interpreted by non-native speakers too.
Studies suggest that audiovisual training, in which students are shown a video of the speaker’s face, is more effective than listening practice alone in helping students to differentiate sounds (e.g. Hazan et al. 2005). Unsurprisingly, this technique has been shown to be most effective in sounds where the difference in mouth position is highly visible, such as b and v (a saving grace for Spanish students and their English bowel problems). However, audiovisual training has also produced positive results for problem areas with more subtle physical differences, for example in the differentiation of l and r for Japanese students. What’s more, these studies show a positive impact on pronunciation, indicating that increased attention to native speakers’ mouth movements enables students to reproduce the sounds more effectively themselves.
Most teachers agree that, out of the big four (speaking, listening, reading and writing), it’s listening which causes students a real pain in the auricle. Given that the majority of real-world listening takes place face-to-face, excluding visual components a priori is at odds with what we know about natural speech processing. In light of studies which point to the advantages of visual training, it’s time to question the dominance of audio-only files in the classroom.
- Do you do any exercises which draw students’ attention to the speaker’s mouth?
- How do you think this research could be applied in classrooms which don’t have video technology?
- Can you think of any practical ways to help students to pay attention to visual cues?
Calvert, G., Bullmore, E., Brammer, M., Campbell, R., Williams, S., McGuire, P., Woodruff, P., Iversen, S. and David, A. (1997) Activation of auditory cortex during silent lipreading. Science 276: 593–596
Hazan, V., Sennema, A., Iba, M. and Faulkner, A. (2005) Effect of audiovisual perceptual training on the perception and production of consonants by Japanese learners of English. Speech Communication 47: 360–378