The Handbook of Speech Perception

2 Primacy of Multimodal Speech Perception for the Brain and Science

LAWRENCE D. ROSENBLUM AND JOSH DORSI

University of California, Riverside, United States

It may be argued that multimodal speech perception has become one of the most studied topics in all of cognitive psychology. A keyword search for “multimodal speech” in Google Scholar shows that over 192,000 papers citing the topic have been published since early 2005. Since that time, the seminal study on audiovisual speech, McGurk and MacDonald (1976), has been cited in publications over 4,700 times (Google Scholar search). There are likely many reasons for this explosion in multisensory speech research. Perhaps most importantly, this research has helped usher in a new understanding of the perceptual brain.

In what has been termed the “multisensory revolution” (e.g. Rosenblum, 2013), research is now showing that brain areas and perceptual behaviors long thought to serve a single sense are in fact modulated by multiple senses (e.g. Pascual‐Leone & Hamilton, 2001; Reich, Maidenbaum, & Amedi, 2012; Ricciardi et al., 2014; Rosenblum, Dias, & Dorsi, 2016; Striem‐Amit et al., 2011). This research suggests a degree of neurophysiological and behavioral flexibility across perceptual modalities not previously recognized. Because that work has been extensively reviewed in the sources just cited, it will not be rehashed here. It is relevant, however, that research on audiovisual speech perception has spearheaded this revolution. Certainly, the phenomenological power of the McGurk effect has motivated research into the apparent automaticity with which the senses integrate. Speech also provided the first example of a stimulus that could modulate an area of the human brain thought to be solely responsible for another sense: in that original report, Calvert and her colleagues (1997) showed that lip‐reading a silent face could induce activity in the auditory cortex. Since the publication of that seminal study, hundreds of other studies have shown that visible speech can induce cross‐sensory modulation of the human auditory cortex. More generally, thousands of studies have now demonstrated crossmodal modulation of primary and secondary sensory cortices in humans (for a review, see Rosenblum, Dias, & Dorsi, 2016). These studies have led to a new conception of the brain as a multisensory processing organ rather than as a collection of separate sensory processing units.

This chapter readdresses important issues in multisensory speech perception in light of the enormous amount of relevant research conducted since the first version of this chapter was published (Rosenblum, 2005). Many of the topics covered there are revisited here, including: (1) the ubiquity and automaticity of multisensory speech in human behavior; (2) the stage at which the speech streams integrate; and (3) the possibility that perception involves the detection of a modality‐neutral – or supramodal – form of information that is available in multiple streams.
