REFERENCES

1 Alsius, A., Navarra, J., Campbell, R., & Soto‐Faraco, S. (2005). Audiovisual integration of speech falters under high attention demands. Current Biology, 15(9), 839–843.

2 Alsius, A., Navarra, J., & Soto‐Faraco, S. (2007). Attention to touch weakens audiovisual speech integration. Experimental Brain Research, 183(3), 399–404.

3 Alsius, A., Paré, M., & Munhall, K. G. (2017). Forty years after Hearing lips and seeing voices: The McGurk effect revisited. Multisensory Research, 31(1–2), 111–144.

4 Altieri, N., Pisoni, D. B., & Townsend, J. T. (2011). Some behavioral and neurobiological constraints on theories of audiovisual speech integration: A review and suggestions for new directions. Seeing and Perceiving, 24(6), 513–539.

5 Andersen, T. S., Tiippana, K., Laarni, J., et al. (2009). The role of visual spatial attention in audiovisual speech perception. Speech Communication, 51(2), 184–193.

6 Arnal, L. H., Morillon, B., Kell, C. A., & Giraud, A. L. (2009). Dual neural routing of visual facilitation in speech processing. Journal of Neuroscience, 29(43), 13445–13453.

7 Arnold, P., & Hill, F. (2001). Bisensory augmentation: A speechreading advantage when speech is clearly audible and intact. British Journal of Psychology, 92(2), 339–355.

8 Auer, E. T., Bernstein, L. E., Sungkarat, W., & Singh, M. (2007). Vibrotactile activation of the auditory cortices in deaf versus hearing adults. Neuroreport, 18(7), 645–648.

9 Barker, J. P., & Berthommier, F. (1999). Evidence of correlation between acoustic and visual features of speech. In J. J. Ohala, Y. Hasegawa, M. Ohala, et al. (Eds), Proceedings of the XIVth International Congress of Phonetic Sciences (pp. 5–9). Berkeley: University of California.

10 Barutchu, A., Crewther, S. G., Kiely, P., et al. (2008). When /b/ill with /g/ill becomes /d/ill: Evidence for a lexical effect in audiovisual speech perception. European Journal of Cognitive Psychology, 20(1), 1–11.

11 Baum, S., Martin, R. C., Hamilton, A. C., & Beauchamp, M. S. (2012). Multisensory speech perception without the left superior temporal sulcus. Neuroimage, 62(3), 1825–1832.

12 Baum, S. H., & Beauchamp, M. S. (2014). Greater BOLD variability in older compared with younger adults during audiovisual speech perception. PLOS ONE, 9(10), 1–10.

13 Beauchamp, M. S., Nath, A. R., & Pasalar, S. (2010). fMRI‐guided transcranial magnetic stimulation reveals that the superior temporal sulcus is a cortical locus of the McGurk effect. Journal of Neuroscience, 30(7), 2414–2417.

14 Bernstein, L. E., Auer, E. T., Jr., Eberhardt, S. P., & Jiang, J. (2013). Auditory perceptual learning for speech perception can be enhanced by audiovisual training. Frontiers in Neuroscience, 7, 1–16.

15 Bernstein, L. E., Auer, E. T., Jr., & Moore, J. K. (2004). Convergence or association? In G. A. Calvert, C. Spence, & B. E. Stein (Eds), Handbook of multisensory processes (pp. 203–220). Cambridge, MA: MIT Press.

16 Bernstein, L. E., Auer, E. T., Jr., & Takayanagi, S. (2004). Auditory speech detection in noise enhanced by lipreading. Speech Communication, 44(1–4), 5–18.

17 Bernstein, L. E., Eberhardt, S. P., & Auer, E. T. (2014). Audiovisual spoken word training can promote or impede auditory‐only perceptual learning: Prelingually deafened adults with late‐acquired cochlear implants versus normal hearing adults. Frontiers in Psychology, 5, 1–20.

18 Bernstein, L. E., Jiang, J., Pantazis, D., et al. (2011). Visual phonetic processing localized using speech and nonspeech face gestures in video and point‐light displays. Human Brain Mapping, 32(10), 1660–1676.

19 Bertelson, P., & de Gelder, B. (2004). The psychology of multi‐sensory perception. In C. Spence & J. Driver (Eds), Crossmodal space and crossmodal attention (pp. 141–177). Oxford: Oxford University Press.

20 Bertelson, P., Vroomen, J., Wiegeraad, G., & de Gelder, B. (1994). Exploring the relation between McGurk interference and ventriloquism. In Proceedings of the Third International Congress on Spoken Language Processing (pp. 559–562). Yokohama: Acoustical Society of Japan.

21 Besle, J., Fort, A., Delpuech, C., & Giard, M. H. (2004). Bimodal speech: Early suppressive visual effects in human auditory cortex. European Journal of Neuroscience, 20(8), 2225–2234.

22 Besle, J., Fischer, C., Bidet‐Caulet, A., et al. (2008). Visual activation and audiovisual interactions in the auditory cortex during speech perception: Intracranial recordings in humans. Journal of Neuroscience, 28, 14301–14310.

23 Bishop, C. W., & Miller, L. M. (2011). Speech cues contribute to audiovisual spatial integration. PLOS ONE, 6(8), e24016.

24 Borrie, S. A., McAuliffe, M. J., Liss, J. M., et al. (2013). The role of linguistic and indexical information in improved recognition of dysarthric speech. Journal of the Acoustical Society of America, 133(1), 474–482.

25 Brancazio, L. (2004). Lexical influences in audiovisual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 30(3), 445–463.

26 Brancazio, L., Best, C. T., & Fowler, C. A. (2006). Visual influences on perception of speech and nonspeech vocal‐tract events. Language and Speech, 49(1), 21–53.

27 Brancazio, L., & Miller, J. L. (2005). Use of visual information in speech perception: Evidence for a visual rate effect both with and without a McGurk effect. Attention, Perception, & Psychophysics, 67(5), 759–769.

28 Brancazio, L., Miller, J. L., & Paré, M. A. (2003). Visual influences on the internal structure of phonetic categories. Perception & Psychophysics, 65(4), 591–601.

29 Brown, V., Hedayati, M., Zanger, A., et al. (2018). What accounts for individual differences in susceptibility to the McGurk effect? PLOS ONE, 13(11), e0207160.

30 Burnham, D., Ciocca, V., Lauw, C., et al. (2000). Perception of visual information for Cantonese tones. In M. Barlow & P. Rose (Eds), Proceedings of the Eighth Australian International Conference on Speech Science and Technology (pp. 86–91). Canberra: Australian Speech Science and Technology Association.

31 Burnham, D. K., & Dodd, B. (2004). Auditory–visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45(4), 204–220.

32 Callan, D. E., Callan, A. M., Kroos, C., & Vatikiotis‐Bateson, E. (2001). Multimodal contribution to speech perception revealed by independent component analysis: A single‐sweep EEG case study. Cognitive Brain Research, 10(3), 349–353.

33 Callan, D. E., Jones, J. A., & Callan, A. (2014). Multisensory and modality specific processing of visual speech in different regions of the premotor cortex. Frontiers in Psychology, 5, 389.

34 Callan, D. E., Jones, J. A., Callan, A. M., & Akahane‐Yamada, R. (2004). Phonetic perceptual identification by native‐ and second‐language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory–auditory/orosensory internal models. NeuroImage, 22(3), 1182–1194.

35 Callan, D. E., Jones, J. A., Munhall, K., et al. (2003). Neural processes underlying perceptual enhancement by visual speech gestures. Neuroreport, 14(17), 2213–2218.

36 Calvert, G. A., Bullmore, E. T., Brammer, M. J., et al. (1997). Activation of auditory cortex during silent lipreading. Science, 276(5312), 593–596.

37 Campbell, R. (2011). Speechreading: What’s missing. In A. Calder (Ed.), Oxford handbook of face perception (pp. 605–630). Oxford: Oxford University Press.

38 Chandrasekaran, C., Trubanova, A., Stillittano, S., et al. (2009). The natural statistics of audiovisual speech. PLOS Computational Biology, 5(7), 1–18.

39 Cienkowski, K. M., & Carney, A. E. (2002). Auditory–visual speech perception and aging. Ear and Hearing, 23, 439–449.

40 Colin, C., Radeau, M., Deltenre, P., et al. (2002). The role of sound intensity and stop‐consonant voicing on McGurk fusions and combinations. European Journal of Cognitive Psychology, 14, 475–491.

41 Connine, C. M., & Clifton, C., Jr. (1987). Interactive use of lexical information in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 13(2), 291–299.

42 Danielson, D. K., Bruderer, A. G., Kandhadai, P., et al. (2017). The organization and reorganization of audiovisual speech perception in the first year of life. Cognitive Development, 42, 37–48.

43 D’Ausilio, A., Bartoli, E., Maffongelli, L., et al. (2014). Vision of tongue movements bias auditory speech perception. Neuropsychologia, 63, 85–91.

44 Delvaux, V., Huet, K., Piccaluga, M., & Harmegnies, B. (2018). The perception of anticipatory labial coarticulation by blind listeners in noise: A comparison with sighted listeners in audio‐only, visual‐only and audiovisual conditions. Journal of Phonetics, 67, 65–77.

45 Derrick, D., & Gick, B. (2013). Aerotactile integration from distal skin stimuli. Multisensory Research, 26(5), 405–416.

46 Desjardins, R. N., & Werker, J. F. (2004). Is the integration of heard and seen speech mandatory for infants? Developmental Psychobiology, 45(4), 187–203.

47 Dias, J. W., & Rosenblum, L. D. (2011). Visual influences on interactive speech alignment. Perception, 40, 1457–1466.

48 Dias, J. W., & Rosenblum, L. D. (2016). Visibility of speech articulation enhances auditory phonetic convergence. Attention, Perception, & Psychophysics, 78, 317–333.

49 Diehl, R. L., & Kluender, K. R. (1989). On the objects of speech perception. Ecological Psychology, 1, 121–144.

50 Dorsi, J., Rosenblum, L. D., Dias, J. W., & Ashkar, D. (2016). Can audio‐haptic speech be used to train better auditory speech perception? Journal of the Acoustical Society of America, 139(4), 2016–2017.

51 Dorsi, J., Rosenblum, L. D., & Ostrand, R. (2017). What you see isn’t always what you get, or is it? Reexamining semantic priming from McGurk stimuli. Poster presented at the 58th Meeting of the Psychonomic Society, Vancouver, Canada, November 10.

52 Eberhardt, S. P., Auer, E. T., & Bernstein, L. E. (2014). Multisensory training can promote or impede visual perceptual learning of speech stimuli: Visual‐tactile vs. visual‐auditory training. Frontiers in Human Neuroscience, 8, 1–23.

53 Eskelund, K., MacDonald, E. N., & Andersen, T. S. (2015). Face configuration affects speech perception: Evidence from a McGurk mismatch negativity study. Neuropsychologia, 66, 48–54.

54 Eskelund, K., Tuomainen, J., & Andersen, T. S. (2011). Multistage audiovisual integration of speech: Dissociating identification and detection. Experimental Brain Research, 208(3), 447–457.

55 Fingelkurts, A. A., Fingelkurts, A. A., Krause, C. M., et al. (2003). Cortical operational synchrony during audio–visual speech integration. Brain and Language, 85(2), 297–312.

56 Fowler, C. A. (1986). An event approach to the study of speech perception from a direct‐realist perspective. Journal of Phonetics, 14, 3–28.

57 Fowler, C. A. (2004). Speech as a supramodal or amodal phenomenon. In G. Calvert, C. Spence, & B. E. Stein (Eds), Handbook of multisensory processes (pp. 189–201). Cambridge, MA: MIT Press.

58 Fowler, C. A. (2010). Embodied, embedded language use. Ecological Psychology, 22(4), 286–303.

59 Fowler, C. A., Brown, J. M., & Mann, V. A. (2000). Contrast effects do not underlie effects of preceding liquids on stop‐consonant identification by humans. Journal of Experimental Psychology: Human Perception and Performance, 26(3), 877–888.

60 Fowler, C. A., & Dekle, D. J. (1991). Listening with eye and hand: Cross‐modal contributions to speech perception. Journal of Experimental Psychology: Human Perception and Performance, 17(3), 816–828.

61 Fuster‐Duran, A. (1996). Perception of conflicting audio‐visual speech: An examination across Spanish and German. In D. G. Stork & M. E. Hennecke (Eds), Speechreading by humans and machines (pp. 135–143). Berlin: Springer.

62 Ganong, W. F. (1980). Phonetic categorization in auditory word perception. Journal of Experimental Psychology: Human Perception and Performance, 6(1), 110–125.

63 Gentilucci, M., & Cattaneo, L. (2005). Automatic audiovisual integration in speech perception. Experimental Brain Research, 167(1), 66–75.

64 Ghazanfar, A. A., Maier, J. X., Hoffman, K. L., & Logothetis, N. K. (2005). Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. Journal of Neuroscience, 25(20), 5004–5012.

65 Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.

66 Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

67 Gick, B., & Derrick, D. (2009). Aero‐tactile integration in speech perception. Nature, 462(7272), 502–504.

68 Gick, B., Jóhannsdóttir, K. M., Gibraiel, D., & Mühlbauer, J. (2008). Tactile enhancement of auditory and visual speech perception in untrained perceivers. Journal of the Acoustical Society of America, 123(4), EL72–EL76.

69 Gordon, P. C. (1997). Coherence masking protection in speech sounds: The role of formant synchrony. Perception & Psychophysics, 59, 232–242.

70 Grant, K. W. (2001). The effect of speechreading on masked detection thresholds for filtered speech. Journal of the Acoustical Society of America, 109(5), 2272–2275.

71 Grant, K. W., & Seitz, P. F. (1998). Measures of auditory‐visual integration in nonsense syllables and sentences. Journal of the Acoustical Society of America, 104, 2438–2450.

72 Grant, K. W., & Seitz, P. F. (2000). The use of visible speech cues for improving auditory detection of spoken sentences. Journal of the Acoustical Society of America, 108(3), 1197–1208.

73 Green, K. P., & Gerdeman, A. (1995). Cross‐modal discrepancies in coarticulation and the integration of speech information: The McGurk effect with mismatched vowel. Journal of Experimental Psychology: Human Perception and Performance, 21, 1409–1426.

74 Green, K. P., & Kuhl, P. K. (1989). The role of visual information in the processing of place and manner features in speech perception. Perception & Psychophysics, 45(1), 34–42.

75 Green, K. P., & Kuhl, P. K. (1991). Integral processing of visual place and auditory voicing information during phonetic perception. Journal of Experimental Psychology: Human Perception and Performance, 17, 278–288.

76 Green, K. P., Kuhl, P. K., Meltzoff, A. N., & Stevens, E. B. (1991). Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect. Perception & Psychophysics, 50(6), 524–536.

77 Green, K. P., & Miller, J. L. (1985). On the role of visual rate information in phonetic perception. Perception & Psychophysics, 38(3), 269–276.

78 Green, K. P., & Norrix, L. W. (2001). Perception of /r/ and /l/ in a stop cluster: Evidence of cross‐modal context effects. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 166–177.

79 Hall, D. A., Fussell, C., & Summerfield, A. Q. (2005). Reading fluent speech from talking faces: Typical brain networks and individual differences. Journal of Cognitive Neuroscience, 17(6), 939–953.

80 Han, Y., Goudbeek, M., Mos, M., & Swerts, M. (2018). Effects of modality and speaking style on Mandarin tone identification by non‐native listeners. Phonetica, 76(4), 263–286.

81 Hardison, D. M. (2005). Variability in bimodal spoken language processing by native and nonnative speakers of English: A closer look at effects of speech style. Speech Communication, 46, 73–93.

82 Hazan, V., Sennema, A., Iba, M., & Faulkner, A. (2005). Effect of audiovisual perceptual training on the perception and production of consonants by Japanese learners of English. Speech Communication, 47(3), 360–378.

83 Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. (2009). Time course of early audiovisual interactions during speech and nonspeech central auditory processing: A magnetoencephalography study. Journal of Cognitive Neuroscience, 21(2), 259–274.

84 Hessler, D., Jonkers, R., Stowe, L., & Bastiaanse, R. (2013). The whole is more than the sum of its parts: Audiovisual processing of phonemes investigated with ERPs. Brain and Language, 124, 213–224.

85 Hickok, G. (2009). Eight problems for the mirror neuron theory of action understanding in monkeys and humans. Journal of Cognitive Neuroscience, 21(7), 1229–1243.

86 Irwin, J., & DiBlasi, L. (2017). Audiovisual speech perception: A new approach and implications for clinical populations. Language and Linguistics Compass, 11(3), 77–91.

87 Irwin, J. R., Frost, S. J., Mencl, W. E., et al. (2011). Functional activation for imitation of seen and heard speech. Journal of Neurolinguistics, 24(6), 611–618.

88 Ito, T., Tiede, M., & Ostry, D. J. (2009). Somatosensory function in speech perception. Proceedings of the National Academy of Sciences of the United States of America, 106(4), 1245–1248.

89 Jerger, S., Damian, M. F., Tye‐Murray, N., & Abdi, H. (2014). Children use visual speech to compensate for non‐intact auditory speech. Journal of Experimental Child Psychology, 126, 295–312.

90 Jerger, S., Damian, M. F., Tye‐Murray, N., & Abdi, H. (2017). Children perceive speech onsets by ear and eye. Journal of Child Language, 44(1), 185–215.

91 Jesse, A., & Bartoli, M. (2018). Learning to recognize unfamiliar talkers: Listeners rapidly form representations of facial dynamic signatures. Cognition, 176, 195–208.

92 Jiang, J., Alwan, A., Keating, P., et al. (2002). On the relationship between facial movements, tongue movements, and speech acoustics. EURASIP Journal on Applied Signal Processing, 11, 1174–1178.

93 Jiang, J., Auer, E. T., Alwan, A., et al. (2007). Similarity structure in visual speech perception and optical phonetic signals. Perception & Psychophysics, 69(7), 1070–1083.

94 Katz, W. F., & Mehta, S. (2015). Visual feedback of tongue movement for novel speech sound learning. Frontiers in Human Neuroscience, 9, 612.

95 Kawase, T., Sakamoto, S., Hori, Y., et al. (2009). Bimodal audio–visual training enhances auditory adaptation process. NeuroReport, 20, 1231–1234.

96 Kim, J., & Davis, C. (2004). Investigating the audio–visual speech detection advantage. Speech Communication, 44(1), 19–30.

97 Lachs, L., & Pisoni, D. B. (2004). Specification of cross‐modal source information in isolated kinematic displays of speech. Journal of the Acoustical Society of America, 116(1), 507–518.

98 Lander, K., & Davies, R. (2008). Does face familiarity influence speechreadability? Quarterly Journal of Experimental Psychology, 61, 961–967.

99 Lidestam, B., Moradi, S., Pettersson, R., & Ricklefs, T. (2014). Audiovisual training is better than auditory‐only training for auditory‐only speech‐in‐noise identification. Journal of the Acoustical Society of America, 136(2), EL142–EL147.

100 Ma, W. J., Zhou, X., Ross, L. A., et al. (2009). Lip‐reading aids word recognition most in moderate noise: A Bayesian explanation using high‐dimensional feature space. PLOS ONE, 4(3), 1–14.

101 Magnotti, J. F., & Beauchamp, M. S. (2017). A causal inference model explains perception of the McGurk effect and other incongruent audiovisual speech. PLOS Computational Biology, 13(2), e1005229.

102 Massaro, D. W. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, NJ: Lawrence Erlbaum.

103 Massaro, D. W., Cohen, M. M., Gesi, A., et al. (1993). Bimodal speech perception: An examination across languages. Journal of Phonetics, 21, 445–478.

104 Massaro, D. W., & Ferguson, E. L. (1993). Cognitive style and perception: The relationship between category width and speech perception, categorization, and discrimination. American Journal of Psychology, 106(1), 25–49.

105 Massaro, D. W., Thompson, L. A., Barron, B., & Laron, E. (1986). Developmental changes in visual and auditory contributions to speech perception. Journal of Experimental Child Psychology, 41, 93–113.

106 Matchin, W., Groulx, K., & Hickok, G. (2014). Audiovisual speech integration does not rely on the motor system: Evidence from articulatory suppression, the McGurk effect, and fMRI. Journal of Cognitive Neuroscience, 26(3), 606–620.

107 McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.

108 Ménard, L., Cathiard, M. A., Troille, E., & Giroux, M. (2015). Effects of congenital visual deprivation on the auditory perception of anticipatory labial coarticulation. Folia Phoniatrica et Logopaedica, 67(2), 83–89.

109 Ménard, L., Dupont, S., Baum, S. R., & Aubin, J. (2009). Production and perception of French vowels by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 126(3), 1406–1414.

110 Ménard, L., Leclerc, A., & Tiede, M. (2014). Articulatory and acoustic correlates of contrastive focus in congenitally blind adults and sighted adults. Journal of Speech, Language, and Hearing Research, 57(3), 793–804.

111 Ménard, L., Toupin, C., Baum, S. R., et al. (2013). Acoustic and articulatory analysis of French vowels produced by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 134(4), 2975–2987.

112 Miller, B. T., & D’Esposito, M. (2005). Searching for “the top” in top‐down control. Neuron, 48(4), 535–538.

113 Miller, R., Sanchez, K., & Rosenblum, L. (2010). Alignment to visual speech information. Attention, Perception, & Psychophysics, 72(6), 1614–1625.

114 Mitterer, H., & Reinisch, E. (2017). Visual speech influences speech perception immediately but not automatically. Attention, Perception, & Psychophysics, 79(2), 660–678.

115 Moradi, S., Lidestam, B., Ng, E. H. N., et al. (2019). Perceptual doping: An audiovisual facilitation effect on auditory speech processing, from phonetic feature extraction to sentence identification in noise. Ear and Hearing, 40(2), 312–327.

116 Munhall, K. G., Ten Hove, M. W., Brammer, M., & Paré, M. (2009). Audiovisual integration of speech in a bistable illusion. Current Biology, 19(9), 735–739.

117 Munhall, K. G., & Vatikiotis‐Bateson, E. (2004). Spatial and temporal constraints on audiovisual speech perception. In G. A. Calvert, C. Spence, & B. E. Stein (Eds), Handbook of multisensory processes (pp. 177–188). Cambridge, MA: MIT Press.

118 Münte, T. F., Stadler, J., Tempelmann, C., & Szycik, G. R. (2012). Examining the McGurk illusion using high‐field 7 Tesla functional MRI. Frontiers in Human Neuroscience, 6, 95.

119 Musacchia, G., Sams, M., Nicol, T., & Kraus, N. (2006). Seeing speech affects acoustic information processing in the human brainstem. Experimental Brain Research, 168(1–2), 1–10.

120 Namasivayam, A. K., Wong, W. Y. S., Sharma, D., & van Lieshout, P. (2015). Visual speech gestures modulate efferent auditory system. Journal of Integrative Neuroscience, 14(1), 73–83.

121 Nath, A. R., & Beauchamp, M. S. (2012). A neural basis for interindividual differences in the McGurk effect: A multisensory speech illusion. NeuroImage, 59(1), 781–787.

122 Navarra, J., & Soto‐Faraco, S. (2007). Hearing lips in a second language: Visual articulatory information enables the perception of second language sounds. Psychological Research, 71, 4–12.

123 Nishitani, N., & Hari, R. (2002). Viewing lip forms: Cortical dynamics. Neuron, 36(6), 1211–1220.

124 Nygaard, L. C. (2005). The integration of linguistic and non‐linguistic properties of speech. In D. Pisoni & R. Remez (Eds), Handbook of speech perception (pp. 390–414). Oxford: Blackwell.

125 Olson, I. R., Gatenby, J., & Gore, J. C. (2002). A comparison of bound and unbound audio–visual information processing in the human cerebral cortex. Cognitive Brain Research, 14, 129–138.

126 Ostrand, R., Blumstein, S. E., Ferreira, V. S., & Morgan, J. L. (2016). What you see isn’t always what you get: Auditory word signals trump consciously perceived words in lexical access. Cognition, 151, 96–107.

127 Palmer, T. D., & Ramsey, A. K. (2012). The function of consciousness in multisensory integration. Cognition, 125(3), 353–364.

128 Papale, P., Chiesi, L., Rampinini, A. C., et al. (2016). When neuroscience “touches” architecture: From hapticity to a supramodal functioning of the human brain. Frontiers in Psychology, 7, 866.

129 Pardo, J. S. (2006). On phonetic convergence during conversational interaction. Journal of the Acoustical Society of America, 119(4), 2382–2393.

130 Pardo, J. S., Gibbons, R., Suppes, A., & Krauss, R. M. (2012). Phonetic convergence in college roommates. Journal of Phonetics, 40(1), 190–197.

131 Pardo, J. S., Jordan, K., Mallari, R., et al. (2013). Phonetic convergence in shadowed speech: The relation between acoustic and perceptual measures. Journal of Memory and Language, 69(3), 183–195.

132 Pardo, J. S., Urmanche, A., Wilman, S., & Wiener, J. (2017). Phonetic convergence across multiple measures and model talkers. Attention, Perception, & Psychophysics, 79(2), 637–659.

133 Pascual‐Leone, A., & Hamilton, R. (2001). The metamodal organization of the brain. Progress in Brain Research, 134, 427–445.

134 Paulesu, E., Perani, D., Blasi, V., et al. (2003). A functional‐anatomical model for lipreading. Journal of Neurophysiology, 90(3), 2005–2013.

135 Pekkola, J., Ojanen, V., Autti, T., et al. (2005). Primary auditory cortex activation by visual speech: An fMRI study at 3 T. Neuroreport, 16(2), 125–128.

136 Pilling, M., & Thomas, S. (2011). Audiovisual cues and perceptual learning of spectrally distorted speech. Language and Speech, 54(4), 487–497.

137 Plass, J., Guzman‐Martinez, E., Ortega, L., et al. (2014). Lip reading without awareness. Psychological Science, 25(9), 1835–1837.

138 Reich, L., Maidenbaum, S., & Amedi, A. (2012). The brain as a flexible task machine: Implications for visual rehabilitation using noninvasive vs. invasive approaches. Current Opinion in Neurobiology, 25(1), 86–95.

139 Reisberg, D., McLean, J., & Goldfield, A. (1987). Easy to hear but hard to understand: A lip‐reading advantage with intact auditory stimuli. In B. Dodd & R. Campbell (Eds), Hearing by eye: The psychology of lip‐reading (pp. 97–113). Hillsdale, NJ: Lawrence Erlbaum.

140 Remez, R. E., Beltrone, L. H., & Willimetz, A. A. (2017). Effects of intrinsic temporal distortion on the multimodal perceptual organization of speech. Paper presented at the 58th Annual Meeting of the Psychonomic Society, Vancouver, British Columbia, November.

141 Remez, R. E., Fellowes, J. M., & Rubin, P. E. (1997). Talker identification based on phonetic information. Journal of Experimental Psychology: Human Perception and Performance, 23(3), 651–666.

142 Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neuroscience & Biobehavioral Reviews, 41, 64–77.

143 Riedel, P., Ragert, P., Schelinski, S., et al. (2015). Visual face‐movement sensitive cortex is relevant for auditory‐only speech recognition. Cortex, 68, 86–99.

144 Rosen, S. M., Fourcin, A. J., & Moore, B. C. (1981). Voice pitch as an aid to lipreading. Nature, 291(5811), 150–152.

145 Rosenblum, L. D. (2005). Primacy of multimodal speech perception. In D. Pisoni & R. Remez (Eds), Handbook of speech perception (pp. 51–78). Oxford: Blackwell.

146 Rosenblum, L. D. (2008). Speech perception as a multimodal phenomenon. Current Directions in Psychological Science, 17(6), 405–409.

147 Rosenblum, L. D. (2013). A confederacy of senses. Scientific American, 308, 72–75.

148 Rosenblum, L. D. (2019). Audiovisual speech perception and the McGurk effect. In Oxford research encyclopedia of linguistics. https://oxfordre.com/linguistics/view/10.1093/acrefore/9780199384655.001.0001/acrefore‐9780199384655‐e‐420?rskey=L7JvON&result=1

149 Rosenblum, L. D., Dias, J. W., & Dorsi, J. (2017). The supramodal brain: Implications for auditory perception. Journal of Cognitive Psychology, 29(1), 65–87.

150 Rosenblum, L. D., Dorsi, J., & Dias, J. W. (2016). The impact and status of Carol Fowler’s supramodal theory of multisensory speech perception. Ecological Psychology, 28, 262–294.

151 Rosenblum, L. D., Miller, R. M., & Sanchez, K. (2007). Lip‐read me now, hear me better later: Cross‐modal transfer of talker‐familiarity effects. Psychological Science, 18(5), 392–396.

152 Rosenblum, L. D., & Saldaña, H. M. (1992). Discrimination tests of visually‐influenced syllables. Perception & Psychophysics, 52(4), 461–473.

153 Rosenblum, L. D., & Saldaña, H. M. (1996). An audiovisual test of kinematic primitives for visual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 22(2), 318–331.

154 Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59(3), 347–357.

155 Rosenblum, L. D., Yakel, D. A., Baseer, N., et al. (2002). Visual speech information for face recognition. Perception & Psychophysics, 64(2), 220–229.

156 Rosenblum, L. D., Yakel, D. A., & Green, K. G. (2000). Face and mouth inversion effects on visual and audiovisual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 26(3), 806–819.

157 Sams, M., Manninen, P., Surakka, V., et al. (1998). McGurk effect in Finnish syllables, isolated words, and words in sentences: Effects of word meaning and sentence context. Speech Communication, 26(1–2), 75–87.

158 Sanchez, K., Dias, J. W., & Rosenblum, L. D. (2013). Experience with a talker can transfer across modalities to facilitate lipreading. Attention, Perception & Psychophysics, 75, 1359–1365.

159 Sanchez, K., Miller, R. M., & Rosenblum, L. D. (2010). Visual influences on alignment to voice onset time. Journal of Speech, Language, and Hearing Research, 53, 262–272.

160 Santi, A., Servos, P., Vatikiotis‐Bateson, E., et al. (2003). Perceiving biological motion: Dissociating visible speech from walking. Journal of Cognitive Neuroscience, 15(6), 800–809.

161 Sato, M., Buccino, G., Gentilucci, M., & Cattaneo, L. (2010). On the tip of the tongue: Modulation of the primary motor cortex during audiovisual speech perception. Speech Communication, 52(6), 533–541.

162 Sato, M., Cavé, C., Ménard, L., & Brasseur, A. (2010). Auditory‐tactile speech perception in congenitally blind and sighted adults. Neuropsychologia, 48(12), 3683–3686.

163 Schall, S., & von Kriegstein, K. (2014). Functional connectivity between face‐movement and speech‐intelligibility areas during auditory‐only speech perception. PLOS ONE, 9(1), 1–11.

164 Schelinski, S., Riedel, P., & von Kriegstein, K. (2014). Visual abilities are important for auditory‐only speech recognition: Evidence from autism spectrum disorder. Neuropsychologia, 65, 1–11.

165 Schwartz, J. L., Berthommier, F., & Savariaux, C. (2004). Seeing to hear better: Evidence for early audio‐visual interactions in speech identification. Cognition, 93(2), B69–B78.

166 Schweinberger, S. R., & Soukup, G. R. (1998). Asymmetric relationships among perceptions of facial identity, emotion, and facial speech. Journal of Experimental Psychology: Human Perception and Performance, 24, 1748–1765.

167 Sekiyama, K., & Tohkura, Y. (1991). McGurk effect in non‐English listeners: Few visual effects for Japanese subjects hearing Japanese syllables of high auditory intelligibility. Journal of the Acoustical Society of America, 90(4), 1797–1805.

168 Sekiyama, K., & Tohkura, Y. (1993). Inter‐language differences in the influence of visual cues in speech perception. Journal of Phonetics, 21(4), 427–444.

169 Shams, L., Iwaki, S., Chawla, A., & Bhattacharya, J. (2005). Early modulation of visual cortex by sound: An MEG study. Neuroscience Letters, 378(2), 76–81.

170 Shams, L., Wozny, D. R., Kim, R., & Seitz, A. (2011). Influences of multisensory experience on subsequent unisensory processing. Frontiers in Psychology, 2, 264.

171 Shahin, A. J., Backer, K. C., Rosenblum, L. D., & Kerlin, J. R. (2018). Neural mechanisms underlying cross‐modal phonetic encoding. Journal of Neuroscience, 38(7), 1835–1849.

172 Sheffert, S. M., Pisoni, D. B., Fellowes, J. M., & Remez, R. E. (2002). Learning to recognize talkers from natural, sinewave, and reversed speech samples. Journal of Experimental Psychology: Human Perception and Performance, 28(6), 1447–1469.

173 Simmons, D. C., Dias, J. W., Dorsi, J., & Rosenblum, L. D. (2015). Crossmodal transfer of talker learning. Poster presented at the 169th Meeting of the Acoustical Society of America, Pittsburgh, Pennsylvania, May.

174 Skipper, J. I., Nusbaum, H. C., & Small, S. L. (2005). Listening to talking faces: Motor cortical activation during speech perception. NeuroImage, 25(1), 76–89.

175 Skipper, J. I., van Wassenhove, V., Nusbaum, H. C., & Small, S. L. (2007). Hearing lips and seeing voices: How cortical areas supporting speech production mediate audiovisual speech perception. Cerebral Cortex, 17(10), 2387–2399.

176 Soto‐Faraco, S., & Alsius, A. (2007). Conscious access to the unisensory components of a crossmodal illusion. NeuroReport, 18, 347–350.

177 Soto‐Faraco, S., & Alsius, A. (2009). Deconstructing the McGurk–MacDonald illusion. Journal of Experimental Psychology: Human Perception and Performance, 35, 580–587.

178 Stoffregen, T. A., & Bardy, B. G. (2001). On specification and the senses. Behavioral and Brain Sciences, 24(2), 195–213.

179 Strand, J., Cooperman, A., Rowe, J., & Simenstad, A. (2014). Individual differences in susceptibility to the McGurk effect: Links with lipreading and detecting audiovisual incongruity. Journal of Speech, Language, and Hearing Research, 57, 2322–2331.

180 Striem‐Amit, E., Dakwar, O., Hertz, U., et al. (2011). The neural network of sensory‐substitution object shape recognition. Functional Neurology, Rehabilitation, and Ergonomics, 1(2), 271–278.

181 Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26(2), 212–215.

182 Summerfield, Q. (1987). Some preliminaries to a comprehensive account of audio‐visual speech perception. In B. Dodd & R. Campbell (Eds), Hearing by eye: The psychology of lip‐reading (pp. 53–83). London: Lawrence Erlbaum.

183 Summerfield, Q., & McGrath, M. (1984). Detection and resolution of audiovisual incompatibility in the perception of vowels. Quarterly Journal of Experimental Psychology, 36A, 51–74.

184 Sundara, M., Namasivayam, A. K., & Chen, R. (2001). Observation‐execution matching system for speech: A magnetic stimulation study. NeuroReport, 12(7), 1341–1344.

185 Swaminathan, S., MacSweeney, M., Boyles, R., et al. (2013). Motor excitability during visual perception of known and unknown spoken languages. Brain and Language, 126(1), 1–7.

186 Teinonen, T., Aslin, R. N., Alku, P., & Csibra, G. (2008). Visual speech contributes to phonetic learning in 6‐month‐old infants. Cognition, 108(3), 850–855.

187 Thomas, S. M., & Jordan, T. R. (2002). Determining the influence of Gaussian blurring on inversion effects with talking faces. Perception & Psychophysics, 64, 932–944.

188 Tiippana, K. (2014). What is the McGurk effect? Frontiers in Psychology, 5, 725–728.

189 Tiippana, K., Andersen, T. S., & Sams, M. (2004). Visual attention modulates audiovisual speech perception. European Journal of Cognitive Psychology, 16(3), 457–472.

190 Treille, A., Cordeboeuf, C., Vilain, C., & Sato, M. (2014). Haptic and visual information speed up the neural processing of auditory speech in live dyadic interactions. Neuropsychologia, 57(1), 71–77.

191 Treille, A., Vilain, C., & Sato, M. (2014). The sound of your lips: Electrophysiological cross‐modal interactions during hand‐to‐face and face‐to‐face speech perception. Frontiers in Psychology, 5, 1–8.

192 Turner, T. H., Fridriksson, J., Baker, J., et al. (2009). Obligatory Broca’s area modulation associated with passive speech perception. Neuroreport, 20(5), 492–496.

193 Uno, T., Kawai, K., Sakai, K., et al. (2015). Dissociated roles of the inferior frontal gyrus and superior temporal sulcus in audiovisual processing: Top‐down and bottom‐up mismatch detection. PLOS ONE, 10(3).

194 van de Rijt, L. P. H., van Opstal, A. J., Mylanus, E. A. M., et al. (2016). Temporal cortex activation to audiovisual speech in normal‐hearing and cochlear implant users measured with functional near‐infrared spectroscopy. Frontiers in Human Neuroscience, 10, 1–14.

195 Van Engen, K. J., Xie, Z., & Chandrasekaran, B. (2016). Audiovisual sentence recognition is not predicted by susceptibility to the McGurk effect. Attention, Perception, & Psychophysics, 79, 396–403.

196 van Wassenhove, V. (2013). Speech through ears and eyes: Interfacing the senses with the supramodal brain. Frontiers in Psychology, 4, 1–17.

197 van Wassenhove, V., Grant, K. W., & Poeppel, D. (2007). Temporal window of integration in auditory‐visual speech perception. Neuropsychologia, 45(3), 598–607.

198 van Wassenhove, V., Grant, K. W., & Poeppel, D. (2005). Visual speech speeds up the neural processing of auditory speech. Proceedings of the National Academy of Sciences of the United States of America, 102(4), 1181–1186.

199 Venezia, J. H., Fillmore, P., Matchin, W., et al. (2016). Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech. NeuroImage, 126, 196–207.

200 Venezia, J. H., Thurman, S. M., Matchin, W., et al. (2016). Timing in audiovisual speech perception: A mini review and new psychophysical data. Attention, Perception & Psychophysics, 78(2), 583–601.

201 von Kriegstein, K., Dogan, O., Grüter, M., et al. (2008). Simulation of talking faces in the human brain improves auditory speech recognition. Proceedings of the National Academy of Sciences of the United States of America, 105(18), 6747–6752.

202 von Kriegstein, K., & Giraud, A. L. (2006). Implicit multisensory associations influence voice recognition. PLOS Biology, 4(10), 1809–1820.

203 von Kriegstein, K., Kleinschmidt, A., Sterzer, P., & Giraud, A. L. (2005). Interaction of face and voice areas during speaker recognition. Journal of Cognitive Neuroscience, 17(3), 367–376.

204 Watkins, S., Shams, L., Tanaka, S., et al. (2006). Sound alters activity in human V1 in association with illusory visual perception. NeuroImage, 31(3), 1247–1256.

205 Wayne, R. V., & Johnsrude, I. S. (2012). The role of visual speech information in supporting perceptual learning of degraded speech. Journal of Experimental Psychology: Applied, 18(4), 419–435.

206 Wilson, A., Alsius, A., Paré, M., & Munhall, K. (2016). Spatial frequency requirements and gaze strategy in visual‐only and audiovisual speech perception. Journal of Speech, Language, and Hearing Research, 59, 601–615.

207 Windmann, S. (2004). Effects of sentence context and expectation on the McGurk illusion. Journal of Memory and Language, 50(2), 212–230.

208 Windmann, S. (2007). Sentence context induces lexical bias in audiovisual speech perception. Review of Psychology, 14(2), 77–91.

209 Yakel, D. A., Rosenblum, L. D., & Fortier, M. A. (2000). Effects of talker variability on speechreading. Perception & Psychophysics, 62, 1405–1412.

210 Yamamoto, E., Nakamura, S., & Shikano, K. (1998). Lip movement synthesis from speech based on hidden Markov models. Speech Communication, 26(1–2), 105–115.

211 Yehia, H. C., Kuratate, T., & Vatikiotis‐Bateson, E. (2002). Linking facial animation, head motion, and speech acoustics. Journal of Phonetics, 30(3), 555–568.

212 Yehia, H., Rubin, P., & Vatikiotis‐Bateson, E. (1998). Quantitative association of vocal‐tract and facial behavior. Speech Communication, 26(1–2), 23–43.

213 Zheng, Y., & Samuel, A. G. (2019). How much do visual cues help listeners in perceiving accented speech? Applied Psycholinguistics, 40(1), 93–109.

214 Zilber, N., Ciuciu, P., Gramfort, A., et al. (2014). Supramodal processing optimizes visual perceptual learning and plasticity. NeuroImage, 93, 32–46.
