3.3.4 Word Recognition
In contrast to the identification of individual segments, the identification of words is more often observed to be at ceiling. Moreover, performance has improved over time, with ceiling‐level performance reached more often in recent studies than in older ones. This was noted by Blamey et al. (2013), who compared results from their retrospective, multi‐center study of 2,251 postlingually deafened adult CI recipients with results from their study of a similar cohort 15 years earlier (Blamey et al., 1996). They attributed the overall improvement to better device characteristics, advances in speech processing strategies, and a clinical focus on preserving residual hearing after implantation. They also noted that candidacy criteria had become less restrictive over the intervening years. However, despite improved results and more instances of ceiling performance, there is still considerable variation in word recognition across cohorts of CI listeners. For instance, Holden et al. (2013) tested 114 postlingually deafened adults on the identification of CNC (Consonant‐Nucleus‐Consonant) words longitudinally over 2 years. Outcomes at 2 weeks after CI activation (CNC Initial) were compared with the asymptotic score to which performance converged over 2 years (CNC Final). CNC Initial ranged from 0 to 73.6% and CNC Final from 2.9 to 89.3%. Correlational analyses showed that, among a large number of biographical, audiological, cognitive, and device‐related variables, those most strongly related to this variation were duration of hearing loss, CI sound‐field threshold levels, the percentage of electrodes in the scala vestibuli, age at implantation, and cognitive functioning, such as working memory.
Moberly, Lowenstein, and Nittrouer (2016) administered a variety of tasks, including “perceptual sensitivity” (labeling “cop” vs. “cob” and “sa” vs. “sha” based on durational or spectral cues), “perceptual attention” (discriminating quasi‐syllabic sinusoidal glides based on duration or “spectral” cues), and word recognition (CID [Central Institute for the Deaf]‐22 word lists; Hirsh et al., 1952), to 30 postlingually deafened adult CI recipients and 20 NH controls. The two groups showed similar perceptual sensitivity and attention for duration cues, but the CI group showed lower sensitivity to spectral cues. Word recognition in the CI group ranged from 20 to 96% correct, with a mean accuracy of 66.5%, whereas the task posed little perceptual challenge to the NH group, whose mean accuracy was 97.1%. These word recognition scores were predicted by spectral cue sensitivity and attention, suggesting that speech perception deficits at the phonetic, that is, sub‐segmental, level carry over to the word level.
In a gated word recognition study, Patro and Mendel (2018) found that CI users needed on average around 35% more speech information to recognize words than NH controls, and that NH participants listening to vocoded speech needed approximately 25% more information. The fact that both the CI users and the vocoder group performed relatively poorly suggests that CI users’ disadvantage is, at least in part, due to spectrotemporal signal degradation caused by the electrical–neuronal perceptual bottleneck, and not merely to extra‐auditory factors such as demographic group characteristics. When contextual information was provided by inserting the target words in either semantically relevant (e.g., “Paul took a bath in the TUB”) or semantically neutral (e.g., “Smith knows about the TUB”) sentences, words were recognized more easily (cf. Holt, Yuen, & Demuth, 2017). Moreover, CI users benefited more from this top‐down information than the controls did. This shows that signal degradation affects word recognition and that CI users often rely more heavily on contextual cues.
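The “vocoded speech” condition in such studies refers to an acoustic simulation of CI processing: the speech signal is split into a small number of frequency bands, the slowly varying amplitude envelope is extracted from each band, and those envelopes are used to modulate band‐limited noise before the bands are recombined. The sketch below illustrates this general noise‐band vocoding idea in Python; it is not the stimulus‐generation procedure used by Patro and Mendel (2018), and the channel count, filter design, and cutoff values are illustrative assumptions only.

```python
"""Minimal noise-band vocoder sketch (illustrative assumptions, not any study's actual code)."""
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0, env_cutoff=50.0):
    """Return a noise-vocoded copy of `signal` (sampled at `fs` Hz, fs > 2*hi)."""
    signal = np.asarray(signal, dtype=float)
    # Channel edges spaced logarithmically between lo and hi
    # (an assumption; studies often use Greenwood or ERB spacing instead).
    edges = np.geomspace(lo, hi, n_channels + 1)
    carrier = np.random.randn(len(signal))            # broadband noise carrier
    out = np.zeros_like(signal)

    for k in range(n_channels):
        sos = butter(4, [edges[k], edges[k + 1]], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)               # analysis band of the speech
        env = np.abs(hilbert(band))                   # Hilbert amplitude envelope
        sos_env = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
        env = sosfiltfilt(sos_env, env)               # smooth the envelope
        noise_band = sosfiltfilt(sos, carrier)        # noise carrier limited to the same band
        out += env * noise_band                       # re-impose the envelope on the noise

    # Match the overall RMS level of the original signal.
    out *= np.sqrt(np.mean(signal ** 2) / (np.mean(out ** 2) + 1e-12))
    return out

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 1.0, 1.0 / fs)
    demo = np.sin(2 * np.pi * 440 * t)                # stand-in for a recorded word
    vocoded = noise_vocode(demo, fs)
```

Because the output preserves only coarse spectral shape and slow temporal envelopes, reducing the number of channels degrades the signal in a way that is broadly comparable to the spectrotemporal bottleneck CI users face, which is why such simulations are used with NH listeners as a point of comparison.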
A number of conclusions can be drawn from the studies summarized above. Firstly, CI users can attain a high level of word recognition. Secondly, there is considerable individual variation in these recognition abilities, for which a wide range of factors may be responsible. Thirdly, there is evidence that lower‐level problems stemming from signal degradation are partly responsible for higher‐level problems in speech perception. Finally, top‐down information supports speech perception, improving CI listeners’ understanding of speech by allowing partial compensation for the spectral degradation that use of their implant necessarily entails.