The Handbook of Speech Perception – Various Authors – Page 44

Anatomy and tonotopicity of the human primary auditory cortex


In humans the primary auditory cortex (PAC) is located around a distinctive fold in the cortical sheet known as Heschl’s gyrus (HG). A gyrus is a ridge, where the cortical sheet is folded outward, while a sulcus is an inward fold, or valley. Every brain contains more than one HG. First, all people have at least two, one in each cerebral hemisphere (the left and right halves of the visible brain), positioned along the superior aspect of each temporal lobe. In addition, some brains show a duplication of HG; that is, one or both hemispheres have two ridges instead of one (Da Costa et al., 2011). This anatomical detail can be useful for identifying PAC (also known as A1) in real brains (as we shall see in Figure 3.5). However, the gyri serve only as landmarks: what matters is the sheet of neurons in and around HG, not whether that area is folded once or twice. This sheet of neurons receives connections from the subcortical auditory pathways, most prominently via the medial geniculate body of the thalamus (see Figure 3.1 and the previous section).

When the cortex is smoothed, in silico, using computational image processing, the primary auditory cortex can be shown to display the same kind of tonotopic maps that we observed in the cochlea and in subcortical regions. This has been known for decades from invasive microelectrode recordings in laboratory animals, and it can be confirmed in humans noninvasively with MRI (magnetic resonance imaging) by playing subjects tones at different frequencies and then modeling each cortical location’s preferred response. This use of functional MRI (fMRI) yields the kind of tonotopic maps shown in Figure 3.5.
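The best-frequency mapping described above can be sketched in a few lines of code. The following is a minimal, hypothetical simulation, not the actual analysis pipeline used in tonotopy studies: it assumes log-Gaussian frequency tuning for each cortical location and assigns each location the tone frequency that evokes its largest response. All parameter values are illustrative choices.

```python
import numpy as np

# Illustrative sketch of best-frequency mapping, loosely analogous to
# fMRI tonotopy. Tuning widths, noise levels, and the stimulus set are
# hypothetical simulation choices, not values from the chapter.

rng = np.random.default_rng(0)

tone_freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])  # stimulus set
n_voxels = 100

# Simulate cortical locations with log-Gaussian frequency tuning,
# peaking at different points along the log-frequency axis.
true_best = rng.uniform(np.log(250), np.log(8000), n_voxels)
log_f = np.log(tone_freqs_hz)
responses = np.exp(-((log_f[None, :] - true_best[:, None]) ** 2) / (2 * 0.5 ** 2))
responses += 0.05 * rng.standard_normal(responses.shape)  # measurement noise

# Each location's best frequency is the tone evoking its largest response.
best_freq_hz = tone_freqs_hz[np.argmax(responses, axis=1)]

# Ordering locations by best frequency reveals the low-to-high gradient
# that appears as a tonotopic map on the cortical surface.
tonotopic_order = np.argsort(best_freq_hz)
```

Real tonotopy analyses fit richer response models to the fMRI time series, but the core idea is the same: summarize each cortical location by its preferred frequency and examine how that preference varies across the surface.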


Figure 3.5 Tonotopic map. HG = Heschl’s gyrus; STG = superior temporal gyrus; SG = supramarginal gyrus.

(Source: Adapted from Humphries, Liebenthal, & Binder, 2010.)

Figure 3.5 depicts a flattened view of the left‐hemisphere cortex, colored in dark gray. Superimposed onto the flattened cortex is a tonotopic map (grayscale corresponding to the color bar on the bottom right). Each point on the surface of the tonotopic map has a preferred stimulus frequency, in hertz, and along the dotted arrow across HG there is a gradient of responses corresponding to low frequencies, then high frequencies, then low frequencies again.

Given this tonotopic organization of the primary auditory cortex, which is in some respects not that different from the tonotopy seen in lower parts of the auditory system, we may expect the representation of sounds (including speech sounds) in this structure to be largely spectrogram‐like. That is, if we were to read out the firing‐rate distributions along the frequency axes of these areas while speech sounds are presented, the resulting neurogram of activity would exhibit dynamically shifting peaks and troughs that reflect the changing formant structure of the speech. That this is indeed the case was shown in animal experiments by Engineer et al. (2008), who, in one set of experiments, trained rats to discriminate a large set of consonant–vowel syllables and, in another, recorded neurograms for the same set of syllables from the primary auditory cortices of anesthetized rats using microelectrodes. They found, first, that rats can learn to discriminate most American English syllables easily, but are more likely to confuse syllables that humans also find similar and easy to confuse (e.g. ‘sha’ vs. ‘da’ is easy, but ‘sha’ vs. ‘cha’ is harder). Second, Engineer et al. found that the ease with which rats discriminate two speech syllables can be predicted from how different the primary auditory cortex neurograms for those syllables are.
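The prediction logic in the Engineer et al. comparison, that more distinct neurograms should mean easier discrimination, can be sketched with synthetic data. The neurograms and channel assignments below are fabricated for illustration; a real analysis would use firing rates recorded in response to actual syllables.

```python
import numpy as np

n_time, n_chan = 50, 32  # time bins x frequency channels
rng = np.random.default_rng(1)

def fake_neurogram(peak_channel):
    """Synthetic neurogram: a Gaussian bump of firing rate centered on
    one frequency channel, plus a little noise. Purely illustrative."""
    channels = np.arange(n_chan)
    profile = np.exp(-((channels - peak_channel) ** 2) / (2 * 2.0 ** 2))
    return profile[None, :] + 0.05 * rng.random((n_time, n_chan))

# Hypothetical assignment: 'sha' and 'cha' excite nearby frequency
# channels, while 'da' excites a distant one.
sha, cha, da = fake_neurogram(5), fake_neurogram(7), fake_neurogram(25)

def neurogram_dissimilarity(a, b):
    """Euclidean distance between two neurograms; under the chapter's
    logic, larger values predict easier behavioral discrimination."""
    return float(np.linalg.norm(a - b))

# Distinct neurograms ('sha' vs. 'da') lie farther apart than similar
# ones ('sha' vs. 'cha'), mirroring the behavioral confusion pattern.
assert neurogram_dissimilarity(sha, da) > neurogram_dissimilarity(sha, cha)
```

The distance measure here is a stand-in; the original study used its own similarity analysis of recorded neurograms, but the qualitative point, that neural pattern distance tracks behavioral discriminability, is the same.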

These data would suggest that the representation of speech in primary auditory cortex is still a relatively unsophisticated time–frequency representation of sound features, with very little in the way of recognition, categorization, or interpretation. Calling primary auditory cortex unsophisticated is, however, probably doing it an injustice. Other animal experiments indicate that neurons in the primary auditory cortex can, for example, change their frequency tuning quickly and substantially if a particular task requires attention to be directed to a particular frequency band (Edeline, Pham, & Weinberger, 1993; Fritz et al., 2003). Primary auditory cortex neurons can even become responsive to stimuli or events that aren’t auditory at all if these events are firmly associated with sound‐related tasks that an animal has learned to master (Brosch, Selezneva, & Scheich, 2005). Nevertheless, it is currently thought that the neural representations of sounds and events in the primary auditory cortex are probably based on detecting relatively simple acoustic features and are not specific to speech or vocalizations, given that the primary cortex does not seem to have any obvious preference for speech over nonspeech stimuli. In the human brain, to find the first indication of areas that appear to prefer speech to other, nonspeech sounds, we must move beyond the tonotopic maps of the primary auditory cortex (Belin et al., 2000; Scott et al., 2000).

In the following sections we will continue our journey through the auditory system into cortical regions that appear to make specialized contributions to speech processing, and which are situated in the temporal, parietal, and frontal lobes. We will also discuss how these regions communicate with each other in noisy contexts and during self‐generated speech, when information from the (pre)motor cortex influences speech perception, and look at representations of speech in time. Figure 3.6 introduces the regions and major connections to be discussed. In brief, we will consider the superior temporal gyrus (STG) and the premotor cortex (PMC), and then loop back to the STG to discuss how brain regions in the auditory system work together as part of a dynamic network.


Figure 3.6 A map of cortical areas involved in the auditory representation of speech. PAC = primary auditory cortex; STG = superior temporal gyrus; aSTG = anterior STG; pSTG = posterior STG; IFG = inferior frontal gyrus; PMC = pre‐motor cortex; SMC = sensorimotor cortex; IPL = inferior parietal lobule. Dashed lines indicate medial areas.

(Source: Adapted from Rauschecker & Scott, 2009.)
