Bridging the Gap with Sentence Reading
Orthographic processing provides the central interface between visual processing and higher-level syntactic and semantic processing, with a key role played by whole-word orthographic representations. In this final section, I consider the crucial interface between whole-word orthographic representations and the syntactic and semantic representations required for sentence comprehension. The central hypothesis here is that once a whole-word orthographic representation is activated (not necessarily identified), that word identity (i.e., a specific whole-word orthographic representation) is associated, not necessarily unambiguously, with a syntactic category (i.e., part of speech; Snell et al., 2017; see Declerck et al., 2020, for supporting evidence) and a meaning. Our understanding of this critical interface is limited, not least because word reading and sentence reading tend to be investigated within two independent lines of research (see Liversedge et al., this volume). Important evidence concerning the word-sentence interface has been obtained with three recently developed (or redeveloped) paradigms: a reading version of the classic Eriksen flankers task, the Rapid Parallel Visual Presentation (RPVP) of written words, and the grammatical decision task (the sentence-level equivalent of the lexical decision task). I focus on the first of these paradigms since it provides the closest link with orthographic processing and single word recognition.
Dare and Shillcock (2013) adapted the flankers task (Eriksen, 1995) to investigate orthographic processing and reading. In this type of task, participants typically respond to central target words that are flanked on the left and right by stimuli that are irrelevant to the task. Target and flankers are presented together for a brief duration (typically 150–170 ms), and the flanking stimuli are either related to targets on a given dimension or unrelated to them. Dare and Shillcock (2013) revealed effects of orthographic relatedness when the task was lexical decision (see also Grainger et al., 2014). For example, the target word ROCK is processed more quickly when flanked by its component letters, as in the sequence RO ROCK CK, than when flanked by unrelated letters (e.g., PA ROCK TH). Crucially, this effect does not depend on the location of the overlapping letters (e.g., CK ROCK RO), prompting Grainger et al. (2014) to propose that orthographic information spanning the target and flankers is processed in parallel and integrated into a single processing channel for word identification (see Figure 3.4). Since the bigram representations in this model are themselves unordered (i.e., a bag-of-bigrams), the effect of related flanking bigrams (RO, CK) does not depend on their location.
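To make the location independence of this flanker effect concrete, the short Python sketch below scores target-flanker overlap with an unordered bigram code. The function names, the restriction to contiguous bigrams, and the simple overlap score are simplifications introduced here for illustration; they are not the implementation of Grainger et al.'s (2014) model.

```python
# Sketch of why an unordered bigram code yields location-independent flanker effects.
# Contiguous bigrams and set overlap are simplifying assumptions made here for
# illustration; this is not the implementation of Grainger et al.'s (2014) model.

def contiguous_bigrams(letter_string):
    """Return the set of adjacent letter pairs in a string."""
    return {letter_string[i:i + 2] for i in range(len(letter_string) - 1)}

def flanker_overlap(target, left_flanker, right_flanker):
    """Count bigrams shared between the target and the pooled flanker bigrams.

    Pooling the flanker bigrams into one unordered set mimics the idea that
    orthographic information from target and flankers feeds a single channel,
    so the left/right position of a bigram is lost.
    """
    pooled = contiguous_bigrams(left_flanker) | contiguous_bigrams(right_flanker)
    return len(contiguous_bigrams(target) & pooled)

if __name__ == "__main__":
    print(flanker_overlap("ROCK", "RO", "CK"))  # related, canonical order -> 2
    print(flanker_overlap("ROCK", "CK", "RO"))  # related, reversed order  -> 2 (same)
    print(flanker_overlap("ROCK", "PA", "TH"))  # unrelated                -> 0
```

Because the pooled flanker bigrams form an unordered set, swapping the left and right flankers leaves the overlap score unchanged, mirroring the behavioral finding that CK ROCK RO is as effective as RO ROCK CK.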
Further work using word flankers has revealed that syntactic (Snell et al., 2017) and semantic (Snell, Declerck, et al., 2018) information can be processed in parallel in multiword displays. This converges with evidence for parallel word processing in both the Rapid Parallel Visual Presentation paradigm (Snell & Grainger, 2017) and the grammatical decision task (Mirault et al., 2018; Mirault & Grainger, 2020; see Snell & Grainger, 2019, for a summary of the evidence). Together, these findings have informed the development of a theoretical framework that both integrates word identification processes in an account of sentence-level processing and specifies how different word identities can be simultaneously mapped onto distinct spatiotopic locations during reading, that is, locations within a line of text defined independently of where the eyes are fixating (Snell et al., 2017; Snell, van Leipsig, et al., 2018).
At a more general level of theorizing, it is possible to draw an interesting parallel between the letter‐word processing interface, the focus of the present chapter, and the word‐sentence interface. The key words that summarize the kind of processing involved at both interfaces are cascaded and interactive, as implemented in the interactive‐activation model (McClelland & Rumelhart, 1981; see Carreiras, Armstrong, Perea, & Frost, 2014, for a review of the evidence from neuroimaging studies, and Heilbron et al., 2020, for MEG evidence for interactivity). One can add a third key word, parallel processing, to the extent that letters in words are processed in parallel, and I would argue that, to a certain extent at least, the same is true for words in sentences.
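To illustrate what cascaded and interactive mean in this context, the toy simulation below is written in the spirit of the interactive-activation model: partially activated letter units already feed word units (cascaded processing), and word units feed activation back to their constituent letters (interactivity). The two-word lexicon, parameter values, and update rule are invented for this illustration and are not those of McClelland and Rumelhart (1981).

```python
# Toy sketch of cascaded, interactive activation between letter and word units.
# The lexicon, parameters, and update rule are illustrative assumptions only;
# they are not the published interactive-activation model.

WORDS = ["ROCK", "LOCK"]                          # hypothetical two-word lexicon
LETTERS = sorted(set("".join(WORDS)))             # letter units: C, K, L, O, R

def clamp(x):
    """Keep activations in the range [0, 1]."""
    return max(0.0, min(1.0, x))

def simulate(visual_input, steps=6, excite=0.15, feedback=0.05, decay=0.1):
    """Run a few update cycles; letters drive words while words feed back to letters."""
    letter_act = {l: 0.0 for l in LETTERS}
    word_act = {w: 0.0 for w in WORDS}
    for _ in range(steps):
        # Cascaded, bottom-up flow: partially activated letters already excite words.
        new_word = {
            w: clamp(a - decay * a + excite * sum(letter_act[l] for l in w))
            for w, a in word_act.items()
        }
        # Interactive, top-down flow: active words reinforce their constituent letters.
        new_letter = {
            l: clamp(a - decay * a
                     + excite * visual_input.get(l, 0.0)
                     + feedback * sum(word_act[w] for w in WORDS if l in w))
            for l, a in letter_act.items()
        }
        letter_act, word_act = new_letter, new_word
    return letter_act, word_act

if __name__ == "__main__":
    # Degraded input: the letter K is only weakly visible.
    stimulus = {"R": 1.0, "O": 1.0, "C": 1.0, "K": 0.2}
    letters_fb, words_fb = simulate(stimulus)
    letters_nofb, _ = simulate(stimulus, feedback=0.0)
    print(words_fb)                               # ROCK ends up more active than LOCK
    print(letters_fb["K"], letters_nofb["K"])     # word-level feedback boosts the weak K
```

In this sketch, word activation starts to build before letter identification is complete (the cascade), and comparing the runs with and without feedback shows how word-level activity can strengthen a weakly supported letter (the interactivity).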