Comprehension
If a gap‐filling task can be text based, can it really be considered a measure of vocabulary, rather than, say, reading comprehension ability? It may no longer be seen as a pure vocabulary test, but the underlying issue here is whether vocabulary acquisition is an end in itself or a means for the learner to use the second language more effectively for a variety of communicative purposes. At this point we move on to the second way of distinguishing receptive and productive vocabulary knowledge, namely comprehension versus use. The recognition and recall formats we have just discussed are intended to monitor learners' developing vocabulary knowledge, but they provide at best quite indirect evidence of whether learners can access their knowledge of words and exploit them effectively in performing real language‐use tasks.
In the case of comprehension, we are interested in the learners' ability to deal with vocabulary in texts which have the characteristics of natural spoken and written discourse. Depending on the learners' goals, the texts may represent conversations, lectures, train station announcements, or video clips; and newspapers, novels, college textbooks, or e‐mails. Comprehension tasks oblige the test takers to process the vocabulary in real time, which means they need both automatic recognition of high‐frequency words and the ability to process the input in chunks rather than word by word. This constraint is more obvious with respect to listening tasks, given the fleeting and ephemeral nature of speech, but it also applies to reading if the learners are to achieve adequate comprehension of the overall text. The test takers also need to understand lexical items in a rich discourse context, rather than as independent semantic units.
One important step in selecting texts for comprehension assessment is to evaluate the suitability of the vocabulary content for the learners' level of proficiency in the language, since it is unreasonable to expect them to understand a text containing a substantial number of unknown lexical items. Traditionally, this step has been assisted by applying a standard readability formula, such as the Flesch Reading Ease score or the Flesch–Kincaid Grade Level score (both available in Microsoft Word), which estimate difficulty from average sentence length and word length. Another approach, which takes word frequency as its core component, is to submit the text to the VocabProfile section of the Compleat Lexical Tutor (www.lextutor.ca), which offers both color coding and frequency statistics to distinguish common words from those that occur less frequently. It should be noted that both these approaches are word based, so they may underestimate the lexical difficulty of texts containing idiomatic or colloquial expressions.
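As a rough, purely illustrative sketch of these two kinds of check (not part of either tool mentioned above), the following Python code computes the two Flesch formulas, which depend only on average sentence length and an estimate of syllables per word, together with a simple VocabProfile-style coverage figure against a frequency-ranked word list. The naive syllable counter and the frequency list referred to at the end are assumptions made for illustration.

```python
import re

def syllable_estimate(word):
    """Very rough syllable count: runs of vowels, minus a silent final 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_estimate(w) for w in words)
    wps = len(words) / max(len(sentences), 1)   # average words per sentence
    spw = syllables / max(len(words), 1)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

def coverage(text, frequency_ranked_words, band=2000):
    """Proportion of tokens that fall within the first `band` words of a
    frequency-ranked list (a VocabProfile-style coverage figure)."""
    common = set(w.lower() for w in frequency_ranked_words[:band])
    tokens = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    hits = sum(1 for t in tokens if t in common)
    return hits / max(len(tokens), 1)

if __name__ == "__main__":
    sample = "The train now standing at platform two is the delayed service to Leeds."
    ease, grade = flesch_scores(sample)
    print(f"Reading Ease: {ease:.1f}, Grade Level: {grade:.1f}")
    # A frequency-ranked word list would normally be loaded from a file,
    # one word per line in descending frequency order (hypothetical path):
    # ranked = open("freq_list.txt").read().split()
    # print(f"Coverage within first 2,000 words: {coverage(sample, ranked):.0%}")
```

In practice, of course, a tool such as VocabProfile draws on carefully compiled frequency lists rather than a homemade ranking, so a count of this kind is only a first approximation and, like the readability formulas themselves, it treats each word in isolation.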
Vocabulary assessment for comprehension purposes is embedded, in the sense that it engages with a larger construct than just vocabulary knowledge or ability. In practical terms, this means that the vocabulary items form a subset of the items in a reading or listening test. In addition, the focus of the items shifts from simply eliciting evidence of the ability to recognize or recall word meanings to assessing the understanding of words in context, through reading items like these:
The word “inherent” in line 17 means …
Find a phrase in paragraph 3 that means the same as “analyzing.”
Items may also assess lexical‐inferencing ability by targeting vocabulary items that the test takers are unlikely to know, but whose meaning can reasonably be inferred from clues available in the surrounding text.
By its nature, reading assessment offers more scope for items that focus on individual lexical units in the text than listening assessment, because a written text remains available for review in a way that a spoken text does not. In fact, it may be counterproductive to encourage learners to concentrate on vocabulary in a listening task, as Chang and Read (2006) found in their investigation of various forms of support for EFL learners taking a listening test. Pre‐teaching of key lexical items proved to be the least effective form of support, apparently because it drew the test takers' attention away from the propositional content of the text. On the other hand, if spoken or written texts represent a particular discipline, register, or genre, comprehension test items will provide at least indirect evidence of the ability to handle vocabulary appropriate for that type of text (see Read, 2007, for further discussion of this point), even if none of the test items focus explicitly on vocabulary.