Tasks for Assessing Listening

Decisions about the characteristics of the desired listening assessment tasks should be based on the purposes of the test, the test takers' personal characteristics, and the construct that the test is designed to measure (Bachman & Palmer, 2010). Buck (2001) provided the following guidelines concerning listening tasks, which may be applicable to most listening test contexts: (a) listening test input should include typical realistic spoken language, commonly used grammatical knowledge, and some long texts; (b) some questions should require understanding of inferred meaning (as well as global understanding and comprehension of specific details), and all questions should assess linguistic knowledge rather than general cognitive abilities; and (c) test takers' background knowledge of the content to be comprehended should be similar. Assessment should also target the message conveyed by the input, rather than the exact vocabulary or grammar used to transmit it, along with various types of interaction and levels of formality.

In practice, listening assessment tasks require learners to listen to input and then provide evidence of comprehension by responding to questions about the information conveyed in the input. The most common types of comprehension questions are selected response items, including multiple‐choice, true/false, and matching. For these item types, test takers are required to select the most appropriate answer from options which are provided. These options may be based on words, phrases, objects, pictures, or other realia. Selected response items are popular, in part, because they can be scored quickly and objectively. An important question to answer when designing selected response item types is whether to provide test takers with the questions and possible responses prior to the input, especially since including them has been shown to favor more proficient test takers (Wu, 1998), and certain item types are affected differentially by the inclusion of item stems or answer options, or both (Koyama, Sun, & Ockey, 2016).
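
Because scoring a selected response item reduces to comparing the chosen option against a predetermined key, it is easily automated. The short Python sketch below illustrates this; the item content, class name, and field names are invented for illustration rather than drawn from any operational test.

    # A minimal sketch of objective scoring for a selected response
    # (multiple-choice) listening item. All content here is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class MultipleChoiceItem:
        stem: str            # the comprehension question
        options: list[str]   # answer choices shown to the test taker
        key: int             # index of the correct option

    def score_item(item: MultipleChoiceItem, selected: int) -> int:
        # Scoring is a single comparison against the key, which is why
        # selected response items can be scored quickly and objectively.
        return 1 if selected == item.key else 0

    item = MultipleChoiceItem(
        stem="Where does the speaker suggest meeting?",
        options=["At the library", "At the cafe", "At the station"],
        key=1,
    )
    print(score_item(item, selected=1))  # 1 (correct)
    print(score_item(item, selected=2))  # 0 (incorrect)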

Constructed response item types, which require test takers to create their own response to a comprehension question, are also commonly used and have become increasingly popular. These item types require short or long answers, and include summaries and completion of organizational charts, graphs, or figures. One item type that has received increasing attention is the integrated listen–speak item, in which test takers listen to an oral input and then summarize or discuss the content of what they have heard (Ockey & Wagner, 2018). Constructed response item types have been shown to be more difficult for test takers than selected response item types (In'nami & Koizumi, 2009) and may therefore be more appropriate for more proficient learners. Most test developers and users have avoided constructed response item types because scoring can be less reliable and can require more resources. Recent developments in computer technology, however, have made the scoring of productive item types increasingly reliable and practical (Carr, 2014), which may lead to their wider use.
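
One deliberately simple way such automated scoring might work is keyword overlap against a rubric, sketched below. This is an invented illustration, not the approach described by Carr (2014); operational systems rely on considerably more sophisticated natural language processing, and the rubric and response shown are hypothetical.

    # A minimal sketch of rubric-based keyword scoring for a short
    # constructed response. Rubric and response are hypothetical.
    import string

    def keyword_score(response: str, rubric_keywords: set[str]) -> float:
        # Strip punctuation, lowercase, and credit the proportion of
        # rubric keywords that appear in the response.
        cleaned = response.lower().translate(
            str.maketrans("", "", string.punctuation))
        tokens = set(cleaned.split())
        return len(rubric_keywords & tokens) / len(rubric_keywords)

    rubric = {"lecture", "recycling", "cost", "increase"}
    answer = "The lecture said recycling is expensive."
    print(keyword_score(answer, rubric))  # 0.5 (partial credit)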

Another listening task used in tests today is sentence repetition, which requires test takers to orally repeat what they hear, or the analogous written task of dictation, which requires test takers to write what they hear. As with constructed response items, computer technology has made the scoring of sentence repetition and dictation objective and practical. Translation tasks, which require test takers to translate what they hear in the target language into their first language, are also popular for assessing listening, especially when everyone who is assessed has the same first language.
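
A common basis for objective scoring of dictation and repetition is word-level edit distance, the quantity underlying word error rate. The sketch below assumes the response has already been transcribed and normalized; the reference sentence and response are invented for illustration.

    # A minimal sketch of dictation scoring via word-level edit distance
    # (Levenshtein distance over words), computed by dynamic programming.

    def word_edit_distance(ref: list[str], hyp: list[str]) -> int:
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution
        return d[len(ref)][len(hyp)]

    reference = "the train leaves at seven".split()
    response = "the train leave at seven".split()
    errors = word_edit_distance(reference, response)
    print(1 - errors / len(reference))  # proportion correct: 0.8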
