Task Features

Weigle (2002, p. 63) provides a taxonomy of task dimensions for writing assessment. These can be divided into features of the writing task itself (what test takers actually respond to) and features of the test as a whole, which include administrative and logistical considerations. Important features of the task itself include subject matter, discourse mode, and stimulus material, each discussed briefly below.

 Subject matter. Research on the effects of subject matter is limited, in part because it is difficult to separate subject matter from discourse mode or other topic variables (Hamp‐Lyons, 1990). However, two broad distinctions can be made with regard to subject matter. First, some topics are essentially personal (e.g., descriptions of self or family, discussion of personal likes and dislikes) while others are nonpersonal (e.g., argument essays about controversial social issues). Research and experience suggest that nonpersonal topics may be somewhat easier to score reliably; however, personal topics may be more accessible to all test takers and tend to elicit a wider range of responses. Within nonpersonal topics, and specifically in assessing writing for academic purposes, a further distinction can be made between topics that are more general and those that are discipline specific. Here some research suggests, not surprisingly, that students may perform better on topics related to their own disciplines than on more general topics (Tedick, 1990).

 Discourse mode. Discourse mode refers to the type of writing that candidates are expected to produce. The term subsumes a cluster of task features such as genre (essay, letter, etc.), rhetorical task (e.g., narration, description, exposition), pattern of exposition (e.g., comparison/contrast, process), and cognitive demands (Huot, 1990). Research on the effects of specific features on writing test performance suggests that these factors may indeed influence performance in systematic ways; however, it is difficult to isolate individual factors or to separate the effects of, say, genre from those of cognitive demands. For test developers, perhaps the most important advice is to consider the authenticity of the task for the intended test takers and, where test takers are offered a choice of tasks or alternate forms of the test are used, to keep these discourse variables as parallel as possible.

 Stimulus material. While many traditional writing assessment tasks consist merely of a topic and instructions, it is also common to base writing tasks on stimulus material such as pictures, graphs, or other texts. Hughes (2003) recommends basing writing tasks on visual material (e.g., pictures) to ensure that writing ability, rather than content knowledge, is being assessed. At the other end of the spectrum, many academic writing tests use a reading passage or other text as stimulus material for the sake of authenticity, since academic writing is nearly always based on some kind of input text. Considerations for choosing an appropriate input text can be found in Weigle (2002) and Shaw and Weir (2007).

In addition to factors involving the task itself, several more logistical or administrative factors need to be addressed when designing a writing test; these include time allotment, instructions, whether to allow examinees a choice of tasks or topics, and whether to allow dictionaries. For a summary of research related to these issues, see Weigle (2002, chap. 5). One issue that has gained prominence over the past two decades is whether candidates should write responses by hand or on a computer; clearly, the use of computers is far more prevalent than it was even 10 years ago, and several large‐scale tests now require responses to be entered on a computer. Pennington (2003) reviewed the literature on handwriting versus word processing; briefly, this literature suggests that, for students with proficient keyboarding skills, composing on a computer leads to higher‐quality writing and more substantial revision. On the other hand, some studies suggest that raters tend to score handwritten essays higher than typed ones, even when the two are otherwise identical (e.g., Powers, Fowles, Farnum, & Ramsey, 1994).
