Tests Under the Speech Act Construct

The first comprehensive test development project for L2 pragmatics was Hudson, Detmer, and Brown's (1992, 1995) test battery. They focused on sociopragmatic appropriateness for the speech acts request, apology, and refusal by Japanese learners of English, and designed their instruments around binary settings of the context variables power, social distance, and imposition (Brown & Levinson, 1987). Hudson et al. (1992, 1995) compared several different assessment instruments but, like many studies in interlanguage pragmatics (Kasper, 2006), relied heavily on discourse completion tests (DCTs). A DCT minimally consists of a situation description (prompt) and a gap for test takers to write what they would say in that situation. Optionally, an opening utterance by an imaginary interlocutor can precede the gap, and a rejoinder can follow it. Figure 1 shows a DCT item intended to elicit a request.

Hudson et al.'s (1995) instrument included traditional written DCTs; spoken DCTs, in which the task input was written but test takers spoke their responses; multiple‐choice DCTs; role plays; and two types of self‐assessment questionnaires. Test taker performance was rated on a five‐step scale for use of the correct speech act, formulaic expressions, amount of speech used and information given, formality, directness, and politeness. This pioneering study led to several spin‐offs. Yamashita (1996) adapted the test for native‐English‐speaking learners of Japanese, Yoshitake (1997) used it in its original form, and Brown and Ahn (2011) report on an adaptation for Korean as a target language. In a review, Brown (2001, 2008) found good reliability for the role plays, as well as the oral and written DCTs and self‐assessments, but the reliability of the multiple‐choice DCT was low. This was disappointing, as the multiple‐choice DCT was the only instrument in the battery that did not require raters, which made it the most practical of all the components. In subsequent work, Liu (2006) developed a multiple‐choice DCT for first language (L1) Chinese‐speaking learners of English and reported high reliabilities. Tada (2005) used video prompts to support oral and multiple‐choice DCTs and obtained reliabilities in the mid .7 range.

A more recent sociopragmatically oriented test battery was developed by Roever, Fraser, and Elder (2014). While the focus of this battery was also on measuring test takers' perception and production of appropriate language use, it was delivered through an online system and designed to be completed in less than an hour. The battery included metapragmatic judgments of short conversations, speech acts, and responses in short dialogs, as well as productive correction of inappropriate dialog responses and completion of DCTs with multiple gaps. Figure 2 shows an example of a metapragmatic judgment item for the speech act of refusal. Roever et al. (2014) found an acceptable reliability of .8 but noted that most of their tasks were easy for their test taker sample, which was limited to participants at intermediate level and above to avoid strong proficiency effects.


Figure 1 DCT item


Figure 2 Metapragmatic judgment item

While speech acts have been the feature of central interest in the assessment of L2 pragmatics in this tradition, not all work has focused exclusively on them. Bouton (1988, 1994, 1999) did pioneering work in the assessment of implicature, that is, how speakers convey additional meaning beyond the literal meaning of the words uttered. He distinguished two types of implicature, idiosyncratic and formulaic: the former encompasses conversational implicature (Grice, 1975), whereas the latter includes specific types of implicature such as indirect criticism, variations on the Pope Q (“Is the pope Catholic?”), and irony. Using the implicature test he developed, Bouton found that idiosyncratic implicature is fairly easy to learn on one's own but difficult to teach in the classroom, whereas the reverse is the case for formulaic implicature. Taguchi (2005, 2007, 2008a, 2008b) employed a similar instrument and took a psycholinguistic perspective on implicature, investigating learners' correct interpretation in conjunction with their processing speed. Taguchi, Li, and Liu (2013) developed an implicature test for Mandarin as a target language.

A small number of studies have combined assessment of different aspects of pragmatic competence. Roever (2005, 2006) developed a Web‐based test of implicature, routine formulas, and speech acts, and validated it using Messick's (1989) validation approach. Unlike Hudson et al.'s (1995) test, Roever's (2006) instrument focused on pragmalinguistic rather than sociopragmatic knowledge. Figure 3 shows an implicature item from Roever's (2005) test, and Figure 4 shows a routines item.

Roever's test was Web‐delivered, and he obtained an overall reliability of .91. It covered the construct of L2 pragmatic knowledge in considerable breadth and had a high degree of practicality due to its Web‐based delivery. However, it was clearly set in the speech act tradition and did not assess discursive abilities.

Itomitsu (2009) developed a pragmalinguistically focused instrument for Japanese as a target language. Using Web‐delivered multiple‐choice tasks, he assessed learners' knowledge of routine formulas and speech styles, and their understanding of the illocutionary force of speech acts. The test also included a grammar section. Itomitsu attained a high overall reliability similar to Roever's (2005) with a test that is arguably more practical, as it requires neither written responses nor rater scoring.


Figure 3 Implicature item from Roever (2001)


Figure 4 Routines item from Roever (2001)

The instruments discussed above represent a significant step in testing pragmatics by demonstrating that aspects of learners' pragmatic ability can be assessed practically and with satisfactory reliability. However, the speech act framework underlying these tests has come under severe criticism (Kasper, 2006): it relied strongly on the discourse‐external context factors identified by Brown and Levinson (1987), atomized speech acts rather than considering them in their discursive context, and depended on DCTs, which have been shown to be highly problematic (Golato, 2003). This has led to the emergence of tests taking an interactional view.
