Chapter 2 Norms: A cross-disciplinary concern
As already discussed, the current study may be circumscribed to three main disciplines, namely Television Studies, Linguistics, and Translation Studies. This chapter is concerned with a notion that lies across these three disciplines and is key to the contents of this volume: norms. A look at the definition of norm in a dictionary is enough to gather information about the ambivalence surrounding this concept and the ideas typically attributed to it. The Oxford English Dictionary defines the word as being used to describe something usual, standard, expected, following a pattern, while the Merriam-Webster alerts the reader that norm is associated with proper behaviour, thus suggesting the prescriptive value of the term.
The twofold nature of norms, that is, their prescriptive and descriptive nature, was already present when the term was coined in the field of Sociology in the 1930s. Since then, the term has been adopted by several other fields, among them the three disciplines that converge in the present book. It is the descriptive sense of the term norm that underpins the present book. Its goal is to observe tendencies and patterns of behaviour in TV productions and their translations.
2.1. Norms in Film and Television Studies
The corpus compilation criteria for this book rest on the assumption that audiovisual texts may be classified as belonging to different genres. The notion of genre dates back to classical rhetoric. Already in the fourth century BC, Aristotle defined it in terms of convention and purpose in his Poetics and proposed that each genre has a specific style or mode of expression, that is, epic (narrative), tragic (drama) and lyric (poetry). The term has been widely used in many areas of specialisation, crucially in Linguistics and Literary Theory, though in more recent years it has been regarded by some practitioners as too global and fuzzy a concept to be of much use in detailed formal and functional analysis.
Film and Television Studies is a far younger discipline than Literary Studies, which has meant that theorists in the former have drawn on the advances made in the latter in an attempt to account for genres in the televisual field. Feuer (1992) considers television genre a means of classification of plot types, envisaged to reveal similarities and differences among textual types. Her definition closely resembles theoretical proposals from the field of Literature, such as Wellek and Warren’s (1963/1949), Frye’s (2002/1952), Todorov’s (1976) or Ryan’s (1988/1979). Neale and Turner (2001: 1) are clear about the overlap of literary and television studies when it comes to the study of genre, paying special attention to the concept of norm:
‘Genre’ is a French word meaning ‘type’ or ‘kind’. As such, it has played an important role in the study of literature, theatre, film, television and other art and media forms. It has long been recognised that output in each of these fields can be grouped into categories, and that each category or class is marked by a particular set of conventions, features and norms.
As noted by these scholars, ‘[g]enre is the product of a text- and audience-based negotiation activated by the viewer’s expectations’ (ibid.: 7). This idea of genre as key contextual information for audiences is also pointed out by Bednarek (2010: 120), who states that ‘how viewers/practitioners describe/view characters may depend on the genre of the particular fictional television narrative’. It is important to note, however, that viewers’ expectations may not be conscious.
Once recognition has been acknowledged as indispensable to the identification of genres, the question arises as to whether this feature also fulfils other functions in the production and reception of TV products. The first function that scholars have deemed central to it is sense attribution (Wolf 1985: 139). The link between recognition and sense attribution is also present in McKee (1997: 71), who states that ‘the first step toward a well-told story is to create a small, knowable world’. From the field of TS, Chaume (2003) further highlights the usefulness of genre for sense attribution, considering the identification of the intentionality behind a text as a necessary preliminary task for translation professionals.
Tous-Rovirosa (2010: 229) attributes a second function to the recognition of television products: it does not only help to understand a text, but is also a source of pleasure for audiences. The notion of enjoyment caused by revisiting a work of art has been pointed out by scholars in other areas, such as Calvino (1995) in his collection of essays for the reading and (re-reading) of classic literary pieces.
From the point of view of the screenwriter, McKee (1997: 91) assigns a third function to recognition: its creative potential. The author explores the idea of creative limitations and states that in screenwriting,
genre conventions […] do not inhibit creativity, they inspire it. The challenge is to keep convention but avoid cliché. […] With mastery of the genre we can guide audiences through rich, creative variations on convention to reshape and exceed expectations by giving the audience not only what it had hoped for but, if we’re very good, more than it could have imagined.
This opinion seems to be shared by other professionals in screenwriting, like Wolff and Cox (1988: 241), who invite trainees to write stories offering something that is just ‘a bit new’, the implication being that it is good (or normal) to stick to genre conventions.
In addition to these three functions of genre and recognition, which foreground the conceptualisation of genre as a group of texts sharing formal features, there are further conditionings that affect the language used in TV series, mainly imposed by broadcasting companies, as discussed in subsequent chapters. Recent contributions such as Brown’s (2013: 3) point to commercial dominance as one of the main factors dominating what constitutes TV genre:
[I]t is not altogether clear whether the ‘genre’ should be defined chiefly in formal, commercial or industrial terms. Whilst we should not rule out the possibility of a more traditionally text-based formulation that considers recurring narrative and structural patterns or ideological overtones, such a project would be a major undertaking.
With regard to the sharing of formal features, hybridity of genres and programming formats has come to be a common practice in the TV industry today (Allen 1989). The distinction between both concepts, that is, genre and format, is pointedly explained by Neale and Turner (2001: 7) as follows:
Formats can be original and thus copyright, franchised under licence, and traded as a commercial property. Genres, by definition, are not original. Format is a production category with relatively rigid boundaries that are difficult to transgress without coming up with a new format. […] Genre is the larger, more inclusive category and can be used to describe programmes that use a number of related formats, such as the game show.
Hybridisation of genres in different TV formats has paved the way for scholars to speak of ‘generic marks that guarantee the recognition of the different genres by the spectator in a single TV product’ (Tous-Rovirosa 2010: 59, my translation). Such generic marks basically include specific visual elements or settings, as well as thematic recurrence. As an example of a visual topos, Tous-Rovirosa (2013: 21) mentions the blackboard in TV series on which a talented character, the nerd, displays their ideas for a police team (i.e. for the audience) to follow more easily. Chapter 4 reports on other recurrent visual topoi used in the series that make up the corpus under scrutiny.
The present study raises the question of whether specific lexical or syntactic structures may constitute generic marks (norms?) in audiovisual products. This idea has been previously explored in the case of written specialised texts, where Gläser (1979), Göpferich (1995), and Wilss (1997) have observed recurrent text blocks. Tannen (1982: 51) also speaks of ‘ritual texts’ that display recurrent resources depending on whether it is written or spoken language. Bednarek (2010: 66–67), in turn, argues that there are identifiable feature conventions in TV dialogue, as well as ‘genre-specific vocabulary and discourse’, and goes on to state that:
[I]t is further likely that different genres of fictional television have linguistic differences, with more witty, fast-paced dialogue in sitcom and other comedy genres, and more institutional discourse, technical language, criminal cant, jargon or slang in crime series […], medical dramas […] or sci-fi series.
The notion is somewhat present in Chaume’s (2003: 201) suggestion that TV series hold ‘predictable elements’ that appear in an expectable order, as well as in Baños’s (2014: 81) allusion to the ‘specificities of the audiovisual medium’. Arias-Badia and Brumme (2014) provide a first approach to recurrent linguistic patterns in TV police procedurals. As shown in Chapter 4, the fact that fictional characters play with genre conventions is evidence that the language of these series is conventionalised and based on a long genre tradition.
2.2. Norms in Linguistics
Linguistic research has long been concerned with the identification and description of systematic, recurring patterns of language use, based on the idea that much language use is routine (Stubbs 1993; Hanks 1996). The recurrence of these patterns matches Coseriu’s (1952) description of norm as the objectively established principles followed by speakers of a language, rather than rules imposed by subjective assessment criteria.
Thus, the term norm is understood as descriptive in this framework. As Curzan (2014: 18) points out, ‘[d]escriptive “rules” describe regularities in a language variety’s structure that are developed through analysis of what speakers do; they are sometimes invariant but not always’. Interestingly for the purposes of this study, both norms and genre were variables already under consideration in the classical approach to communicative events referred to by the acronym SPEAKING (situation, participants, ends, acts, key, instrumentalities, norms, and genre), proposed by Hymes (1989/1972).
Particularly since the 1960s, when the Brown Corpus was compiled, corpora have been a major tool employed by linguists for the exploration of patterns or norms. As defined by Sinclair (1991: 171), corpora are ‘collection[s] of naturally occurring language text, chosen to characterise a state or variety of language’. Laviosa (2003: 105) elaborates on previous definitions by adding that, in corpora, the texts are ‘stored on a computer, sometimes analysed automatically or semi-automatically’. The digitalisation of texts makes it easier for researchers to share their material and gain access to larger amounts of data, thus fostering the quantitative significance of the results: the more occurrences of a norm are observed, the more likely that norm is to be representative of the language explored.
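To make the idea of frequency-based representativeness concrete, the following minimal Python sketch counts the occurrences of a candidate pattern across a toy digitised corpus. The texts and the pattern are invented for illustration; no real corpus tool or dataset from this study is implied.

```python
import re

# Toy corpus: each entry stands in for a stored, digitised text.
corpus = [
    "The suspect was taken into custody. The detective read him his rights.",
    "Officers took the suspect into custody after a brief chase.",
    "She was released from custody pending trial.",
]

def pattern_frequency(texts, pattern):
    """Count occurrences of a regex pattern in each text and in total."""
    per_text = [len(re.findall(pattern, t, flags=re.IGNORECASE)) for t in texts]
    return per_text, sum(per_text)

per_text, total = pattern_frequency(corpus, r"\binto custody\b")
print(per_text, total)  # [1, 1, 0] 2
```

The more texts in which the pattern recurs, the stronger the case that it is a norm of the variety sampled rather than an idiosyncrasy of a single text.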
Traditionally, working in Corpus Linguistics (CL) has meant resorting to already existing large corpora of both written and spoken language. However, it may be argued that CL has evolved into a methodological tool or research methodology for many areas of knowledge within Linguistics, and even for other disciplines (Olohan 2004; Walsh et al. 2011; Laviosa 2013). In this paradigm, researchers either use the large corpora available or compile ad hoc–designed, often smaller, corpora that better fit their specific research purposes and interests. In this respect, the selection of texts in accordance with research criteria is an essential feature that distinguishes corpora from ‘more random collections of texts held in text archives’ (Barnbrook 1996: 23).
Within the field of Linguistics, a paradigmatic shift in relation to corpora perception took place in the 2000s, when some researchers started to argue for the need to undertake corpus-driven studies to the detriment of corpus-based studies, which had been the norm until then. In this spirit, Tognini-Bonelli (2001: 84–85) claims that:
Corpus-driven linguistics rejects the characterisation of corpus linguistics as a method and claims instead that the corpus itself should be the sole source of our hypotheses about language. It is thus claimed that the corpus itself embodies a theory of language.
Corpas-Pastor (2008: 53, my translation) offers a table that succinctly compares the main differences between both approaches (Table 1).
| Corpus-based approaches | Corpus-driven approaches |
| --- | --- |
| A priori theoretical assumptions | A posteriori theoretical assumptions |
| Inductive method | Deductive method |
| Intuition and introspection | Statistical methods |
| Lemmatised, annotated corpora | Non-codified, large and representative corpora |
| Grammar patterns | Lexicogrammatical and phraseological patterns |
Corpus-driven studies confer more power on the data, the text at hand, than corpus-based studies do. The latter use corpora to answer a preconceived research question (e.g. ‘How often does nominalisation occur in this text?’), while in corpus-driven studies the emphasis is placed on the text rather than on the research questions; that is, it is the text that leads the researcher to specific questions. If results are corpus-driven, it means that researchers have first approached the text trying to detach themselves from any assumptions derived from previous specialised literature. Given this unprejudiced stance, corpus-driven studies have been compared to a ‘tabula rasa’ (Corpas-Pastor 2008: 52).
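The contrast between the two approaches can be sketched in a few lines of Python. The corpus-based step below queries the text for a preconceived category (nominalisations in -tion), while the corpus-driven step simply ranks recurrent units and lets patterns emerge from the data. The toy sentences are invented for illustration.

```python
from collections import Counter
import re

corpus = ("The examination of the evidence led to the identification of the "
          "suspect. The team began the examination of the scene.")

tokens = re.findall(r"[a-z]+", corpus.lower())

# Corpus-based: start from a hypothesis ('nominalisations in -tion are
# frequent') and query the corpus for that preconceived category.
nominalisations = [t for t in tokens if t.endswith("tion")]
print(Counter(nominalisations))

# Corpus-driven: start from the data and let recurrent units emerge,
# here by ranking all bigrams with no prior category in mind.
bigrams = Counter(zip(tokens, tokens[1:]))
print(bigrams.most_common(3))
```

In the corpus-based step the researcher would report a frequency for a category decided in advance; in the corpus-driven step the ranked bigrams themselves prompt the questions worth asking.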
Hanks’s (2004, 2013a) ongoing lexicographic project, the Pattern Dictionary of English Verbs (PDEV), adopts a corpus-driven approach to the lexicon to tease out linguistic norms; its entries correspond to the different semantic patterns of each word, manually annotated from corpora. The scholar has proposed the Theory of Norms and Exploitations (TNE), together with a specific working methodology in CL, namely Corpus Pattern Analysis (CPA), which have served as a framework for the qualitative study of the lexicon in this book (Chapter 8). As specified in Jezek and Hanks (2010: 8), this type of lexicographic task consists of ‘us[ing] corpus evidence to tease out the different patterns of use associated with each word in a language and to discover the relationship between meaning and patterns of usage’. Thus, the perception of norms in TNE intersects with the use given to norm in quantitative linguistic research, to mean prototypical, average use of the language, from which creative realisations may deviate to a greater or lesser extent (Muller 1973, 1992).
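In the spirit of CPA (though greatly simplified, and not Hanks’s actual annotation procedure), the sketch below groups uses of a verb by a coarse semantic type of its object, so that each group corresponds to a candidate pattern of use. The triples and the semantic type labels are invented for illustration.

```python
from collections import defaultdict

# Toy parsed corpus lines: (subject, verb, object) triples, as if extracted
# from dependency-parsed sentences.
triples = [
    ("detective", "fire", "gun"),
    ("company", "fire", "employee"),
    ("boss", "fire", "assistant"),
    ("officer", "fire", "weapon"),
]

# Invented mapping from object noun to a coarse semantic type.
semantic_type = {"gun": "Firearm", "weapon": "Firearm",
                 "employee": "Human", "assistant": "Human"}

# Group instances of the verb by (verb, object type): each key approximates
# one semantic pattern, e.g. 'fire + Firearm' vs 'fire + Human'.
patterns = defaultdict(list)
for subj, verb, obj in triples:
    patterns[(verb, semantic_type[obj])].append((subj, obj))

for pattern, instances in sorted(patterns.items()):
    print(pattern, instances)
```

Even this crude grouping separates two distinct senses of fire, which is the intuition behind sorting concordance lines into patterns before assigning meanings to them.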
Corpas-Pastor (2008: 218, my translation) explains that undertaking corpus-driven research does not mean to undertake objective research: ‘Former paradigms, reference frameworks, ideological positions, cultural traditions and key concepts, among others, are bound to influence the researcher’s perspective’. Manual annotation, as in the case of the PDEV, necessarily involves the annotator’s subjectivity (Church and Hanks 1990; Renau 2012). The point is, however, that in corpus-driven approaches, ‘intuitions are used to interpret data, but not to create it’ (Hanks 2004: 250). In other words, as posited by Sinclair (2004a: 6), ‘intuition can help in many ways in language research, in conjunction with other criteria of a more examinable nature’.
2.3. Norms in Translation Studies
The difficulty and subjectivity entailed in applying labels to corpora in accordance with semantic, syntactic or pragmatic criteria is strong evidence that languages are not fully measurable, mathematical entities, as emphasised by Pedersen (1997: 111): ‘[I]t seems to me an illusion that any student of the arts can limit himself to objective statements about his object of study’.
The author was probably referring here to scholars who, from the 1960s onwards but crucially in the 1990s, had advocated the application of scientific methods to the study of translations, to the detriment of a prescriptive approach to translation ‘geared primarily to formulating rules, norms or guidelines for the practice or evaluation of translation or to developing didactic instruments for translation training’ (Hermans 1999: 7). As summarised by Chaume (2013: 292), Descriptive Translation Studies (DTS) ‘is also interested in how systems influence each other, or the relationship between the source and the target cultures’.
In this new paradigm, Toury (1995) advocated DTS as a means to provide the discipline of Translation with rigorous approaches to its object of study, that is, translations. In the framework of DTS, research accounts for existing translated material systematically, supported by statistical measures, and without passing value judgements on it. Because it is based on observable facts, it has also been regarded as an empirical approach to translation, as opposed to the theory of translation (theoretical) or applied translation studies (aprioristic) (Rabadán 1991; Hermans 1999). Therefore, DTS shares with CL the fact that it is a specific methodological approach to data (Olohan 2004).
Toury (1995) makes a case for DTS as a valid scientific method within Holmes’s (2004/1988) pure TS, and further proposes the Theory of Norms for TS, explaining that translation ‘fulfil[s] a function allotted by a community’, in a specific sociocultural context. As posited by other scholars, such as Venuti (1995: 306), translation is primarily perceived as a social, cultural activity – the target text is ‘the site where a different culture emerges, where a reader gets a glimpse of “a cultural other”’.
Borrowing from the social sciences, DTS sees translation as a social, norm-governed activity, closely linked with the translator’s behaviour. Hermans (1999: 80) specifies that the term norm does not only refer to regular patterns of behaviour but also to ‘the underlying mechanism which accounts for this regularity’. His criticism of Toury’s proposal is useful to highlight two aspects of norms: (a) their social nature, and (b) their capacity to support predictions about future translational behaviour:
The mechanism is a psychological and social entity. It mediates between the individual and the collective, between the individual’s intentions, choices and actions, and collectively held beliefs, values and preferences. Norms bear on the interaction between people, more especially on the degree of coordination required for the continued, more or less harmonious coexistence with others in a group […]. Norms contribute to the stability of interpersonal relations by reducing uncertainty. They make behaviour more predictable by generalizing from past experience and making projections concerning similar types of situation in the future. They have a socially regulatory function. (Ibid.)
The author further criticises Toury for not considering norms as templates, as had been done before in Literary Studies (§2.1). In Hermans’ view, norms are like (generic) marks and, if literary scholars had considered genre a template for writers, Hermans (1999) sees translational norms as catalogues of types of solutions to which professional translators may resort. In a similar vein, Martínez Sierra (2011) underlines that the existence of behavioural norms entails the possibility for different norms (translation options, solutions) to exist. In this respect, the choice authors make ‘is meaningful against the background of the choices they rejected’ (Olohan 2004: 146).
Norms effectively take over the focus of interest that in TS had previously been occupied by the notion of equivalence. As noted by Pym (1995: 160), equivalence ‘always implied the possibility of non-equivalence, of non-translation or a text that was in some way not fully translational’, an approach that diametrically contradicts the basis of DTS, in which the study of actual translations by definition assumes that the object of study is fully translational. Indeed, for Toury (1995: 61), ‘it is norms that determine the (type and extent of) equivalence manifested by actual translations’.
Toury (1995) suggests that the aim and scope of researchers in DTS is to ascertain rules, norms and tendencies, from more to less established patterns or regularities, in translational behaviour. As put by Laviosa (2002: 79):
This type of analysis is performed not to evaluate the quality of a given translation, but to understand the decision-making process underlying the product of translation and to infer from it the translational norms adopted by the translator.
Toury (1995) lists a typology of norms that govern all the phases of the translation process, from assignment to submission. The focus in these pages is on operational, textual-linguistic norms and the way in which they have been identified to date by means of corpus research. In the scholar’s words, ‘[t]extual-linguistic norms […] govern the selection of material to formulate the target text in, or replace the original textual and linguistic material with’ (ibid.: 59). The sources for the study of textual-linguistic norms may be textual, that is, the text under examination, or extratextual, that is, statements made by people connected with the text in hand. In the CoPP this could include interviews with screenwriters or TV producers, as well as with subtitlers.
Since the 1990s, researchers especially interested in the study of translations through textual sources have adopted CL as a methodology for TS. Baker (1996: 177) uses the certainty-marker will to guarantee that a ‘computerised corpus will reveal regularities’ in translation practice. The idea of regularity is emphasised in Olohan (2004: 16), who argues that corpus research is not merely concerned with ‘observable’ data, but rather with ‘what is regular, typical and frequent’.
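The regularities such a computerised corpus reveals are typically inspected through KWIC (Key Word In Context) concordances, the standard display format of corpus tools. A minimal sketch of a KWIC concordancer follows; the sample sentence is invented, and real concordancers offer far richer functionality.

```python
def kwic(text, keyword, width=3):
    """Key Word In Context: return each hit with `width` words of
    context on either side, the keyword marked with brackets."""
    tokens = text.split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower().strip(".,") == keyword.lower():
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left} [{tok}] {right}")
    return lines

sample = "The corpus will reveal regularities and the corpus will guide the analysis"
for line in kwic(sample, "corpus"):
    print(line)
```

Aligning all hits of a word with their immediate contexts in this way is what lets recurrent patterns become visible to the analyst at a glance.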
Ultimately, the aim of translational corpus studies is to uncover ‘what is probable and typical in translation, and through this, [to interpret] what is unusual’ (ibid.). This approach has led to a substantial number of studies seeking to identify universals, that is, tendencies in translation that could be observed for all language combinations and in all translations. However, the notion of universals has been challenged repeatedly in the literature (Toury 2004; Corpas-Pastor 2008). According to Chesterman (2004: 11):
What ultimately matters is perhaps not the universals, which we can never finally confirm anyway, but new knowledge of the patterns, and patterns of patterns, which helps us to make sense of what we are looking at.
Scholars seem to agree that the main advantage of Translation Studies undertaken with corpora is the possibility of gathering more significant quantitative data (Martínez Sierra 2011; Díaz-Cintas 2008a). On the other hand, three shortcomings of corpus methodology are:
a) text alignment or KWIC concordances may hinder identification of suprasegmental phenomena (Malmkjær 1998; Laviosa 2002; Zabalbeascoa 2004);
b) it is difficult to draw the line between tendencies and norms, a distinction which depends on corpus representativeness (Tognini-Bonelli 2001; Corpas Pastor and Seghiri 2007, 2010, 2016; Martínez Sierra 2011);
c) for the specific case of AVT, ‘corpus investigations focusing exclusively on the verbal component are at risk of overlooking the importance of the other semiotic codes to the meaning making process in audiovisual products’ (Díaz-Cintas 2008a: 3).
By adopting a corpus-driven approach, the present study aims to uncover patterns of types of solutions, tendencies or norms that point to decision-making in the English–Spanish subtitling of a selection of TV shows.