6. WHERE DO THE RULES COME FROM?

In this chapter, we’ve been talking about our subconscious knowledge of syntactic rules, but we haven’t dealt yet with how we get this knowledge. This is sort of a side issue, but it may affect the shape of our theory. If we know how children acquire their rules, then we are in a better position to develop a proper formalization of them. The way in which children develop knowledge is an important question in cognitive science. The theory of generative grammar makes some very specific (and very surprising) claims about this.

6.1 Learning vs. Acquisition

One of the most common misconceptions about language is the idea that children and adults “learn” languages. Recall that the basic kind of knowledge we are talking about here is subconscious knowledge. When producing a sentence, you don’t consciously think about where to put the subject, where to put the verb, etc. Your subconscious language faculty does that for you. Cognitive scientists make a distinction in how we get conscious and subconscious knowledge. Conscious knowledge (like the rules of algebra, syntactic theory, principles of organic chemistry or how to take apart a carburetor) is learned. A lot of subconscious knowledge, like how to speak or the ability to visually identify discrete objects, is acquired. In part, this explains why classes in the formal grammar of a foreign language often fail abysmally to train people to speak those languages. By contrast, being immersed in an environment where you can subconsciously acquire a language is much more effective. In this text we’ll be primarily interested in how people acquire the rules of their language. Not all rules of grammar are acquired, however. Some facts about i-language seem to be built into our brains, or innate.

You now have enough information to answer GPS6.

6.2 Innateness: Parts of i-Language as Instincts

If you think about the other types of knowledge that are subconscious, you’ll see that many of them (for example, the ability to walk) are built directly into our brains – they are instincts. No one had to teach you to walk (despite what your parents might think!). Kids start walking on their own. Walking is an instinct. Probably the most controversial claim that Noam Chomsky has made is that parts of i-language are also an instinct. That is, many parts of i-language are built in, or innate. Chomsky claims that much of i-language is an ability hard-wired into our brains.

Obviously, particular languages (e-languages) are not innate. It is never the case that a child of Slovak parents growing up in North America who has never been spoken to in Slovak grows up speaking Slovak. They’ll speak English (or whatever other language is spoken around them). So, on the surface it seems crazy to claim that language is an instinct. But when we are talking about i-languages, there are very good reasons to believe that a human capacity for language (the Human Language Capacity) is innate. We call the innate parts of the HLC Universal Grammar (or UG).

6.3 The Logical Problem of Language Acquisition

What follows is a fairly technical proof of the idea that parts of our linguistic system are at least plausibly construed as an innate, in-built system. If you aren’t interested in this proof (and the problems with it), then you can reasonably skip ahead to section 6.4.

The argument in this section is that a productive system like the rules of Language probably could not be learned or acquired. Given certain assumptions, infinite systems are in principle both unlearnable and unacquirable. Since we’ll show that syntax is an infinite system, we shouldn’t have been able to acquire it. So it follows that it is built in. The argument presented here is based on an unpublished paper by Alec Marantz, though the core argument dates back to at least Chomsky (1965).

First, here’s a sketch of the proof, which takes the classical form of an argument by modus ponens:

Premise (i): Syntax is a productive, recursive and infinite system.

Premise (ii): Rule-governed infinite systems are unacquirable.

Conclusion: Therefore syntax is an unacquirable system. Since we have such a system, it follows that at least parts of syntax are innate.

There are parts of this argument that are very controversial. In the challenge problem sets at the end of this chapter you are invited to think very critically about the form of this proof. Challenge Problem Set CPS8 considers the possibility that premise (i) is false (but hopefully you will conclude that, despite the argument given in the problem set, the idea that Language is productive and infinite is correct). Premise (ii) is more dubious, and is the topic of Challenge Problem Set CPS9. Here, in the main body of the text, I will give you the classic versions of the support for these premises, without criticizing them. You are invited to be skeptical and critical of them when you do the Challenge Problem Sets.

Let’s start with premise (i): i-language is a productive system. That is, you can produce and understand sentences you have never heard before. For example, I can practically guarantee that you have never heard or read the following sentence:

18) The dancing chorus-line of elephants broke my television set.

The magic of syntax is that it can generate forms that have never been produced before. Another example of this productive quality lies in what is called recursion. It is possible to utter a sentence like (19):

19) Rosie reads magazine articles.

It is also possible to put this sentence inside another sentence, like (20):

20) I think [Rosie reads magazine articles].

Similarly, you can put this larger sentence inside of another one:

21) Drew believes [I think [Rosie reads magazine articles]].

And, of course, you can put this bigger sentence inside of another one:

22) Dana doubts that [Drew believes [I think [Rosie reads magazine articles]]].

and so on, and so on ad infinitum. It is always possible to embed a sentence inside of a larger one. This means that i-language is a productive (probably infinite) system. There are no limits on what we can talk about. Other examples of the productivity of syntax can be seen in the fact that you can infinitely repeat adverbs (23) and you can infinitely add coordinated nouns to a noun phrase (24):

23)

a) a very big peanut
b) a very very big peanut
c) a very very very big peanut
d) a very very very very big peanut
etc.

24)

a) Dave left
b) Dave and Alina left
c) Dave, Dan, and Alina left
d) Dave, Dan, Erin, and Alina left
e) Dave, Dan, Erin, Jaime, and Alina left
etc.

It follows that for every grammatical sentence of English, you can find a longer one (based on one of the rules of recursion, adverb repetition, or coordination). This means that the set of sentences the syntax can generate is at least countably infinite. This premise is relatively uncontroversial (however, see the discussion in Challenge Problem Set CPS8).
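The recursive step can be made concrete computationally. Here is a minimal sketch in Python (mine, not the book’s; the embedding frames and the function name are invented for illustration) showing how a single rule that re-applies to its own output generates an unbounded set of sentences, as in (19)–(22):

# A sketch of recursion: embed a sentence inside a larger sentence,
# then feed the result back into the same rule. The frames are invented.

def embed(sentence, depth):
    """Wrap sentence inside depth layers of clausal embedding."""
    frames = ["I think [{}]", "Drew believes [{}]", "Dana doubts that [{}]"]
    for i in range(depth):
        sentence = frames[i % len(frames)].format(sentence)
    return sentence

for d in range(4):
    print(embed("Rosie reads magazine articles", d))
# depth 0 reproduces (19); depths 1-3 reproduce (20)-(22); nothing stops
# the loop from going deeper, so there is no longest sentence.

Since the rule never stops applying, every output can be made longer, which is all the infinity claim requires.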

Let’s now turn to premise (ii), the idea that infinite systems are unlearnable. In order to make this more concrete, let us consider an algebraic treatment of a linguistic example. Imagine that the task of a child is to determine the rules by which her language is constructed. Further, let’s simplify the task, and say a child simply has to match up situations in the real world with utterances she hears. So upon hearing the utterance the cat spots the kissing fishes, she identifies it with an appropriate situation in the context around her (as represented by the picture).

25) “the cat spots the kissing fishes”


Her job, then, is to correctly match up the sentence with the situation. More crucially, she has to make sure that she does not match it up with all the other possible alternatives, such as the things going on around her (like her older brother kicking the furniture or her father making breakfast for her, etc.). This matching of situations with expressions is a kind of mathematical relation (or function) that maps sentences onto particular situations. Another way of putting it is that she has to figure out the rule(s) that decode(s) the meaning of the sentences. It turns out that this task is at least very difficult, if not impossible.

Let’s make this even more abstract to get at the mathematics of the situation. Assign each sentence some number. This number will represent the input to the rule. Similarly, we will assign each situation a number. The function (or rule) modeling language acquisition maps from the set of sentence numbers to the set of situation numbers. Now let’s assume that the child has the following set of inputs and correctly matched situations (perhaps explicitly pointed out to her by her parents). The x value represents the sentence she hears. The y is the number correctly associated with the situation.

26)

Sentence (input)    Situation (output)
x                   y
1                   1
2                   2
3                   3
4                   4
5                   5

Given this input, what do you suppose the output will be where x = 6?

6                   ?

Most people will jump to the conclusion that the output will be 6 as well. That is, they assume that the function (the rule) mapping between inputs and outputs is x = y. But what if I were to tell you that in the hypothetical situation I envision here, the correct answer is situation number 126? The rule that generated the table in (26) is actually:

27) [(x – 5)*(x – 4)*(x – 3)*(x – 2)*(x – 1)] + x = y

With this rule, any input less than or equal to 5 gives an output equal to the input, because one of the bracketed factors is zero and the whole product term vanishes; but any input greater than 5 gives some much larger number.
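To make the point concrete, here is a small Python sketch (mine, not the text’s) implementing the rule in (27) alongside the data in (26):

# The rule in (27). For x <= 5 one bracketed factor is zero, so the
# product vanishes and y = x, matching the table in (26) exactly;
# for x > 5 the polynomial term takes over.

def rule(x):
    return (x - 5) * (x - 4) * (x - 3) * (x - 2) * (x - 1) + x

for x in range(1, 8):
    print(x, rule(x))
# prints 1 1, 2 2, 3 3, 4 4, 5 5 -- then 6 126 and 7 727,
# where the guessed rule x = y would have predicted 6 6 and 7 7.

Infinitely many other functions also agree with the five observed data points, so no finite sample can single out the right one.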

When you hypothesized the rule was x = y, you didn’t have all the crucial information; you only had part of the data. This seems to mean that if you hear only the first five pieces of data in our table then you won’t get the rule, but if you learn the sixth you will figure it out. Is this necessarily the case? Unfortunately not: even if you add a sixth line, you have no way of being sure that you have the right function until you have heard all the possible inputs. The important information might be in the sixth line, but it might also be in the 7,902,821,123,765th sentence that you hear. You have no way of knowing for sure that you have heard all the relevant data until you have heard it all. In an infinite system, you can’t hear all the data, even if you were to hear one new sentence every 10 seconds for your entire life. If we assume the average person lives to be about 75 years old, hears one new sentence every 10 seconds, never sleeps, and we ignore leap years, they’d still have heard only about 236,520,000 sentences over their lifetime. This is a much smaller number than infinity. Despite this poverty of input, by the age of 5 most children are fairly confident in their use of complicated syntax. Productive systems are (possibly) unlearnable, because you never have enough input to be sure you have all the relevant facts. This is called the logical problem of language acquisition.
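The lifetime figure is simple arithmetic; here is a quick check under the assumptions just stated (75 years, one new sentence every 10 seconds, no sleep, no leap years):

# Back-of-the-envelope check of the lifetime input estimate.
seconds_per_year = 365 * 24 * 60 * 60        # 31,536,000
lifetime_sentences = 75 * seconds_per_year // 10
print(lifetime_sentences)                    # 236,520,000

A quarter of a billion sentences is a lot, but it is still a vanishingly small sample of an infinite system.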

Generative grammar gets around this logical puzzle by claiming that the child acquiring English, Irish, or Yoruba has some help: a flexible blueprint to use in constructing her knowledge of language, called Universal Grammar. Universal Grammar restricts the number of possible functions that map between situations and utterances, thus making language learnable.

You now have enough information to try CPS8 & 9.

6.4 Other Arguments for UG

The evidence for UG doesn’t rely on the logical problem alone, however. There are many other arguments that support the hypothesis that at least a certain amount of language is built in.

An argument that is directly related to the logical problem of language acquisition discussed above has to do with the fact that we know things about the grammar of our language that we couldn’t possibly have learned. Start with the data in (28). A child might plausibly have heard sentences of these types (the underline represents the place where the question word who might start out – that is, as either the object or the subject of the verb will question):

28)

a) Who do you think that Siobhan will question _____ first?
b) Who do you think Siobhan will question _____ first?
c) Who do you think _____ will question Seamus first?

The child has to form a hypothesis about the distribution of the word that in English sentences. One conclusion consistent with these observed data is that the word that in English is optional: you can either have it or not. Unfortunately, this conclusion is not accurate. Consider the fourth sentence in the paradigm in (28). This sentence is the same as (28c) but with a that:

d) *Who do you think that _____ will question Seamus first?

It appears as if that is only optional when the question word (who in this case) starts in object position (as in 28a and b). It is obligatorily absent when the question word starts in subject position (as in 28c and d) (don’t worry about the details of this generalization). What is important to note is that no one has ever taught you that (28d) is ungrammatical. Nor could you have come to that conclusion on the basis of the data you’ve heard. The logical hypothesis based on the data in (28a–c) predicts sentence (28d) to be grammatical. There is nothing in the input a child hears that would lead them to the conclusion that (28d) is ungrammatical, yet every English-speaking child knows it is. One solution to this conundrum is that we are born with the knowledge that sentences like (28d) are ungrammatical. This kind of argument is often called the poverty of the stimulus argument for UG.

Most parents raising a toddler will swear up and down that they are teaching their child to speak and that they actively engage in instructing their child in the proper form of the language. The claim that overt instruction by parents plays any role in language development is easily falsified. The evidence from the experimental language acquisition literature is very clear: parents, despite their best intentions, do not, for the most part, correct ungrammatical utterances by their children. More generally, they correct the content rather than the form of their child’s utterances (see for example the extensive discussion in Holzman 1997).

29) (from Marcus et al. 1992)

Adult: Where is that big piece of paper I gave you yesterday?

Child: Remember? I writed on it.

Adult: Oh that’s right, don’t you have any paper down here, buddy?

When a parent does try to correct a child’s sentence structure, it is more often than not ignored by the child:

30) (from Pinker 1995: 281 – attributed to Martin Braine)

Child: Want other one spoon, Daddy.

Adult: You mean, you want the other spoon.

Child: Yes, I want other one spoon, please, Daddy.

Adult: Can you say “the other spoon”?

Child: Other … one … spoon.

Adult: Say “other”.

Child: Other.

Adult: “Spoon”.

Child: Spoon.

Adult: “Other … spoon”.

Child: Other … spoon. Now give me other one spoon?

This humorous example is typical of parental attempts to “instruct” their children in language. When these attempts do occur, they fail. However, children still acquire language in the face of a complete lack of instruction. Perhaps one of the most convincing explanations for this is UG. In the problem set part of this chapter, you are asked to consider other possible explanations and evaluate which are the most convincing.

There are also typological arguments for the existence of an innate language faculty. All the languages of the world share certain properties (for example, they all have subjects and predicates – other examples will be seen throughout the rest of this book). These properties are called universals of Language. If we assume UG, then the explanation for these language universals is straightforward – they exist because all speakers of human languages share the same basic innate materials for building their language’s grammar. In addition to sharing many similar characteristics, recent research into Language acquisition has begun to show that there is a certain amount of consistency cross-linguistically in the way children acquire Language. For example, children seem to go through the same stages and make the same kinds of mistakes when acquiring their language, no matter what their cultural or linguistic background.

Derek Bickerton (1984) has noted the fact that creole languages have a lot of features in common with one another, even when they come from very diverse places in the world and spring forth from unrelated languages. For example, they all have SVO order; they all lack non-specific indefinite articles; they all use modals or particles to indicate tense, mood, and aspect; and they all have limited verbal inflection, among many other such similarities. Furthermore, these properties are ones that are found in the speech of children acquiring non-creole languages. Bickerton hypothesizes that these properties are a function of an innate language bioprogram, an idea similar to Chomsky’s Universal Grammar.

Finally, there are a number of biological arguments in favor of UG. As noted above, language seems to be both human-specific and pervasive across the species. All humans, unless they have some kind of physical impairment, seem to have language as we know it. This points towards it being a genetically endowed instinct. Additionally, research from neurolinguistics seems to point towards certain parts of the brain being linked to specific linguistic functions.

With very few exceptions, most generative linguists believe that some part of i-language is innate. What is controversial is how much is innate and whether the innateness is specific to language or follows from more general innate cognitive functions. We leave these questions unanswered here.

You now have enough information to try GPS7 & 8 and CPS10.
