Philosophical Foundations of Neuroscience, by P. M. S. Hacker
3.4 Replies to Objections
Reply to Ullman’s objection that neuroscientists are using the psychological vocabulary in a special technical sense
With regard to the misuse of the psychological vocabulary involved in ascribing psychological predicates to the brain, all the evidence points to the fact that neuroscientists are not using these terms in a special sense. Far from being new homonyms, the psychological expressions they use are being invoked in their customary sense, otherwise the neuroscientists would not draw the inferences from them which they do draw. When Crick asserts that ‘what you see is not what is really there; it is what your brain believes is there’, it is important that he takes ‘believes’ to have its normal connotations – that it does not mean the same as some novel term ‘believes*’. For it is part of Crick’s tale that the belief is the outcome of an interpretation based on previous experience and information (and not the outcome of an interpretation* based on previous experience* and information*). When Semir Zeki remarks that the acquisition of knowledge is a ‘primordial function of the brain’,24 he means knowledge, not knowledge* – otherwise he would not think that it is the task of future neuroscience to solve the problems of epistemology (but only, presumably, of epistemology*). Similarly, when Young talks of the brain’s containing knowledge and information, which is encoded in the brain ‘just as knowledge can be recorded in books or computers’,25 he means knowledge, not knowledge* – since it is knowledge and information, not knowledge* and information*, that can be recorded in books and computers. When Milner, Squire and Kandel talk of ‘declarative memory’, they explain that this phrase signifies ‘what is ordinarily meant by the term “memory”’,26 but then go on to declare that such memories, not memories*, are ‘stored in the brain’. That presupposes that it makes sense to speak of storing memories (in the ordinary sense of the word) in the brain (for detailed discussion of this questionable claim, see §6.2.2 below).
Reply to Ullman: David Marr on ‘representations’
The accusation of committing the mereological fallacy cannot be that easily rebutted. But Simon Ullman may appear to be on stronger grounds when it comes to talk of internal representations and symbolic representations (as well as maps) in the brain. If ‘representation’ does not mean what it ordinarily does, if ‘symbolic’ has nothing to do with symbols, then it may indeed be innocuous to speak of there being internal, symbolic representations in the brain. (And if ‘maps’ have nothing to do with atlases, but only with mappings, then it may also be innocuous to speak of there being maps in the brain.) It is extraordinarily ill advised to multiply homonyms, but it need involve no conceptual incoherence, as long as the scientists who use these terms thus do not forget that the terms do not have their customary meaning. Unfortunately, they typically do forget this and proceed to cross the new use with the old, generating incoherence. Ullman, defending Marr, insists (perfectly correctly) that certain brain events can be viewed as representations* of depth or orientation or reflectance;27 that is, that one can correlate certain neural firings with features in the visual field (denominating the former ‘representations*’ of the latter). But it is evident that this is not all that Marr meant. He claimed that numeral systems (roman or arabic numerals, binary notation) are representations. However, such notations have nothing to do with causal correlations, but with representational conventions. He claimed that ‘a representation for shape would be a formal scheme for describing some aspects of shape, together with rules that specify how the scheme is applied to any particular shape’,28 that a formal scheme is ‘a set of symbols with rules for putting them together’,29 and that ‘a representation, therefore, is not a foreign idea at all – we all use representations all the time. However, the notion that one can capture some aspect of reality by making a description of it using a symbol and that to do so can be useful seems to me to be a powerful and fascinating idea.’30 But the sense in which we ‘use representations all the time’, in which representations are rule-governed symbols, and in which they are used for describing things, is the semantic sense of ‘representation’ – not a new homonymical causal sense. Marr has fallen into a trap of his own making.31 He in effect conflates Ullman’s representations*, which are causal correlates, with linguistic representations, which are symbols or symbol systems with a syntax and meaning determined by conventions.
Reply to Ullman: Young on ‘maps’ and Frisby on ‘symbolic representations’
Similarly, it would be misleading, but otherwise innocuous, to speak of maps in the brain when what is meant is that certain features of the visual field can be mapped on to the firings of groups of cells in the ‘visual’ striate cortex. But then one cannot go on to say, as Young does, that the brain makes use of its maps in formulating its hypotheses about what is visible. So, too, it would be innocuous to speak of there being symbolic representations in the brain, as long as ‘symbolic’ has nothing to do with semantic meaning, but signifies only ‘natural meaning’ (as in ‘smoke means fire’). But then one cannot go on to say, as Frisby does, that ‘there must be a symbolic description in the brain of the outside world, a description cast in symbols which stand for the various aspects of the world of which sight makes us aware’.32 For this use of ‘symbol’ is evidently semantic. For while smoke means fire, inasmuch as it is a sign of fire (an inductively correlated indication), it is not a sign for fire. Smoke rising from a distant hillside is not a description of fire cast in symbols, and the firing of neurons in the ‘visual’ striate cortex is not a symbolic description of objects in the visual field, even though a neuroscientist may be able to infer facts about what is visible to an animal from his knowledge of what cells are firing in its ‘visual’ striate cortex. The firings of cells in V1 may be signs of a figure with certain line orientations in the animal’s visual field, but they do not stand for anything, they are not symbols, and they do not describe anything.
Reply to the second objection (Gregory) that, in ascribing psychological attributes to the brain, neuroscientists are not committing the mereological fallacy, but merely extending the psychological vocabulary analogically
The thought that neuroscientific usage, far from being conceptually incoherent, is innovative, extending the psychological vocabulary in novel ways, might seem to offer another way of side-stepping the accusation that neuroscientists’ descriptions of their discoveries commonly transgress the bounds of sense. It is indeed true that analogies are a source of scientific insight. The hydrodynamical analogy proved fruitful in the development of the theory of electricity, even though electrical current does not flow in the same sense as water flows, and an electrical wire is not a kind of pipe. The moot question is whether the application of the psychological vocabulary to the brain is to be understood as analogical.
The prospects do not look good. The application of psychological expressions to the brain is not part of a complex theory replete with functional, mathematical relationships expressible by means of quantifiable laws as are to be found in the theory of electricity. Something much looser seems to be needed. So, it is true that psychologists, following Freud and others, have extended the concepts of belief, desire and motive in order to speak of unconscious beliefs, desires and motives. When these concepts undergo such analogical extension, something new stands in need of explanation. The newly extended expressions no longer admit of the same combinatorial possibilities as before. They have a different, importantly related meaning, and one which requires explanation. The relationship between a (conscious) belief and an unconscious belief, for example, is not akin to the relationship between a visible chair and an occluded chair – it is not ‘just like a conscious belief only unconscious’, but more like the relationship between √1 and √–1. But when neuroscientists such as Sperry and Gazzaniga speak of the left hemisphere making choices, of its generating interpretations, of its knowing, observing and explaining things – it is clear from the sequel that these psychological expressions have not been given a new meaning. Otherwise it would not be said that a hemisphere of the brain is ‘a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing and emoting, all at a characteristically human level’.33
It is not semantic inertia that motivates our claim that neuroscientists are involved in various forms of conceptual incoherence. It is, rather, the acknowledgement of the requirements of the logic of psychological expressions. Psychological predicates are predicable only of an animal as a whole, not of its parts. No conventions have been laid down to determine what is to be meant by the ascription of such predicates to a part of an animal, in particular to its brain. So the application of such predicates to the brain or the hemispheres of the brain transgresses the bounds of sense. The resultant assertions are not false, for to say that something is false we must have some idea of what it would be for it to be true – in this case, we should have to know what it would be for the brain to think, reason, see and hear, etc., and to have found out that as a matter of fact the brain does not do so. But we have no such idea. Rather, the sentences in question lack sense. This does not mean that they are silly or stupid. It means that no sense has been assigned to such forms of words, and that, accordingly, they say nothing at all, even though it looks as if they do.
Reply to Blakemore’s objection that applying psychological predicates to the brain is merely metaphorical
The third methodological objection was raised by Colin Blakemore. Of Wittgenstein’s remark that ‘only of a living human being and what resembles (behaves like) a living human being can one say: it has sensations; it sees; is blind; hears; is deaf; is conscious or unconscious’, Blakemore observes that it ‘seems trivial, maybe just plain wrong’. Addressing the accusation that neuroscientists’ talk of there being ‘maps’ in the brain is pregnant with possibilities of confusion (since all that can be meant is that one can map, for example, aspects of items in the visual field on to the firing of cells in the ‘visual’ striate cortex), Blakemore notes that there is overwhelming evidence for ‘topographic patterns of activity’ in the brain.
Since Hughlings Jackson’s time, the concept of functional sub-division and topographic representation has become a sine qua non of brain research. The task of charting the brain is far from complete but the successes of the past make one confident that each part of the brain (and especially the cerebral cortex) is likely to be organized in a spatially ordered fashion. Just as in the decoding of a cipher, the translation of Linear B or the reading of hieroglyphics, all that we need to recognize the order in the brain is a set of rules – rules that relate the activity of the nerves to events in the outside world or in the animal’s body.34
To be sure, the term ‘representation’ here signifies merely systematic causal connectedness. That is innocuous enough. But it must not be confused with the sense in which a linguistic item (e.g. a description) can be said to represent something fairly or unfairly, a map to represent that of which it is a map, or a painting to represent that of which it is a painting. Nevertheless, such ambiguity in the use of ‘representation’ is perilous, since it is likely to lead to a confusion of the distinct senses. Just how confusing it can be is evident in Blakemore’s further observations:
Faced with such overwhelming evidence for topographic patterns of activity in the brain it is hardly surprising that neurophysiologists and neuroanatomists have come to speak of the brain having maps, which are thought to play an essential part in the representation and interpretation of the world by the brain, just as the maps of an atlas do for the reader of them. The biologist J. Z. Young writes of the brain having a language of a pictographic kind: ‘What goes on in the brain must provide a faithful representation of events outside it, and the arrangements of the cells in it provide a detailed model of the world. It communicates meanings by topographical analogies.’35 But is there a danger in the metaphorical use of such terms as ‘language’, ‘grammar’, and ‘map’ to describe the properties of the brain? … I cannot believe that any neurophysiologist believes that there is a ghostly cartographer browsing through the cerebral atlas. Nor do I think that the employment of common language words (such as map, representation, code, information and even language) is a conceptual blunder of the kind [imagined]. Such metaphorical imagery is a mixture of empirical description, poetic license and inadequate vocabulary.36
Whether there is any danger in a metaphorical use of words depends on how clear it is that it is merely metaphorical, and on whether the author remembers that that is all it is. Whether neuroscientists’ ascriptions to the brain of attributes that can be applied literally only to an animal as a whole are actually merely metaphorical (metonymical or synecdochical) is very doubtful. Of course, neurophysiologists do not think that there is a ‘ghostly cartographer’ browsing through a cerebral atlas – but they do think that the brain makes use of the maps. According to Young, the brain constructs hypotheses, and it does so on the basis of this ‘topographically organized representation’.37 The moot question is: what inferences do neuroscientists draw from their claim that there are maps or representations in the brain, or from their claim that the brain contains information, or from talk (J. Z. Young’s talk) of ‘languages of the brain’? These alleged metaphorical uses are so many banana skins in the pathway of their user. He need not step on them and slip, but he probably will.
Blakemore’s confusion
Just how easy it is for confusion to ensue from what is alleged to be harmless metaphor is evident in the paragraph of Blakemore quoted above. For while it may be harmless to talk of ‘maps’ – that is, of mappings of features of the perceptual field on to topographically related groups of cells that are systematically responsive to such features – it is anything but harmless to talk of such ‘maps’ as playing ‘an essential part in the representation and interpretation of the world by the brain, just as the maps of an atlas do for the reader of them’ (our italics). In the first place, it is not clear what sense is to be given to the term ‘interpretation’ in this context. For it is by no means evident what could be meant by the claim that the topographical relations between groups of cells that are systematically related to features of the perceptual field play an essential role in the brain’s interpreting something. To interpret, literally speaking, is to explain the meaning of something, or to take something that is ambiguous to have one meaning rather than another. But it makes no sense to suppose that the brain explains anything, or that it apprehends something as meaning one thing rather than another. If we look to J. Z. Young to find out what he had in mind, what we find is the claim that it is on the basis of such maps that the brain ‘constructs hypotheses and programs’ – and this only gets us deeper into the morass.
More importantly, whatever sense we can give to Blakemore’s claim that ‘brain maps’ (which are not actually maps) play an essential part in the brain’s ‘representation and interpretation of the world’, it cannot be ‘just as the maps of an atlas do for the reader of them’. For a map is a pictorial representation, made in accordance with conventions of mapping and rules of projection. Someone who can read an atlas must know and understand these conventions, and read off, from the maps, the features of what is represented. But the ‘maps’ in the brain are not maps, in this sense, at all. The brain is not akin to the reader of a map, since it cannot be said to know any conventions of representation or methods of projection or to read anything off the topographical arrangement of firing cells in accordance with a set of conventions. For the cells are not arranged in accordance with conventions at all, and the correlation between their firing and features of the perceptual field is not a conventional but a causal one.38
Blakemore’s suggestion that neuroscientists use metaphorical and figurative language because of the poverty of the English language and lack of adequate concepts is a point which we shall examine later (§17.2).39
Reply to the fourth objection (Searle)
Searle indeed drew our attention to an interesting conceptual complexity, which is worth disentangling if one is to avoid his confusions. It is a subtle matter, involving the important distinction between the body a human being is and the body a human being has. A human being is a sentient, living spatio-temporal substance (Homo sapiens) consisting of flesh and bones, with the power of self-movement as well as intellectual powers of reason and will. We also use the idiom of having a body (e.g. one may have a beautiful, lithe, athletic, powerful, aged, frail body), which idiom is used to speak of somatic characteristics of the human being we are.40 Everything true of the body we have is true of the body we are, but not vice versa. If my body is dirty, covered with scratches, and sunburnt, then I am dirty, covered with scratches and sunburnt. But if I am thinking of relativity theory, remembering last year’s holiday, and wondering what to do, my body – the body I have – is neither thinking, remembering, nor wondering, since these are not somatic properties.
Searle suggested that if ascribing psychological attributes to the brain really were a mereological error, then it would vanish if one ascribed them ‘to the rest of the system’ to which the brain belongs. The ‘rest of the system’, he held, is the body that a human being has. He observed that we do not ascribe psychological attributes to our body. With the striking exception of verbs of sensation (e.g. ‘My body aches/itches/hurts all over’, as well as ‘You have hurt my foot’), this is correct. We do not say ‘My body perceives, thinks and knows’, nor do we say ‘My body has a pain in its foot’, let alone ‘My body has a pain in my foot’ or ‘You have given my foot a pain’. But the ‘system’ to which the human brain can be said to belong is the human being. The human brain is part of a human being, just as the canine brain is part of a dog. My brain, the brain I have, is as much a part of me – of the living human being that I am – as my legs and arms are parts of me.
Human beings are (actually or potentially) persons, that is, they are intelligent, language-using animals that are self-conscious, possess knowledge of good and evil, are free and responsible for their deeds and have rights and duties. To be a person is, roughly speaking, to possess such abilities as qualify one for the status of a moral agent. It is striking that we would probably not say that the brain is part of the person, but rather that it is a part of the human being who is the person or that it is part of the person’s body. To have a brain, one might say, is a somatic feature of a human being. Interestingly, we would not hesitate to say that Jack’s brain is part of Jack, part of this human being, just as his arms and legs are parts of Jack. Why this hesitation or reluctance to aver that the brain is a part of a person? Perhaps because ‘person’ is, as Locke stressed, a ‘forensic term’, but not a substance-name like ‘cat’, ‘dog’ and ‘human being’. So, if we use the term ‘person’ in such contexts as this, we indicate thereby that we are concerned primarily with human beings qua possessors of those characteristics that render them persons, in relative disregard of corporeal characteristics. Perhaps this analogy will help: Paris is a part of France, France belongs to the European Union, but Paris does not. That does not prevent Paris from being a part of France. So too, Jack’s being a person does not prevent his brain being a part of him.
Reply to the fifth objection (Dennett) that there is no mereological fallacy, but rather a confusion between mechanical processes of the brain and non-mechanical mental processes
The mereological fallacy of illicitly attributing properties of wholes to their constituent parts has nothing to do with the distinction (or distinctions) between mechanical and non-mechanical processes. It is the bracket clock as a whole that keeps time, not its fusée, although the process of keeping time is wholly mechanical. It is the aeroplane as a whole (machine) that flies, not its engines, although the process of flying is wholly mechanical. It is a mereological mistake to contend that fusées keep time or that engines fly. Moreover, as we noted, verbs of sensation, such as ‘hurts’, ‘itches’, ‘tickles’, do apply to parts of an animal (e.g. ‘My throat felt sore’, ‘His leg hurt’, ‘Her head ached’), even though they are non-mechanical. So the applicability or inapplicability of psychological or mental predicates to parts of an animal has nothing to do with what is and what is not ‘mechanical’.
One might concede that it is mistaken to ascribe certain predicates of wholes to parts of the whole, but nevertheless insist that it is fruitful to extend the psychological vocabulary from human beings and other animals to (a) computers (that are ‘mechanical’) and (b) parts of the brain (that are ‘sub-personal’).41 Note the difference between (a) and (b). Attributing psychological properties to machines (in the context of a discussion of whether machines can think) is mistaken, but does not involve a mereological fallacy of any kind. Attributing psychological properties to the brain or its parts is mistaken and does involve a mereological fallacy. Taking the brain to be a computer42 and ascribing psychological properties to it or its parts is therefore doubly mistaken.
It is true that we do, in casual parlance, say that computers remember, that they search their memory, that they calculate, and sometimes, when they take a long time, we jocularly say that they are thinking things over. But this is not a literal application of the terms ‘remember’, ‘calculate’ and ‘think’. Computers are devices designed to fulfil certain functions for us. We can store information on a computer, as we can in a filing cabinet. But filing cabinets cannot remember anything, and neither can computers. We use computers to produce the results of a calculation, just as we used to use slide-rules or cylindrical mechanical calculators. Those results are produced without anyone or anything literally calculating – as is evident in the cases of a slide-rule or mechanical calculator. In order literally to calculate, one must have a grasp of a wide range of concepts, follow a multitude of rules that one must know, and understand a variety of operations. One must view the results of one’s calculation as warranted by the premises. Computers do not and cannot.