Introduction
The term “bioethics” is often mistakenly ascribed to the biologist Van Rensselaer Potter, who used it in the 1970s to describe his proposal that we need an ethic that can incorporate our obligations, not just to other humans, but to the biosphere as a whole.1 However, a historically correct account should probably give credit for coining the term to Fritz Jahr, a German Protestant pastor, who in 1927 published an article called “Bio‐Ethics: A Review of the Ethical Relationships of Humans to Animals and Plants.”2 Jahr tried to establish “bioethics” both as a discipline and as a moral principle. Although the term is still occasionally used in the sense of an ecological ethic, it is now much more commonly used in the narrower sense of the study of ethical issues arising from the biological and medical sciences. So understood, bioethics has become a specialized, although interdisciplinary, area of study. The essays included in this book give an indication of the range of issues which fall within its scope – but it is only an indication. There are many other issues that we simply have not had the space to cover.
Bioethics can be seen as a branch of ethics, or, more specifically, of applied ethics. For this reason some understanding of the nature of ethics is an essential preliminary to any serious study of bioethics. The remainder of this introduction will seek to provide that understanding.
One question about the nature of ethics is especially relevant to bioethics: to what extent is reasoning or argument possible in ethics? Many people assume without much thought that ethics is subjective. The subjectivist holds that what ethical view we take is a matter of opinion or taste that is not amenable to argument. But if ethics were a matter of taste, why would we even attempt to argue about it? If Helen says “I like my coffee sweetened,” whereas Paul says “I like my coffee unsweetened,” there is not much point in Helen and Paul arguing about it. The two statements do not contradict each other. They can both be true. But if Helen says “Doctors should never assist their patients to die” whereas Paul says “Sometimes doctors should assist their patients to die,” then Helen and Paul are disagreeing, and there does seem to be a point in their trying to argue about the issue of physician‐assisted suicide.
It seems clear that there is some scope for argument in ethics. If I say “It is always wrong to kill a human being” and “Abortion is not always wrong,” then I am committed to denying that abortion kills a human being. Otherwise I have contradicted myself, and in doing so I have not stated a coherent position at all. So consistency, at least, is a requirement of any defensible ethical position, and thus sets a limit to the subjectivity of ethical judgments. The requirement of factual accuracy sets another limit. In discussing issues in bioethics, the facts are often complex. But we cannot reach the right ethical decisions unless we are well‐informed about the relevant facts. In this respect ethical decisions are unlike decisions of taste. We can enjoy a taste without knowing what we are eating; but if we assume that it is wrong to resuscitate a terminally ill patient against her wishes, then we cannot know whether an instance of resuscitation was morally right or wrong without knowing something about the patient’s prognosis and whether the patient has expressed any wishes about being resuscitated. In that sense, there is no equivalent in ethics to the immediacy of taste.
Ethical relativism, sometimes also known as cultural relativism, is one step away from ethical subjectivism, but it also severely limits the scope of ethical argument. The ethical relativist holds that it is not individual attitudes that determine what is right or wrong, but the attitudes of the culture in which one lives. Herodotus tells how Darius, King of Persia, summoned the Greeks from the western shores of his kingdom before him, and asked them how much he would have to pay them to eat their fathers’ dead bodies. They were horrified by the idea and said they would not do it for any amount of money, for it was their custom to cremate their dead. Then Darius called upon Indians from the eastern frontiers of his kingdom, and asked them what would make them willing to burn their fathers’ bodies. They cried out and asked the King to refrain from mentioning so shocking an act. Herodotus comments that each nation thinks its own customs best. From here it is only a short step to the view that there can be no objective right or wrong, beyond the bounds of one’s own culture. This view found increased support in the nineteenth century as Western anthropologists came to know many different cultures, and were impressed by ethical views very different from those that were standardly taken for granted in European society. As a defense against the automatic assumption that Western morality is superior and should be imposed on “savages,” many anthropologists argued that, since morality is relative to culture, no culture can have any basis for regarding its morality as superior to any other culture.
Although the motives with which anthropologists put this view forward were admirable, they may not have appreciated the implications of the position they were taking. The ethical relativist maintains that a statement like “It is good to enslave people from another tribe if they are captured in war” means simply “In my society, the custom is to enslave people from another tribe if they are captured in war.” Hence if one member of the society were to question whether it really was good to enslave people in these circumstances, she could be answered simply by demonstrating that this was indeed the custom – for example, by showing that for many generations it had been done after every war in which prisoners were captured. Thus there is no way for moral reformers to say that an accepted custom is wrong – “good” just means “in accordance with an accepted custom.”
On the other hand, when people from two different cultures disagree about an ethical issue, then according to the ethical relativist there can be no resolution of the disagreement. Indeed, strictly there is no disagreement. If the apparent dispute were over the issue just mentioned, then one person would be saying “In my country it is the custom to enslave people from another tribe if they are captured in war” and the other person would be saying “In my country it is not the custom to allow one human being to enslave another.” This is no more a disagreement than such statements as “In my country people greet each other by rubbing noses” and “In my country people greet each other by shaking hands.” If ethical relativism is true, then it is impossible to say that one culture is right and the other is wrong. Bearing in mind that some cultures have practiced slavery, or the burning of widows on the funeral pyre of their husbands, this is hard to accept.
A more promising alternative to both ethical subjectivism and cultural relativism is universal prescriptivism, an approach to ethics developed by the Oxford philosopher R. M. Hare. Hare argues that the distinctive property of ethical judgments is that they are universalizable. In saying this, he means that if I make an ethical judgment, I must be prepared to state it in universal terms, and apply it to all relevantly similar situations. By “universal terms” Hare means those terms that do not refer to a particular individual. Thus a proper name cannot be a universal term. If, for example, I were to say “Everyone should do what is in the interests of Kim Kardashian,” I would not be making a universal judgment, because I have used a proper name. The same would be true if I were to say that everyone must do what is in my interests, because the personal pronoun “my” is here used to refer to a particular individual, myself.
It might seem that ruling out particular terms in this way does not take us very far. After all, one can always describe oneself in universal terms. Perhaps I can’t say that everyone should do what is in my interests, but I could say that everyone must do whatever is in the interests of people who … and then give a minutely detailed description of myself, including the precise location of all my freckles. The effect would be the same as saying that everyone should do what is in my interests, because there would be no one except me who matches that description. Hare meets this problem by saying that to prescribe an ethical judgment universally means being prepared to prescribe it for all possible circumstances, including hypothetical ones. So if I were to say that everyone should do what is in the interests of a person with a particular pattern of freckles, I must be prepared to prescribe that in the hypothetical situation in which I do not have this pattern of freckles, but someone else does, I should do what is in the interests of that person. Now of course I may say that I should do that, since I am confident that I shall never be in such a situation, but this simply means that I am being dishonest. I am not genuinely prescribing the principle universally.
The effect of saying that an ethical judgment must be universalizable for hypothetical as well as actual circumstances is that whenever I make an ethical judgment, I can be challenged to put myself in the position of the parties affected, and see if I would still be able to accept that judgment. Suppose, for example, that I own a small factory and the cheapest way for me to get rid of some waste is to pour it into a nearby river. I do not take water from this river, but I know that some villagers living downstream do and the waste may make them ill. If I imagine myself in the hypothetical situation of being one of the villagers, rather than the factory‐owner, I would not accept that the profits of the factory‐owner should outweigh the risk of adverse effects on my health and that of my children. Hence I cannot claim that I am ethically justified in polluting the river.
In this way Hare’s approach introduces an element of reasoning into ethical deliberation. For Hare, however, since universalizability is part of the logic of moral language, an amoralist can avoid it by simply avoiding making any ethical judgments. More recently, several prominent moral philosophers, among them Thomas Nagel, T. M. Scanlon, and Derek Parfit, have defended the view that we have objective reasons for action. Ethical judgments, in their view, are not statements of fact, but can nevertheless be true or false, in the same way that the truths of logic, or mathematics, are not statements of fact, but can be true or false. It is true, they would argue, that if someone is in agony, and we can relieve that agony, we have a reason for doing so. If we can relieve it at no cost, or a very low cost, to ourselves or anyone else, we will have a conclusive reason for relieving it, and it will be wrong not to do so.
The questions we have been discussing so far are questions about ethics, rather than questions within ethics. Philosophers call this “metaethics” and distinguish it from “normative ethics” in which we discuss what we ought to do. Normative ethics can also be divided into two parts, ethical theory and applied ethics. As we noted at the beginning of this introduction, bioethics is an area of applied ethics. Ethical theory, on the other hand, deals with broad ethical theories about how we ought to live and act, and we will now outline some of the more important of these theories.
Consequentialism is the view that the rightness of an action depends on its consequences. The best‐known form of consequentialism is utilitarianism, developed in the late eighteenth century by Jeremy Bentham and popularized in the nineteenth century by John Stuart Mill. They held that an action is right if it leads to a greater surplus of happiness over misery than any possible alternative, and wrong if it does not. By “greater surplus of happiness,” the classical utilitarians had in mind the idea of adding up all the pleasure or happiness that resulted from the action and subtracting from that total all the pain or misery to which the action gave rise. Naturally, in some circumstances, it might be possible only to reduce misery, and then the right action should be understood as the one that will result in less misery than any possible alternative.
The utilitarian view is striking in many ways. It puts forward a single principle that it claims can provide the right answer to all ethical dilemmas, if only we can predict what the consequences of our actions will be. It takes ethics out of the mysterious realm of duties and rules, and bases ethical decisions on something that almost everyone understands and values. Moreover, utilitarianism’s single principle is applied universally, without fear or favor. Bentham said: “Each to count for one and none for more than one.” By that he meant that the happiness of a peasant counted for as much as that of a noble, and the happiness of an African was no less important than that of a European – a progressive view to take when English ships were engaged in the slave trade.
Some contemporary consequentialists agree with Bentham to the extent that they think the rightness or wrongness of an action must depend on its consequences, but they deny that maximizing net happiness is the only consequence that has intrinsic value. Some of them argue that we should seek to bring about whatever will satisfy the greatest number of desires or preferences. This variation, which is known as “preference utilitarianism,” does not regard anything as good, except in so far as it is wanted or desired. More intense or strongly held preferences would get more weight than weak preferences. Other consequentialists include independent values, like freedom, justice, and knowledge. They are sometimes referred to as “ideal utilitarians” but it is better to think of them, not as utilitarians at all, but as pluralistic consequentialists (because they hold several independent values, rather than just one).
Consequentialism offers one important answer to the question of how we should decide what is right and what is wrong, but many ethicists reject it. The denial of this view was dramatically presented by Dostoevsky in The Karamazov Brothers:
Imagine that you are charged with building the edifice of human destiny, the ultimate aim of which is to bring people happiness, to give them peace and contentment at last, but that in order to achieve this it is essential and unavoidable to torture just one little speck of creation, that same little child beating her chest with her little fists, and imagine that this edifice has to be erected on her unexpiated tears. Would you agree to be the architect under those conditions? Tell me honestly!3
The passage suggests that some things are always wrong, no matter what their consequences. This has, for most of Western history, been the prevailing approach to morality, at least at the level of what has been officially taught and approved by the institutions of Church and State. The ten commandments of the Hebrew scriptures served as a model for much of the Christian era, and the Roman Catholic Church built up an elaborate system of morality based on rules to which no exceptions were allowed.
Another example of an ethic of rules is that of Immanuel Kant. Kant’s ethic is based on his “Categorical Imperative,” which he states in several distinct formulations. One is that we must always act so that we can will the maxim of our action to be a universal law. This can be interpreted as a form of Hare’s idea of universalizability, which we have already encountered. Another is that we must always treat other people as ends, never as means. While these formulations of the Categorical Imperative might be applied in various ways, in Kant’s hands they lead to inviolable rules, for example, against making promises that we do not intend to keep. Kant also thought that it was always wrong to tell a lie. In response to a critic who suggested that this rule has exceptions, Kant said that it would be wrong to lie even if someone had taken refuge in your house, and a person seeking to murder him came to your door and asked if you knew where he was. Modern Kantians often reject this hardline approach to rules, and claim that Kant’s Categorical Imperative did not require him to hold so strictly to the rule against lying.
How would a consequentialist – for example, a classical utilitarian – answer Dostoevsky’s challenge? If answering honestly – and if one really could be certain that this was a sure way, and the only way, of bringing lasting happiness to all the people of the world – utilitarians would have to say yes, they would accept the task of being the architect of the happiness of the world at the cost of the child’s unexpiated tears. For they would point out that the suffering of that child, wholly undeserved as it is, will be repeated a millionfold over the next century, for other children, just as innocent, who are victims of starvation, disease, and brutality. So if this one child must be sacrificed to stop all this suffering then, terrible as it is, the child must be sacrificed.
Fantasy apart, there can be no architect of the happiness of the world. The world is too big and complex a place for that. But we may attempt to bring about less suffering and more happiness, or satisfaction of preferences, for people or sentient beings in specific places and circumstances. Alternatively, we might follow a set of principles or rules – which could be of varying degrees of rigidity or flexibility. Where would such rules come from? Kant tried to deduce them from his Categorical Imperative, which in turn he had reached by insisting that the moral law must be based on formal reason alone, which for him meant the idea of a universal law, without any content from our wants or desires. But the problem with trying to deduce morality from reason alone has always been that it becomes an empty formalism that cannot tell us what to do. To make it practical, it needs to have some additional content, and Kant’s own attempts to deduce rules of conduct from his Categorical Imperative are unconvincing.
Others, following Aristotle, have tried to draw on human nature as a source of moral rules. What is good, they say, is what is natural to human beings. They then contend that it is natural and right for us to seek certain goods, such as knowledge, friendship, health, love, and procreation, and unnatural and wrong for us to act contrary to these goods. This “natural law” ethic is open to criticism on several points. The word “natural” can be used both descriptively and evaluatively, and the two senses are often mixed together so that value judgments may be smuggled in under the guise of a description. The picture of human nature presented by proponents of natural law ethics usually selects only those characteristics of our nature that the proponent considers desirable. The fact that members of our species, especially males, frequently go to war, and are prone to commit individual acts of violence against others, is no doubt just as much a part of our nature as our desire for knowledge, but no natural law theorist concludes that these activities are therefore good. More generally, natural law theory has its origins in an Aristotelian idea of the cosmos, in which everything has a goal or “end,” which can be deduced from its nature. The “end” of a knife is to cut; the assumption is that human beings also have an “end,” and we will flourish when we live in accordance with the end for which we are suited. But this is a pre‐Darwinian view of nature. Since Darwin, we know that we do not exist for any purpose, but are the result of natural selection operating on random mutations over millions of years. Hence there is no reason to believe that living according to nature will produce a harmonious society, let alone the best possible state of affairs for human beings.
Another way in which it has been claimed that we can come to know what moral principles or rules we should follow is through our intuition. In practice this usually means that we adopt conventionally accepted moral principles or rules, perhaps with some adjustments in order to avoid inconsistency or arbitrariness. On this view, a moral theory should, like a scientific theory, try to match the data; and the data that a moral theory must match is provided by our moral intuitions. As in science, if a plausible theory matches most, but not all, of the data, then the anomalous data might be rejected on the grounds that it is more likely that there was an error in the procedures for gathering that particular set of data than that the theory as a whole is mistaken. But ultimately the test of a theory is its ability to explain the data. The problem with applying this model of scientific justification to ethics is that the “data” of our moral intuitions is unreliable, not just at one or two specific points, but as a whole. Here the facts that cultural relativists draw upon are relevant (even if they do not establish that cultural relativism is the correct response to them). Since we know that our intuitions are strongly influenced by such things as culture and religion, they are ill‐suited to serve as the fixed points against which an ethical theory must be tested. Even where there is cross‐cultural agreement, there may be respects in which the intuitions of all cultures unjustifiably favor some interests over others. For example, simply because we are all human beings, we may have a systematic bias that leads us to give an unjustifiably low moral status to nonhuman animals. Or, because, in virtually all known human societies, men have taken a greater leadership role than women, the moral intuitions of all societies may not adequately reflect the interests of females.
Some philosophers think that it is a mistake to base ethics on principles or rules. Instead they focus on what it is to be a good person – or, in the case of the problems with which this book is concerned, perhaps on what it is to be a good nurse or doctor or researcher. They seek to describe the virtues that a good person, or a good member of the relevant profession, should possess. Moral education then consists of teaching these virtues and discussing how a virtuous person would act in specific situations. The question is, however, whether we can have a notion of what a virtuous person would do in a specific situation without making a prior decision about what it is right to do. After all, in any particular moral dilemma, different virtues may be applicable, and even a particular virtue will not always give unequivocal guidance. For instance, if a terminally ill patient repeatedly asks a nurse or doctor for assistance in dying, what response best exemplifies the virtues of a healthcare professional? There seems no answer to this question, short of an inquiry into whether it is right or wrong to help a patient in such circumstances to die. But in that case we seem bound, in the end, to come back to discussing such issues as whether it is right to follow moral rules or principles, or to do what will have the best consequences.
In the late twentieth century, some feminists offered new criticisms of conventional thought about ethics. They argued that the approaches to ethics taken by the influential philosophers of the past – all of whom have been male – give too much emphasis to abstract principles and the role of reason, and give too little attention to personal relationships and the part played by emotion. One outcome of these criticisms has been the development of an “ethic of care,” which is not so much a single ethical theory as a cluster of ways of looking at ethics which put an attitude of caring for others at the center, and seek to avoid reliance on abstract ethical principles. The ethic of care has seemed especially applicable to the work of those involved in direct patient care. Not all feminists, however, support this development. Some worry that presenting an ethic of care in opposition to a “male” ethic based on reasoning reflects and reinforces stereotypes of women as more emotional and less rational than men. They also fear that it could lead to women continuing to carry a disproportionate share of the burden of caring for others.
In this discussion of ethics we have not mentioned anything about religion. This may seem odd, in view of the close connection that has often been made between religion and ethics, but it reflects our belief that, despite this historical connection, ethics and religion are fundamentally independent. Logically, ethics is prior to religion. If religious believers wish to say that a deity is good, or praise her or his creation or deeds, they must have a notion of goodness that is independent of their conception of the deity and what she or he does. Otherwise they will be saying that the deity is good, and when asked what they mean by “good,” they will have to refer back to the deity, saying perhaps that “good” means “in accordance with the wishes of the deity.” In that case, a sentence such as “God is good” would be a meaningless tautology: it could mean no more than “God is in accordance with God’s wishes.” As we have already seen, there are ideas of what it is for something to be “good” that are not rooted in any religious belief. While religions typically encourage or instruct their followers to obey a particular ethical code, it is obvious that others who do not follow any religion can also think and act ethically.
To say that ethics is independent of religion is not to deny that theologians or other religious believers may have a role to play in bioethics. Religious traditions often have long histories of dealing with ethical dilemmas, and the accumulation of wisdom and experience that they represent can give us valuable insights into particular problems. But these insights should be subject to criticism in the way that any other proposals would be. If in the end we accept them, it is because we have judged them sound, not because they are the utterances of a pope, a rabbi, a mullah, or a holy person.
Ethics is also independent of the law, in the sense that the rightness or wrongness of an act cannot be settled by its legality or illegality. Whether an act is legal or illegal may often be relevant to whether it is right or wrong, because it is arguably wrong to break the law, other things being equal. Many people have thought that this is especially so in a democracy, in which everyone has a say in making the law. Another reason why the fact that an act is illegal may be a reason against doing it is that the legality of an act may affect the consequences that are likely to flow from it. If active voluntary euthanasia is illegal, then doctors who practice it risk going to jail, which will cause them and their families to suffer, and also mean that they will no longer be able to help other patients. This can be a powerful reason for not practicing voluntary euthanasia when it is against the law, but if there is only a very small chance of the offense becoming known, then the weight of this consequentialist reason against breaking the law is reduced accordingly. Whether we have an ethical obligation to obey the law, and, if so, how much weight we should give it, is itself an issue for ethical argument.
Though ethics is independent of the law, in the sense just specified, laws are subject to evaluation from an ethical perspective. Many debates in bioethics focus on questions about what practices should be allowed – for example, should we allow research on stem cells taken from human embryos, sex selection, or cloning? – and committees set up to advise on the ethical, social, and legal aspects of these questions often recommend legislation to prohibit the activity in question, or to allow it to be practiced under some form of regulation. Discussing a question at the level of law and public policy, however, raises somewhat different considerations than a discussion of personal ethics, because the consequences of adopting a public policy generally have much wider ramifications than the consequences of a personal choice. That is why some healthcare professionals feel justified in assisting a terminally ill patient to die, while at the same time opposing the legalization of physician‐assisted suicide. Paradoxical as this position may appear – and it is certainly open to criticism – it is not straightforwardly inconsistent.
Many of the essays we have selected reflect the times in which they were written. Since bioethics often comments on developments in fast‐moving areas of medicine and the biological sciences, the factual content of articles in bioethics can become obsolete quite rapidly. In preparing this 4th edition, we have taken the opportunity to cover some new issues and to include some more recent writings. Part X, on Disability, is new, as are the section in Part VII on Academic Freedom and Research and the essays in Part IX on Doctors’ Duty to Treat. There are new articles in almost every other section as well, on gene editing, the morality of ending the lives of newborns, brain death, the eligibility of mentally ill patients for assisted dying and experiments on humans and on animals, and on public health.
Some authors of articles that have become dated in their facts have kindly updated them especially for this edition. An article may, however, be dated in its facts but make ethical points that are still valid, or worth considering, so we have not excluded older articles for this reason.
Other articles are dated in a different way. During the past few decades we have become more sensitive about the ways in which our language may exclude women, or reflect our prejudices regarding race or sexuality. We see no merit in trying to disguise past practices on such matters (although we have made minor changes to some of the older writings in this anthology, in order to bring the terminology used in line with contemporary usage), so we have not excluded otherwise valuable works in bioethics on these grounds. If they are jarring to the modern reader, that may be a salutary reminder of the extent to which we all are subject to the conventions and prejudices of our times.
Helga Kuhse was a co‐editor of the first three editions of this anthology. She has now retired from academic work, and so decided not to join us in co‐editing this edition. Nevertheless, her influence remains present, in the articles carried over from earlier editions. We thank her for helping to establish Bioethics: An Anthology as a comprehensive and widely used collection of the best articles in the field.
Katherine Carr did a stellar job as the copy‐editor of this volume. The number of errors she spotted in previously published peer‐reviewed (and presumably copy‐edited and proof‐read) journal articles is extraordinary.
Last, but not least, we thank two graduate students in the Queen’s University Department of Philosophy who assisted us in sourcing possible materials for inclusion in the 3rd edition of this text (Nikoo Najand) and in this current edition (Chris Zajner).