The War on Science, by Shawn Lawrence Otto


Chapter 2

THE POLITICS OF SCIENCE

There is nothing which can better deserve your patronage than the promotion of Science and Literature. Knowledge is in every country the surest basis of public happiness. In one in which the measures of Government receive their impression so immediately from the sense of the Community as in ours it is proportionably [sic] essential.

—George Washington, January 8, 1790

How to Ruffle a Scientist’s Feathers

When speaking to scientists, there is one thing that will almost always raise their indignation, and that is the suggestion that science is political. Science, they will respond, has nothing to do with politics.

But is that true?

Let’s consider the relationship between knowledge and power. “Knowledge and power go hand in hand,” said Francis Bacon, “so that the way to increase in power is to increase in knowledge.”

At its core, science is a reliable method for creating knowledge, and thus power. To the extent that I have knowledge about the world, I can affect it, and that exercise of power is political. Because science pushes the boundaries of knowledge, it pushes us to constantly refine our ethics and morality to incorporate new knowledge, and that, too, is political. In these two realms—the socioeconomic and the moral-ethical-legal—science disrupts hierarchical power structures and vested interests (including those based on previous science) in a long drive to grant knowledge, and thus power, to the individual. That process is always and inherently political.

The politics of science is nothing new. Galileo, for example, committed a political act in 1610 when he wrote about his observations through a telescope. Jupiter had moons and Venus had phases, he wrote, which proved that Copernicus had been right in 1543: the celestial bodies did not all revolve around Earth. In fact, Earth revolved around the sun, not the other way around, as contemporary opinion—and the Roman Catholic Church—held. These were simple observations, immediately obvious to anyone who wanted to look through Galileo’s telescope.

But the statement of an observable fact is a political act that either supports or challenges the current power structure. Every time a scientist makes a factual assertion—Earth goes around the sun, there is such a thing as evolution, humans are causing climate change—it either supports or challenges somebody’s vested interests.

Consider Galileo’s 1633 indictment by the Roman Catholic Church, which was at the time the seat of global political and economic power:

The proposition that the sun is in the center of the world and immovable from its place is absurd, philosophically false, and formally heretical; because it is expressly contrary to Holy Scriptures.

The proposition that the earth is not the center of the world, nor immovable, but that it moves, and also with a diurnal action, is also absurd, philosophically false, and, theologically considered, at least erroneous in faith.

Therefore . . . invoking the most holy name of our Lord Jesus Christ and of His Most Glorious Mother Mary, We pronounce this Our final sentence: We pronounce, judge, and declare, that you, the said Galileo . . . have rendered yourself vehemently suspected by this Holy Office of heresy, that is, of having believed and held the doctrine (which is false and contrary to the Holy and Divine Scriptures) that the sun is the center of the world, and that it does not move from east to west, and that the earth does move, and is not the center of the world; also, that an opinion can be held and supported as probable, after it has been declared and finally decreed contrary to the Holy Scripture.

Why did the church go to such lengths to deal with Galileo? For the same reasons we fight political battles over issues like climate disruption today: facts and observations are inherently powerful, and that power means they are political. Failing to acknowledge this leaves both science and citizens vulnerable to attack by antiscience propaganda—propaganda that has come to infiltrate politics and much news media coverage and educational curricula in the early twenty-first century. The war on science has steered modern democracy away from the vision held by its founders, and is threatening its survival.

Wishing to sidestep the painful moral and ethical parsing that their discoveries sometimes compel, scientists for the last two generations saw their role as the creators of knowledge and believed they should leave the moral, ethical, and political implications to others to sort out. But the practice of science itself cannot possibly be apolitical, because it takes nothing on faith. The very essence of the scientific process is to question long-held assumptions about the nature of the universe, to dream up experiments that test those questions, and, based on the resulting observations, to incrementally build knowledge that is independent of our beliefs, assumptions, and identities, and independently verifiable no matter who does the measuring—in other words, that is objective. A scientifically testable claim is transparent and can be shown to be either most probably true, or to be false, whether the claim is made by a king, a president, a prime minister, a pope, or a common citizen. Because it takes nothing on faith, science is inherently antiauthoritarian, and a great equalizer of political power. That is why it is under attack.

Who Defines Your Reality?

The scientific revolution has proven to be more beneficial to humanity than anything previously developed. By painstakingly building objective knowledge about the way things really are in nature instead of how we would wish them to be, we have been able to double our life spans and boost the productivity of our farms by thirty-five times. With careful observation, recording, testing, and replication, we have been able to give children to those who were “barren” and the fertile the freedom to decide when—and whether—to reproduce, freeing women by providing choice. Science has released us from a life that was, according to Thomas Hobbes, “a war . . . of every man against every man . . . solitary, poor, nasty, brutish, and short.”

In Hobbes’s era, economics was a zero-sum game: “Without a common power to keep them all in awe,” he wrote, men fell into war. There was finite wealth and opportunity, and to get ahead I had to take some of it away from you. In its capacity to create knowledge, science broke that zero-sum economic model and generated wealth, health, freedom, and power beyond Hobbes’s wildest dreams. It produced tremendous insights into our place in the cosmos, into the inner workings of our own bodies, and into our capacity as human beings to exercise our highest aspirations of love, hope, creativity, courage, and charity.

Each step forward has come at the price of a political battle, and, too often, at a cost to the environment. As we continue to refine our knowledge of the way nature really is, independent of our beliefs, perceptions, identities, and wishes for it, we must also refine our ethics and morality, assuming more responsibility for our choices. Inevitably, this is uncomfortable, because the process throws many reassuring notions into conflict with our new knowledge—notions that are often our most deeply rooted, ancient, and awestruck explanations about the primacy of our clan, the wonders of creation, the specialness of our identities, and the possibility of life after death.

The Power of the Scientific Method

How do we create knowledge? There is no one “scientific method”; rather, there is a collection of strategies that have proven effective in answering our questions about how things in nature really work, as opposed to how they at first appeared to work to our common senses, or to scientists or theologians with less precise measuring tools than the ones we now have. How do plants grow? What is stuff made of? How do viruses work? Why are montane voles promiscuous sex fiends while prairie voles are loyal lifelong mates? The process usually begins with a question about something, and that suggests a strategy for making and recording observations and measurements. If we want to learn how plants grow, for example, we begin by looking at plants, not rocks.

These initial recorded observations suggest a hypothesis: a possible explanation for the observations that partially or fully answers the initial question. This hypothesis must make a risky prediction, one that, if true, might confirm our conclusion or, if false, will destroy it. If there’s no possible way to prove the hypothesis is false, then we aren’t really doing science. Saying plants grow because God wills it is a statement of faith rather than a statement of science because (a) it’s not limited to the natural world, and (b) it cannot be disproved. Therefore it can’t be tested and so it can’t produce any real knowledge. An article of faith is an assertion. A statement of science can be tested by observing nature to see if it’s likely true or not. Nature is the judge.

After we set out our hypothesis, we design and conduct experiments that test the hypothesis and try to disprove it. If we can’t disprove it, we conclude that it may be true and write a paper detailing what we did and concluded, and outlining ways the conclusion could be tested further. We send it to a professional journal, which sends it to others who have knowledge of the field (peer reviewers) to see if they can tear any holes in it. Was our method sound? Did we make any mistakes? Did we control all the possible influences on the outcome? Are there other explanations we didn’t think of? Was our math right?

If these peer reviewers discover any holes in our logic or methodology, they send it back for more work. But if they conclude that it is solid and transparent enough to stake their reputations on, they recommend our paper for publication. Once it is published the process is not over. Others who read it may then set out to disprove it. If they can, their stars rise and ours fall proportionately. But if they confirm what we found, the conclusion becomes a little more reliable. In this way, we slowly, meticulously create knowledge that is objective: it is independent of our identities, and replicable by anyone.

The method is fallible, since our senses and our logical processes are easily influenced by our assumptions and wishes, and so they often mislead us. But over time the method tends to catch those errors and correct them via peer review and replication. Thus, bit by careful, painstaking bit, we build a literature of what we know, as distinct from our beliefs and our opinions—and as we do, we gain power.

How Old Is Earth?

One example of knowledge, as opposed to belief or opinion, is the age of Earth. Geological measurements show, over and over, no matter who does the measuring, that Earth is about 4.54 billion years old. This is something one can learn to measure for oneself. It’s called radiometric dating, and it’s pretty simple. Radioactive uranium isotopes decay at known, measurable rates into stable (nonradioactive) lead isotopes, and radioactive potassium isotopes decay at known, measurable rates into stable argon isotopes. By using a mass spectrometer one can buy on eBay for about $2,000, one can count how many atoms of a particular uranium isotope are left in a rock and how many atoms of its “daughter,” or decayed isotope of lead, there are. One can do the same thing with potassium and argon. Doing some simple math lets one then figure out how old that rock is.
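The "simple math" can be sketched in a few lines of Python. This is a minimal illustration, assuming (as the standard method does for suitable minerals) that the rock contained none of the daughter isotope when it formed; the function name and the atom counts are illustrative, not from the book:

```python
import math

def radiometric_age(parent_atoms, daughter_atoms, half_life_years):
    """Age of a sample, in years, from counts of parent and daughter atoms.

    Solves N_daughter / N_parent = e^(lambda * t) - 1 for t,
    where lambda = ln(2) / half-life is the decay constant.
    """
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_atoms / parent_atoms) / decay_constant

# Uranium-238 decays to lead-206 with a half-life of about 4.468 billion
# years. A rock measured to hold equal numbers of parent and daughter
# atoms has therefore sat for exactly one half-life.
U238_HALF_LIFE = 4.468e9
age = radiometric_age(1_000_000, 1_000_000, U238_HALF_LIFE)
print(f"{age / 1e9:.2f} billion years")  # 4.47 billion years
```

The same calculation works for the potassium-argon pair mentioned above; only the half-life changes.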

Because the whole solar system is thought to have formed at the same time, we can also look at the ages of meteorites and of rocks astronauts brought back from the moon to get a pretty complete picture of Earth’s history, and its age consistently comes out to about 4.54 billion years. This isn’t something we believe, it’s something we measure, like the distance between Minneapolis and Dallas. Claiming that Earth is just six thousand years old is mathematically akin to saying, “No, the distance between Minneapolis and Dallas is not around 942 miles. It’s about six and a half feet.”

An Old Book or Your Own Eyes

The measurements we’ve learned how to do using science sometimes conflict with translations of ancient statements in the Bible that, if taken literally, date Earth’s age at around six thousand years. These statements were made before we knew how to measure such things as the age of rocks, or what an atom was, much less how to count them with a mass spectrometer. A reasonable person would note this and ask: Is it better to base our knowledge on reading an old book, or on observations of nature before our very eyes? It’s a question as old as the scientific revolution. Galileo ran into it often after he began lecturing about what he had seen through his telescope. In 1610, he wrote his friend, the German mathematician Johannes Kepler, “My dear Kepler, what would you say of the learned here, who, replete with the pertinacity of the asp, have steadfastly refused to cast a glance through the telescope? What shall we make of this? Shall we laugh, or shall we cry?”

In 1632, in his book Dialogue Concerning the Two Chief World Systems, Galileo recounted a tale of a learned man, a natural philosopher, visiting the home of a Venetian doctor who had invited a group in to watch and learn as he dissected a human cadaver. The anatomist knew the philosopher believed, as he had read in an old book, that the nerves originated in the heart. Here’s how Galileo told the story:

The anatomist showed that the great trunk of nerves, leaving the brain and passing through the nape, extended on down the spine and then branched out through the whole body, and that only a single strand as fine as a thread arrived at the heart. Turning to a gentleman whom he knew to be a Peripatetic philosopher, and on whose account he had been exhibiting and demonstrating everything with unusual care, he asked this man whether he was at last satisfied and convinced that the nerves originated in the brain and not in the heart. The philosopher, after considering for a while, answered: “You have made me see this matter so plainly and palpably that if Aristotle’s text were not contrary to it, stating clearly that the nerves originate in the heart, I should be forced to admit it to be true.”

If we choose the careful, repeatable science of observation and measurement tied back to nature over the estimates more roughly crafted from the creation stories in the Bible or other old and translated texts, must we reject the rest of religion? Or does it still have value in leading a moral life? But then others ask: Is religion even required for morality? If it is inaccurate, can we take any of it seriously? One can see how easily new knowledge can throw worldviews into conflict.

When Does Life Begin?

Another example of the thorny intersection of science with traditional ideas, law, and politics comes from the biosciences. Careful, reproducible observations and measurements have forced us to repeatedly refine our ideas about what life is and when it begins. Is a human being first a life when it emerges from the birth canal? Does it have any legal rights as a person before then? Or is it a life when it is able to survive independently outside of the womb even if it is removed early, as can happen naturally with premature birth or with a Caesarean section? Or is it perhaps a life at quickening (the moment a mother first feels a fetus move, at about four months), as was the legal standard for a life when America was formed? But wait! Perhaps it is really a life when a fertilized egg first implants in the uterine lining, which, based on observations, is the medical definition of when a pregnancy begins. A woman cannot be said to be pregnant until her body begins the chemical and biological changes that accompany a symbiotic hosting of the embryo, can she? If it does not implant, the egg, even if fertilized, is simply flushed. Here we get into a tricky area, because many religious conservatives say, “No, it is a life when egg and sperm meet,” whether or not the fertilized egg ever implants.

But then, a scientist would ask the fundamentalist, is it still a life at the moment of fertilization, even if we know from careful observation that one-third to one-half of fertilized eggs never implant, and as many as three-quarters fail to lead to an ongoing pregnancy? And, of course, that brings up more questions: What are fertilized eggs that never implant? How should we define them, if life occurs at fertilization? As miscarriages? Abortions? Nonpregnancies? Suicides? Murders? Something else? What implications might that definition have—legally, ethically, morally—for the use of birth-control pills that inhibit implantation? Is that abortion, murder, or pregnancy prevention?

As our careful observations of life continue, so does our power both to assist and prevent pregnancy. But as our skills improve, new, more troubling questions form. What if we remove the uterus from the process entirely? Is it a life when sperm and egg are joined in a test tube at a fertility clinic and allowed to divide into a group of, say, sixteen cells that are then frozen for future implantation in a woman desperate to have children? Can the woman be said to be “pregnant” as long as this microscopic clump of frozen cells exists? What does Burwell v. Hobby Lobby Stores say about that? What, if any, rights should these frozen cells possess? And is a child conceived in this way—a “test-tube baby,” as we once called them—without a soul, as was suggested by some religious conservatives in the 1970s? Once born, are the joy they bring and the contributions they make less valuable? If we make a special exception for them, by agreeing that in vitro fertilization is not interfering with God’s plan, or by acknowledging that they do appear to have souls, why? On what basis? And what does that make the dozens of frozen cells we discard after a successful pregnancy?

While we’re pondering these linguistic, legal, and ethical quandaries, our observations lead us to yet another new understanding. We don’t need sperm to fertilize an egg; we can do it with the nucleus of another cell from the same being. We try this, and sure enough, we find we can create many identical genetic copies of a sheep or mouse. We call them clones. But then we have to ask: Is it a life if it is just an ovum that has had its nucleus removed and replaced by the nucleus of another cell, and has then been chemically or electrically shocked to induce the natural process of cell division, without fertilization by sperm? If egg and sperm have never met, is it a life? Or is that creature—possibly, one day, a human—damned or soulless as it was once argued “test tube babies” would be?

Observations tell us that beings produced in nontraditional ways seem to be the same as any other creatures. We have to ask, then, is every one of the roughly 1.5 million eggs a woman has in her ovaries at birth a life with rights? When, exactly, does life begin? Is it true, as the comedy troupe Monty Python sang in The Meaning of Life, that “every sperm is sacred”?

What happens if we transform adult skin cells into stem cells, and those into sperm and egg, and then fertilize one with the other? Is that a clone or something else? What if we take the troublesome term “fertilization” out of the picture? Is it a life if we design its genome on a computer (as scientists at the J. Craig Venter Institute have done), buy a high-quality DNA synthesizer on eBay for $8,000 or so, use it to make fragments of the genome we designed, chemically stitch the fragments together, inject the complete genome into a cell with an empty nucleus, and shock it into replicating? Here, we have made a living, reproducing thing starting with a computer design and a few common chemicals. What does that mean for our ideas about life and our definition of conception? Is it wrong to be doing this? To be asking these questions? Applying these observations? Gaining these powers?

What is life? Is life an unbroken chain of genetic code, running down through the generations, endlessly recombining in new forms? Is it software? Does the software beget the hardware? When does it become an individual with rights? Where do we draw the legal line? The moral line? Can we draw a line at all? Is that the right way to be thinking about it? And if we do, how do we define the terms conception, fertilization, implantation, and pregnancy?

In each of the above cases, new knowledge was gained by applying the scientific method of making careful observations and measurements of nature and recording the data, then testing and drawing conclusions based on the results instead of on assumptions or beliefs, and then publishing those conclusions about how things really appear to be in nature for others to review and attempt to disprove if they can. The knowledge gained through this incredible process has given us new power over the physical world, but it also forces us to reevaluate our intuitive assumptions, and to refine—and, in some cases, redefine—the meanings of words and values we thought we understood when we didn’t know what was really going on.

This power and these new definitions have moral, ethical, and legal implications for how we conduct our lives, and this is where science, democracy, and our legal system can come into conflict. As our knowledge becomes more refined and precise, so too must our social contract, and this process is disruptive to moral, ethical, economic, and political authority based on prior definitions and understandings. Science itself is inherently political, and inherently antiauthoritarian.

Antiauthoritarian Politics

Because this is the case, it’s reasonable to ask how science fits into political thought. As science writer Timothy Ferris pointed out, in politics there are not just two forces, the progressive left (encouraging change) and the conservative right (encouraging retention). In fact, there are four. Imagined on a vertical axis, there are also the authoritarian (totalitarian, closed, and controlling, at the bottom of the axis) and the antiauthoritarian (liberal, open, and freedom-loving, at the top), which one can argue have actually played much more fundamental roles in human history. Politics, then, can be more accurately thought of as a box with four quadrants rather than as a linear continuum from left to right. Any one of the infinite gradations of political thought can be placed on the plane around these axes.


When looked at in historical perspective, it’s clear that while science and republican democracy are antiauthoritarian systems of knowledge and of governance, respectively, they are neither progressive nor conservative, but are both. Both communism on the left and fascism on the right are authoritarian and opposed to the freedom of inquiry and expression that characterize science and democracy, just as fundamentalist and authoritarian religions are.

Alternatively, left-leaning progressives and right-leaning conservatives can find common cause in the antiauthoritarian principles of freedom of inquiry and expression, universal education, and individual human rights that go hand in hand with the liberal (meaning “free”) thinking that informs science and democracy. The life of conservative writer David Horowitz, who was a part of the radical US new-left movement in the late 1960s but is now on the radical right, provides an example of how one can move 180 degrees ideologically from left to right, but still maintain the same general level of liberal antiauthoritarianism vertically.

Democracy: An Endangered Species?

The challenge to authority that science presents is one of many reasons why it has flourished in free, democratic societies, and why those same societies have fallen when they have turned their backs on the freedom science requires in favor of authoritarianism. Nazi Germany is an excellent example. In the 1930s, Berlin was the world pinnacle of science, art, and culture. As we will see later, it was the power of new technology created by that liberal culture that allowed Hitler to come to power, and it was a cultural turn away from freedom that then led Germany’s scientists and artists to flee, and to the country’s eventual downfall. It is not a coincidence that the ongoing scientific revolution has been led in significant part by the United States and other free, democratic societies. But it is also partly why, since the late twentieth century, the political climate has increasingly hampered US policymakers and those in other leading democracies in dealing with so many critical science policy issues, and why, by turning away from science, the United States may soon cede both its leadership in scientific research and development and the economic, social, and cultural influence that leadership provides.

Without well-informed voters, the very exercise of democracy becomes removed from the problems it is charged with solving. The more complex the world becomes, the more challenging it is for democracy to function, because it places an increased burden of education and information upon the people—and in the twenty-first century, that includes science education and science reporting. Without the mooring provided by the well-informed opinion of the people, governments may become paralyzed or, worse, corrupted by powerful interests seeking to oppress and enslave.

For this reason and others, Jefferson was a staunch advocate of free public education and freedom of the press, the primary purposes of which were to ensure an educated and well-informed people. In 1787, he wrote to James Madison,

And say, finally, whether peace is best preserved by giving energy to the government, or information to the people. This last is the most certain, and the most legitimate engine of government. Educate and inform the whole mass of the people. Enable them to see that it is their interest to preserve peace and order, and they will preserve them. And it requires no very high degree of education to convince them of this. They are the only sure reliance for the preservation of our liberty.

But what do we do when the level of complexity actually does require a “very high degree of education”? Can democracy still function effectively?
