Arguments, Cognition, and Science - André C. R. Martins


Introduction

We reason. And we believe we are good at it. We feel our reasoning helps us get closer to good answers. At times, it does that. But we also make mistakes. And quite often, it seems we are unable to notice our mistakes. Too often, we find deep disagreements on matters of principle and on descriptions of how the world works. No matter how hard we try, sometimes we never reach consensus. Each side will tell everyone why their opponents are wrong. But the defenders of both positions almost never see their own mistakes. Nor do they acknowledge matters where the opposing side might be correct. In those situations, it seems our reasoning skills might be failing us. Or, as most people would see it, the reasoning skills of the defenders of the other side are failing them.

We try too hard to defend our points of view. Why we do that is a fascinating problem, but most of us pay too little attention to it. Why can’t we admit we might be wrong? What drives us to stick to ideas, beliefs, and ideologies? The small group of us known as scientists are usually taught that science is a great tool for finding correct answers. But disagreement is common in scientific matters too. When that happens, discussions can follow the same pattern of defense as in any other human activity. There seem to be mechanisms in science that help us trade one point of view for another. But those mechanisms do not seem to operate as fast as we would like. There is even an old notion that science advances because elder scientists retire, allowing new ideas to become prevalent.

The mechanisms of our cognition, and the ways it fails, have been studied for a while now. A more complete picture of our limitations and mistakes has been emerging recently. Meanwhile, we have been working on normative tools. Those tools should be able to tell us how we should reason. They are based on logic, mathematics, and probability. They are actually quite complete, if one could forget the problem that makes them impossible to use in full. Still, we can learn a lot from ideal cases. At the very least, ideal cases can provide a direction in which we need to go if we want to get closer to their impossible results.

In this book, I will describe how we reach conclusions, and I will discuss how we might do that better than we do. The book is also about our mistakes and the tools we can use to try to avoid them. It describes current research on human reasoning and our logical tools. It shows what those recent results mean for our beliefs as well as for how we do science. The main conclusion is that we have no known way to choose correct ideas about the real world. We can only rank them as more or less probable (when that is possible). Despite all our achievements, there is still a lot to understand about the consequences of not knowing which ideas are actually right. But it is not only a case of understanding those consequences. From a cognitive point of view, beliefs can often become quite problematic. From a logical point of view, beliefs have no grounds. They cannot be logically justified, and they harm our ability to reason. Those realizations have important consequences for scientific practice. And they help us better understand some epistemological questions about the acquisition of knowledge and the evaluation of theories.

We used to think Earth was the center of the universe. We used to think man was different from all other animals. We thought we made our decisions based on reason. We have since learned that we are not at the center and that we are not as special as we once thought. Instead, we live on a tiny planet, and many of our decisions are dominated by our emotions. But we still hold our reasoning skills in high regard. Our achievements as a species seem to corroborate the idea that, when we do use reason, we are quite good at it. On the other hand, a quick review of psychology experiments on human cognition reveals that this picture has serious flaws. When we use our natural, untrained reasoning, we tend to commit very trivial mistakes. Even as our reason fails, we tend to be much more confident about our skills than we should be. Group reasoning sometimes changes this scenario for the better, but social influence can also lead to worse reasoning than we would get without it. Surprisingly, we have been learning that our reasoning skills did not evolve to help us find correct answers. Their main purpose seems to be social. We make arguments to convince others. Failing that, we accept the arguments of others in our social group, and we doubt the arguments of those who do not share our beliefs. Reasoning is about getting power, and it is about bonding. It was not meant for finding better answers, even if we can use it for that.

We have been aware of some of those problems for a long time, and we have been creating tools to avoid our mistakes for at least the last few millennia. Deductive logic, mathematical methods, experiments: they were all created and perfected to help us correct our mistakes. Or, more likely, to correct the mistakes of those who disagreed with us. With those tools, we have been learning how to get better answers about how the world is. More recently, we have created probabilistic inductive methods. Those do not answer the question of which idea is correct, but they allow us to compare ideas and theories and to estimate which ones are more probable.

In this book, I review some results about our cognition and the tools we have created to understand how we can improve our reasoning. Our argumentation skills seem to work in a motivated fashion to defend our points of view. As a consequence, holding points of view that we assume to be true is counterproductive. Our failures make it clear that we need formal methods of reasoning. We need logic and mathematics because they make it easier to show when an argument is wrong. That need explains the success of mathematics where it is used. That success seems surprising only because we compare it to our far more fallible nonmathematical arguments.

The realization that we cannot know any hypothesis about the real world to be right has serious consequences. It has a deep impact on the current crisis of statistical misuse and the problems with null-hypothesis tests. We are observing a serious lack of replicability in many published results. Understanding the relation between that crisis and our desire to believe can help us make better choices about which tools we should use and which ones we should avoid. For similar reasons, the problem of demarcation between scientific and nonscientific ideas makes no sense. We can say an idea is so improbable that it must be a bad description of the world, but no idea should be labeled nonscientific. Bayesian methods are sometimes enough to separate probable from improbable ideas, but they depend not only on the main theory but also on auxiliary hypotheses. The existence of those auxiliary hypotheses, and the role they play, is central to answering some of the criticisms of the Bayesian point of view.
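The idea of ranking hypotheses rather than proving them can be made concrete with a small sketch. The hypotheses, numbers, and data below are invented for illustration; they are not from the book. Two hypothetical models of a coin are compared by their posterior probabilities under Bayes' rule, and neither is ever declared "right":

```python
# Sketch: ranking two hypotheses by posterior probability (illustrative).
# H1: the coin is fair (p = 0.5); H2: the coin is biased (p = 0.7).
# Neither hypothesis is proven; we only say which is more probable
# given the observed data.

def likelihood(p, heads, tails):
    """Probability of a specific observed sequence under bias p."""
    return p ** heads * (1 - p) ** tails

heads, tails = 14, 6            # hypothetical observations
prior_h1, prior_h2 = 0.5, 0.5   # equal prior credence in both ideas

l1 = likelihood(0.5, heads, tails)
l2 = likelihood(0.7, heads, tails)

# Bayes' rule: posterior is prior times likelihood, normalized.
evidence = prior_h1 * l1 + prior_h2 * l2
post_h1 = prior_h1 * l1 / evidence
post_h2 = prior_h2 * l2 / evidence

print(f"P(H1 | data) = {post_h1:.3f}")
print(f"P(H2 | data) = {post_h2:.3f}")
```

Note that the answer is a ranking, not a verdict: the posteriors always sum to one, and more data can reverse the order at any time.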

I will use the equivalence between Bayesian methods and a Solomonoff induction machine to address a few questions in the Bayesian framework. While the framework is, from a practical point of view, impossible to use fully, we can still obtain approximations. We will also see that theoretical work acquires a new role: we need a constant influx of new ideas. The cultural relativists got part of their description of the scientific enterprise correct, but they missed some aspects that are fundamental. Ideas might be equal at the beginning, but data arrives and some of them become more probable. Underdetermination exists, but it does not mean we have no way to move forward.
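The point that ideas can start out equal and then separate as data arrives can be sketched as sequential Bayesian updating. This is a toy example with invented models and data, not anything from the book: three competing hypotheses begin with identical priors, and each observation multiplies their weights by the corresponding likelihood.

```python
# Sketch: three candidate models begin with equal priors; as data
# arrives, their posteriors diverge. The models are hypothetical
# success probabilities for the same repeated binary experiment.

biases = [0.3, 0.5, 0.8]          # three competing hypotheses
posteriors = [1/3, 1/3, 1/3]      # equal credence before any data

data = [1, 1, 0, 1, 1, 1, 0, 1]   # invented observations (1 = success)

for x in data:
    # Weight each hypothesis by its likelihood for this observation,
    # then renormalize so the posteriors sum to one.
    weights = [post * (p if x == 1 else 1 - p)
               for post, p in zip(posteriors, biases)]
    total = sum(weights)
    posteriors = [w / total for w in weights]

for p, post in zip(biases, posteriors):
    print(f"P(bias = {p} | data) = {post:.3f}")
```

Underdetermination survives in this picture: no posterior ever reaches exactly one or zero, so no model is eliminated outright. But the ranking still gives us a way to move forward.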

The picture that emerges explains why the hard sciences have been so successful. It is interesting to see that physics has been able to make amazing advances despite the naïve epistemological point of view of most physicists. But that naïveté does create problems in some current debates in physics. Each area of knowledge has its own typical mistakes. Luckily, once we understand the main issues in this book, we can see easily identifiable paths for improvement.

