

Chapter 3

Groups and Ideas

There is one more aspect of cognition we need to discuss. A huge part of what we believe we know, our opinions and preferences, comes from the influence of others. Social influence and social cognition can have an important impact on what we do and claim to know as a species. Individual ants, for example, behave following very simple rules, but their colonies can make sophisticated decisions. It is reasonable to expect we should also be able to do more as groups than as individuals. That leads to two questions: How do we influence each other? How do our opinions combine into social effects?

More than a century ago, Francis Galton visited the West of England Fat Stock and Poultry Exhibition. There he observed a competition where the contestants had to guess the weight of a fat ox. Galton was surprised when he realized that, by combining the estimates of every participant, he got a median estimate (1,207 pounds) that was very close to the actual figure (1,198 pounds; Galton 1907). The average value of the guesses was even closer, at 1,197 pounds. That suggested groups might be much better at reasoning than individuals.
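
To see why simple aggregation can work so well, consider a small numerical sketch. The simulation below is purely illustrative: the noise level, the number of guessers, and the assumption of independent errors are assumptions made here, not Galton's data. It only shows that, when individual errors are independent, the median and the mean of many guesses land much closer to the true value than a typical individual does.

```python
import random
import statistics

# Illustrative sketch of the wisdom-of-the-crowds effect.
# The noise model and crowd size are assumed for illustration only.
random.seed(42)
TRUE_WEIGHT = 1198  # pounds, the actual figure reported in the anecdote

# Each guess is the true weight plus an independent individual error.
guesses = [random.gauss(TRUE_WEIGHT, 80) for _ in range(800)]

crowd_median = statistics.median(guesses)
crowd_mean = statistics.mean(guesses)
typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"median of the crowd:      {crowd_median:.0f} pounds")
print(f"mean of the crowd:        {crowd_mean:.0f} pounds")
print(f"typical individual error: {typical_individual_error:.0f} pounds")
# The pooled estimates land within a few pounds of the true value, while a
# typical individual is off by tens of pounds: independent errors largely
# cancel when combined.
```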

The observation that group cognition can sometimes outperform individual reasoning is called the “wisdom of the crowds” (Surowiecki 2005). It has been partially confirmed by several other studies. The evidence that groups can, most of the time, reason better than the average individual seems solid (Hill 1982), but group reasoning does not fare so well against the reasoning of the most competent member of the group. Neither does it do well against statistically pooled responses or mathematical models.

On the matter of biases, it seems there is no clear winner (Kerr and Tindale 2004). Overconfidence may diminish when we use groups (Sniezek and Henry 1989), but the details of how the group is allowed to interact can have an important impact on the correctness of group estimates. Some questions elicit deeply emotional answers. When those reactions are brought into the group, interaction between members can become very detrimental to accuracy. The desire to agree (or at least not disagree) with the group seems to be important. That desire can undermine our critical thinking. Janis called that effect groupthink (Janis 1972). We can observe this kind of phenomenon in many circumstances where group belonging is important, including political and religious discussions and communities of sports fans. When groupthink happens, the group might reason worse than its best members would. In other words, the group can perform much worse than the average individual would if left alone.

In a series of experiments, Lorenz et al. (2011) explored how much social influence can damage group wisdom. They asked people factual questions. When there was no interaction between the individuals, the aggregated answers did improve, confirming the wisdom-of-the-crowds effect. After that initial round, the researchers showed the subjects information about others’ answers and asked whether they wanted to revise their own. What they observed was a decrease in the diversity of answers. In some cases, the correct answer ended up lying outside the interval spanned by the revised answers. Despite the decrease in the quality of the estimates, confidence seemed to increase.
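
A toy model can illustrate the mechanism. The sketch below is not the protocol Lorenz et al. used; the number of agents, the strength of the social pull, and the noise are assumptions made only for illustration. In it, every agent repeatedly moves part of the way toward the group average: the average barely changes, but the diversity of answers collapses, and the true value can end up outside the range of answers.

```python
import random
import statistics

# Toy sketch of social influence shrinking the diversity of estimates.
# All parameters are assumptions for illustration, not the setup of
# Lorenz et al. (2011).
random.seed(1)
TRUE_VALUE = 100.0
N_AGENTS, ROUNDS, PULL = 50, 10, 0.4

# Initial independent estimates, biased slightly low on average.
estimates = [random.gauss(90, 25) for _ in range(N_AGENTS)]

def spread(xs):
    return max(xs) - min(xs)

print(f"before: mean={statistics.mean(estimates):.1f}, spread={spread(estimates):.1f}")

for _ in range(ROUNDS):
    group_mean = statistics.mean(estimates)
    # Each agent moves a fraction PULL of the way toward the group mean.
    estimates = [x + PULL * (group_mean - x) for x in estimates]

print(f"after:  mean={statistics.mean(estimates):.1f}, spread={spread(estimates):.1f}")
print(f"true value inside range? {min(estimates) <= TRUE_VALUE <= max(estimates)}")
# The group mean barely moves, but the spread collapses, so the true value
# can fall outside the interval of answers: less diversity and more apparent
# agreement, with no gain in accuracy.
```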

There are, of course, ways to diminish the problem. Evidence from the collaborative writing of articles on Wikipedia suggests that keeping the team as diverse as possible can improve the quality of the entries (Lerner and Lomi 2018). Indeed, models inspired by data on the interaction of editors suggest the outcome might be of lower quality if one blocks the contributions of people with more extreme, potentially problematic points of view (Rudas et al. 2017). Preserving the diversity of opinions seems to matter if we want better results.

That diversity, however, can easily be destroyed by trivial social interaction. Solomon Asch (1955, 1956) performed a series of now-classic experiments on social influence. His experiments highlight how easily that influence can destroy the expected ability of groups to reason better. He asked his volunteers to look at a version of the picture in Figure 3.1.

The question he asked the volunteers was this: Which of the three lines in the right panel, A, B, or C, has the same length as the line in the left panel? The question was asked of two groups. In one, the control condition, there was no social influence, and 99 percent of the subjects answered correctly: line C. The volunteers in the treatment group were subjected to social influence. Before answering, they listened to others claiming A was the correct choice. The people who provided that wrong answer were actors. They were not being tested; they were there to see whether they could influence people to answer wrongly—and they did. When at least three people answered A before the actual volunteers did, those volunteers tended to imitate the answers of the others, picking the wrong choice up to 75 percent of the time.

From that study alone, it is not clear why people made so many mistakes. It is possible that they still thought C was the correct answer but said A to fit in with the group. They might have changed their perception to match what everyone else was saying, or perhaps a little of both. Thanks to functional magnetic resonance imaging (fMRI), recent experiments have given us more clues as to what might be going on. Eisenberger et al. (2003) observed that when we experience rejection, our brain activity is similar to that of real physical pain. And Klucharev et al. (2009) noticed that our tendency to conform to the opinion of a group engages the learning mechanisms of our brain. That suggests, though not conclusively, that at least to some extent we really do change our opinions.


Figure 3.1 Representation of the figure shown by Asch to the subjects of his experiment.

Influence toward a wrong answer can have bad, even deadly, consequences, as we are witnessing with the comeback of fatal diseases brought on by anti-vaxxer campaigns. That makes it important to figure out when groupthink is more likely to happen. Experiments do show that not all influences work equally. Surprisingly, the level of expertise of the source of information might have little effect on how much she can influence others. That was suggested by experiments performed by Domínguez et al. (2016). They observed that brain activity did not increase when the influencers were experts, but stronger activation was observed when the subjects had a history of agreeing with the source of information. People we tend to agree with more frequently cause stronger effects in our brains than those with whom we disagree more often. We seem to care less about disagreements with people we did not agree with much in the first place.

De Polavieja and Madirolas (2014), for example, also studied ways to get better estimates in social contexts. Their study suggested it might be better to consider only the opinions of very confident people, that is, to average the estimates only of those who did not change their minds under social influence. Their suggestion was an attempt to recover the initial range of opinions. On the other hand, that strategy might cause us to pay attention only to the most extreme opinions.
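
The aggregation rule described above can be sketched in a few lines. The numbers below are invented for illustration; the only point is the filtering step, averaging the answers of those who did not move after seeing the group, on the assumption that they are the more confident ones.

```python
# Sketch of the "average only the confident" aggregation described in the text.
# The data are made-up illustrations, not the authors' dataset: each tuple
# holds one individual's estimate before and after social influence.
estimates = [
    (102, 102),  # unchanged: treated as confident
    (95, 95),    # unchanged: treated as confident
    (120, 104),  # changed after seeing the group
    (60, 98),    # changed after seeing the group
    (101, 101),  # unchanged: treated as confident
]

confident = [before for before, after in estimates if before == after]
naive_average = sum(after for _, after in estimates) / len(estimates)
confident_average = sum(confident) / len(confident)

print(f"average of all post-influence answers:    {naive_average:.1f}")
print(f"average of confident (unchanged) answers: {confident_average:.1f}")
# Averaging only those who did not move tries to recover the information in
# the original, independent estimates; the trade-off noted in the text is
# that the unchanged answers may also be the most extreme ones.
```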

The composition of the group also seems to be an important factor, as we saw in the Wikipedia example. Answers are also influenced by framing effects—they often depend on how the question is asked or framed. While studying those effects, Ilan Yaniv observed that framing problems tended to be smaller when the group was more diverse (Yaniv 2011). Homogeneous groups, on the other hand, were too susceptible to influence; they actually performed worse than the individuals in the group.

In many of those studies, social influence happened between equals. There were often neither positions of power nor any recognized expertise among the volunteers. In real life, however, we often find ourselves in groups where a hierarchy exists. Positions of power add a new dimension we must account for if we want to understand how groups reason.

Influence under a hierarchy was the subject of a very famous (and also infamous) experiment by Stanley Milgram (1963). Milgram was interested in understanding how Nazism could have dominated Germany, since most Germans were not murderous psychopaths. To answer that question, he prepared an experiment in which a scientist was in the room controlling the situation. Two other people, the ones supposedly being tested, were assigned specific roles. The first one was tied to a chair. That chair was connected to a machine that could be turned on to administer electric shocks to the sitting person. The task of the second individual was to press the button that delivered the shock when told to do so.

The first subject was asked questions. When the subject answered correctly, nothing happened. Each incorrect answer, however, was to be punished with a shock, starting at the small voltage of 15V. Each error made the shock 15V stronger than the previous one, up to a final shock of 450V. The researchers described the experiment to the subjects as being about the effects of nervousness on people’s accuracy in answering questions, but that was not the actual setup. Unknown to the person who inflicted the shocks, no real shock was being applied. The person answering the questions was an actor instructed to behave as if the shocks were real. The actor would get some answers right and some wrong, showing some discomfort at first. Eventually, the actor would beg for the experiment to stop. He showed very clear signs of distress and pain, but the scientist would instruct the second subject to continue with the shocks despite those pleas.

Many of the tested people showed clear signs of extreme stress while hearing the cries of pain from the actor. Despite that, Milgram reported that 65 percent of them kept obeying the scientist up to the highest voltage. That experiment had several problems. It can be criticized in many ways, including the serious ethical problem of the horrible psychological pain it caused to the people who kept pressing the button. It also seems there were a number of problems with how well the experimental script was followed (Perry 2013). The 65 percent figure is probably an inflated estimate. But the experiment still showed how a trusted authority figure can make people act even against their best judgment. Milgram was not looking at possible changes of opinion. The actions he observed were not what we would expect from normal, thinking human beings.

Group cognition can be an important asset. Evidence suggests one way to make group cognition better is to diminish the influence between individuals, but that is not always possible. We often have to make decisions on subjects outside our expertise. To do that properly, we need the help of others. At the very least, we need information provided by others. Nowadays, that is not hard to find. We live in a time when information is easily available in overwhelming quantity. So, another important question is how well we make use of that easy access.

Once again, we seem to use reasonable strategies; and, once again, while those strategies make sense, they also carry bad consequences. Given the amount of available information, we need ways to separate what is relevant and trustworthy from piles of garbage. Identifying reliable sources becomes fundamental, but if we are not experts in a field of knowledge, we might simply not have the skills to estimate who is reliable. If every possible expert agreed, it would make sense to listen to them. Too often, however, someone disagrees, and we are left with the problem of deciding who is right. The disagreeing person might not even be an expert; perhaps she only claims to be. But we might not know how to identify expertise.

Under those circumstances, we need to guess how reliable our sources are when the only thing we have is our own opinions. We may start with no opinion on the matter, but as we collect information, some opinion starts taking shape. Either from the beginning or from some later point, we tend to find ourselves favoring one side of any debate. Since one point of view seems more likely to be true to us, we estimate that the people who agree with us know better. They sound more trustworthy.

That is a well-documented bias called confirmation bias (Nickerson 1998). When looking for data or arguments, we look for those pieces of information that agree with our views; we do not look for disconfirming evidence. The problem that behavior can cause should be evident. If we are lucky enough to start with a correct opinion, good. We will reinforce it and nothing too bad happens. But if we are not that lucky, we will only reinforce our erroneous point of view. As we avoid opinions that could show us we are wrong, confirmation bias can compromise our ability to learn and correct ourselves. It also seems confirmation bias might have an important effect on our overconfidence. By looking for reasons we are right, we only make ourselves more confident; we do not improve the quality of our estimates.

Koriat and collaborators performed a series of experiments on the relationship between overconfidence and our tendency to consider only one idea. They posed questions to their volunteers and measured both confidence and accuracy (Koriat et al. 1980). The volunteers’ overconfidence tended to diminish when they received specific instructions to look for reasons why their answers might be wrong. The same was not true when the volunteers were told to give reasons why they might be right, or when the instructions were to give reasons both for and against their answer. In that last case, people seemed to look only for weak negative reasons they could easily counter. They performed the task, but in a way that made their favored answer look good.

Confirmation bias does not happen only when we look for sources of information, though. It, or something similar, can also be found in the ways we reason. Taber and Lodge observed that when we receive arguments about a political issue, we do not treat them equally (Taber and Lodge 2006). When an argument supports our political views, we accept it at face value, but when we hear arguments that are in conflict with those views, we show proper skepticism and we analyze their merit. We look for reasons that would show why those arguments might be wrong, something we do not do when we agree with the conclusions.

Indeed, it seems we even fail at tasks we are capable of performing, if that helps support our beliefs. Dan Kahan and collaborators (2013) have reported results showing that mathematically educated people make serious errors when analyzing data that conflict with their personal opinions. In a control scenario, where the same data were about a neutral problem, people with better numeracy skills performed well at interpreting them. However, when the problem was a very controversial issue (e.g., gun control), people with better numeracy interpreted the data in ways that agreed with their initial points of view, regardless of what the data actually showed. People with better numeracy became even more polarized on the subject than people less well trained in mathematics. That suggests that being smarter can mean a stronger ability to distort reality to conform to one’s own point of view. That might not be conscious, but, at least in those observations, being smarter did not seem to help people do a better job. Indeed, in a more recent study, Kahan and others (2017) observed that polarization over politically charged issues such as gun control does not decrease with better numerical skills; quite the opposite. After looking at the same data, those more capable, at least in principle, of analyzing data correctly showed a stronger tendency toward more polarized opinions.

Indeed, Stanovich and collaborators (2013) have made similar observations about what they called myside bias, our tendency to evaluate data in a biased way that supports our opinions. The size of the bias their subjects showed was not correlated with their intelligence. It should be no surprise by now to learn that Rollwage and others (2018) observed that the more radical people’s positions on a topic, the less capable they seem to be of estimating the accuracy of their own judgments. Anyone who has ever watched people with strong political opinions debating should find that conclusion quite easy to accept.

The Reasons of Our Reason

Being too confident and ignoring the actual chances that we might be wrong seem to serve no practical purpose. Neither does allowing ourselves to be convinced by opinions that are clearly wrong. Those characteristics do not seem to be good heuristics; they interfere with our ability to find good answers. Is it possible that, despite those problems, we might gain something from those errors of cognition?

Hugo Mercier and Dan Sperber (Mercier and Sperber 2011) have proposed an answer to that question. We have always assumed our mental and verbal skills evolved for the pursuit of truth. We can use them for that, after all. It is possible, however, that they evolved for other purposes. Mercier and Sperber observed that, since we are social beings, that characteristic may have shaped how we reason. After all, we evolved in an environment where, if others believed what you said, you would have more power, and that meant better chances of surviving. There must have been strong pressure to be able to argue well and convince people. Convincing could be an advantage regardless of the correctness of the reasoning or the conclusion. Having followers and believers can provide major advantages. Mercier and Sperber therefore proposed their Argumentative Theory of Reasoning (ATR). ATR states that our reasoning exists to make us competent at debating and convincing others, and that is often not the same as arriving at the right answer.

The idea that we reason to make convincing arguments (which might turn out to be true or not) also seems to apply to children (Mercier 2011b) as well as to other cultures (Mercier 2011a). We may still use our intellects to pursue correct answers to the problems we face, and finding better answers might also have contributed to shaping our reasoning to some degree. The evidence that a good part of our argumentation skill evolved to allow us to win arguments is, however, quite compelling.

That idea provides an explanation for our overconfidence. Being confident about what we say is a better way to convince others than showing doubt. If we want to convince rather than be right, looking for ways to defend our points of view is the more effective strategy. Confirmation bias now makes sense. We do not need to find counterarguments to our opinions if we will not use them, unless, of course, we are anticipating a debate and looking for ways to answer those counterarguments. Our so-far-unexplained biases start making sense if we reason not for truth but for social reasons (Mercier and Sperber 2017).

The strategies we use for convincing others might change from one individual to the next. Strategies that are efficient at changing the mind of one person might fail with someone else. Different political orientations might be associated with differences in our brains. Political conservatives seem to be more structured in their decision making; they might have a greater need for order and closure. Liberals, meanwhile, seem to tolerate ambiguity and new experiences better (Jost et al. 2003). Those observations seem to be associated with differences in brain structure (Amodio et al. 2007). Quite interestingly, the same differences are already noticeable in the brain structure of young adults. Being a liberal seems to correspond to a larger gray matter volume in the anterior cingulate cortex, while conservatives tend to have a larger right amygdala (Kanai et al. 2011). Right now, it is not clear whether those differences cause the political orientation, but the functions of those brain regions suggest that might be the case. The amygdala, for example, is involved in fear regulation; a larger amygdala might suggest someone is more responsive to fear. The anterior cingulate cortex, on the other hand, is involved in monitoring uncertainty; a larger region might, in principle, allow a greater tolerance of uncertainty. However, recent experiments suggest that the two groups might not differ in the way they respond to threats (Bakker et al. 2019).

Those observations do not mean, however, that one group reasons better or worse than the other. They might only represent a difference in preferences—and that, obviously, exists between both groups. Still, the question of whether one group would reason in a more competent way did get raised in the literature. Dan Kahan tested that.

While ideology influences which opinions we will trust, people with different ideologies seem to be influenced in the same way by new information (Kahan 2013). In his experiments, Kahan compared North American conservatives and liberals. Both sides showed the same tendency to fit reports about empirical evidence to their ideological positions. Both sides distorted the meaning of the evidence to support their own views.

That distortion, we know now, is not a sign of stupidity—quite the opposite. Kahan also tested the cognitive abilities of his subjects, and the results showed the reverse of what would be expected if lack of intelligence were the cause of this alignment and distortion. Those who scored highest on his cognition test “were the most likely to display ideologically motivated cognition.” That led Kahan to propose his Expressive Rationality Thesis (ERT). ERT claims people process information in ways that promote their individual ends. It is a similar idea to the Argumentative Theory of Reasoning. Both ERT and ATR state that we do not naturally use our reasoning or argumentative skills to get closer to the truth. Instead, we use our brains and language to establish our identities, to convince our allies, or to agree with their positions.

Those purposes seem to be the main drivers of our mental skills. When deciding which expert was more reliable, people agreed more with experts who, judging by their clothes, presence, type of beard, and so on, looked like someone who would share their points of view (Kahan 2010). We unconsciously manipulate information and discourse to advance our ideological positions. Those are, indeed, positions that both define and are defined by the group to which we belong. We do not treat information that agrees with our opinions the same way we treat information that disagrees with us (Taber and Lodge 2006), and that happens not only on a conscious level, as a strategy, but also subconsciously. The same pattern is also observed when we make unconscious estimates (Gilead et al. 2018).

People might treat their beliefs as if they were valued possessions, things they want to protect (Abelson and Prentice 1989). We might want to defend our beliefs, regardless of whether they are correct or not. In some circumstances, correctness might not be an issue. We might just be talking about preferences, where there might be no right or wrong. But serious debate could benefit if each side was capable of, at least, understanding the arguments of the opposing view. That is often not the case. When we have strong views, for example, on issues like abortion (Luker 1985) or politics (Sears and Whitney 1973), many of us seem to be incapable of even considering ideas that are opposite to our beliefs.

Consistency sounds like a good characteristic. Consistent people are considered reliable. If I start changing my expressed opinions too often, people will likely think I am either crazy or not very smart. Experimental evidence does show, as we have seen, that smart people are better at defending their points of view. That does not mean, unfortunately, that consistent people are more likely to be right. More intelligence could, in principle, help us make an impartial and solid analysis of available data. If that were the case, a better cognitive ability should make it more probable our reasoning would get us closer to truth. It seems, however, that we stick to our initial ideas, and we use our mental skills to defend them. When we do that, we are not moving away from any initial concepts we might have. In that case, our chances of being right might be determined only by luck, not our competence.

If we understand consistency in its philosophical meaning, as the requirement that I should not hold contradictory beliefs, it is a very desirable property. If you were to claim that a certain cat inside a box is, at the same time, both alive and not alive, that would be seen, at best, as a paradox. If you are a respected scientist making that claim, people might actually take it seriously, maybe even too seriously, even if what you meant was not that this would actually happen, but that a certain interpretation of quantum mechanics had to be wrong because it led to that conclusion. Still, in most situations, you would be justifiably dismissed as a lunatic. You can still make claims of uncertainty, of course. Saying that the cat might be alive and might be dead is not inconsistent.

We may also understand consistency as commitment. In this case, commitment means being willing to support a cause, to maintain your ideas even in the face of contrary evidence. Even in this sense, commitment seems to have a good reputation among us, but that reputation is far better than it should be. Being committed to an idea when the evidence points elsewhere is a rather bad choice. It prevents learning; it prevents progress.

There are some circumstances where commitment to an idea might still be acceptable. For example, you might be committed to a value, say, saving lives. Saying that saving lives is a good thing might not be a description of the world. It is a choice about what we think good is. If there is a moral code we must follow, it seems saving lives would be included. If we hold life as a value, we can still change our minds about the best strategies to achieve that goal. And, of course, we might want to inspect our other values when making any decisions. There might be inconsistencies there (of the bad, illogical type), as well as difficulties in achieving all our goals. If you also give importance to avoiding suffering, you will find circumstances where both goals conflict, and you will have to make choices. But that is not inconsistency. It is only the unavoidable fact that life is complicated, and we can never have it all. Your commitment to both values may still exist, even if sometimes they cannot both be achieved. But a commitment to an idea about how the world is seems to be a different matter. It is not up to us to decide how the world is. That kind of commitment can cause people to become biased and unable to learn and change their minds.

Rational arguments sometimes have little power to convince us. Emotional discourse seems to have a much stronger effect, but the desire to belong to a group is certainly not our only emotional desire. We are far more complex than that. We want, for example, to feel good about ourselves. Cohen and his team observed how that desire can influence our opinions. They tested how people react to information, including information with which they disagree. They observed a much stronger tendency to accept conflicting information when it was presented in a way that affirmed the volunteers’ sense of self-worth (Cohen et al. 2000).

Affective influences can have an impact on how we perceive many things and how we react to them. Stronger emotions, of course, have a more significant impact (Clore and Schnall 2005). Politicians have long known how to use our emotions to convince populations; the classic example is their common appeal to fear (Witte and Allen 2000). However, it is not only the basic emotions in a message that matter. Emotional states associated with certainty make us feel more sure about our decisions than emotions associated with uncertainty (Tiedens and Linton 2001). Sounding confident, once more, convinces better. Interestingly, adopting views that do not fit with the majority of the group also seems to have an emotional side. Imhoff and Lamberty (2017) observed that in supporters of conspiracy theories. In that case, it seems the need to feel special and unique might be linked to a stronger tendency to accept conspiracy beliefs.

Convincing others and ourselves should be a rational process, but it is far more about our emotions. That is a problem that might not have a permanent solution. We can, of course, learn to control some of the damage. For example, we should pay more attention to how some people try to manipulate us by using emotional discourse. Some people do it consciously; others are not aware but do it just the same. We might want to avoid situations where our emotions would work against our judgment, that is, if we care about getting as close as possible to true or best answers. If all we want is to fit into our groups, influence them, and ascend socially, we may already be very well adapted to the task.

Up to now, we have seen that our intuition and our feelings can fool us. We need to find ways around that problem. That means we now need to investigate whether there are better tools we can use to form our opinions and make decisions.

References

Abelson, R. P., and D. A. Prentice. 1989. “Beliefs as Possessions: A Functional Perspective.” In Attitude Structure and Function. London: Psychology Press.

Amodio, D. M., J. T. Jost, S. L. Master, and C. M. Yee. 2007. “Neurocognitive Correlates of Liberalism and Conservatism.” Nature Neuroscience 10(10), 1246–47.

Asch, S. 1955. “Opinions and Social Pressure.” Scientific American 193(5), 31–35.

Asch, S. E. 1956. “Studies of Independence and Conformity: A Minority of One against a Unanimous Majority.” Psychological Monographs 70(416), 70.

Bakker, B., G. Schumacher, C. Gothreau, and K. Arceneaux. 2019. “Conservatives and Liberals Have Similar Physiological Responses to Threats: Evidence from Three Replications.” PsyArXiv, https://psyarxiv.com/vdpyt/.

Clore, G. L., and S. Schnall. 2005. “The Influence of Affect on Attitude.” In D. Albarracin, B. T. Johnson, and M. P. Zanna (Eds.), Handbook of Attitudes. Mahwah, NJ: Erlbaum.

Cohen, G. L., J. Aronson, and C. M. Steele. 2000. “When Beliefs Yield to Evidence: Reducing Biased Evaluation by Affirming the Self.” Personality and Social Psychology Bulletin 26(9), 1151–64.

Domínguez D., J. F., S. A. Taing, and P. Molenberghs. 2016. “Why Do Some Find It Hard to Disagree? An fMRI Study.” Frontiers in Human Neuroscience 9, 718.

Eisenberger, N. I., M. D. Lieberman, and K. D. Williams. 2003. “Does Rejection Hurt? An fMRI Study of Social Exclusion.” Science 302(5643), 290–92.

Galton, F. 1907. “Vox Populi.” Nature 75(1949), 450–51.

Gilead, M., M. Sela, and A. Maril. 2018. “That’s My Truth: Evidence for Involuntary Opinion Confirmation.” Social Psychological and Personality Science 10(3), 393–401.

Hill, G. W. 1982. “Group versus Individual Performance: Are n+1 Heads Better Than One?” Psychological Bulletin 91(3), 517–39.

Imhoff, R., and P. K. Lamberty. 2017. “Too Special to Be Duped: Need for Uniqueness Motivates Conspiracy Beliefs.” European Journal of Social Psychology 47(6), 724–34.

Janis, I. L. 1972. Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascoes. Boston: Houghton Mifflin Company.

Jost, J. T., J. Glaser, A. W. Kruglanski, and F. J. Sulloway. 2003. “Political Conservatism as Motivated Social Cognition.” Psychological Bulletin 129(3), 339–75.

Kahan, D. 2010. “Fixing the Communication Failure.” Nature 463(7279), 296–97.

Kahan, D. M. 2013. “Ideology, Motivated Reasoning, and Cognitive Reflection.” Judgment and Decision Making 8(4), 407–24.

Kahan, D. M., E. Peters, E. C. Dawson, and P. Slovic. 2013. “Motivated Numeracy and Enlightened Self-Government.” Yale Law School, Public Law Working Paper No. 307.

Kahan, D. M., E. Peters, E. C. Dawson, and P. Slovic. 2017. “Motivated Numeracy and Enlightened Self-Government.” Behavioural Public Policy 1(1), 54–86.

Kanai, R., T. Feilden, C. Firth, and G. Rees. 2011. “Political Orientations Are Correlated with Brain Structure in Young Adults.” Current Biology 21(8), 1–4.

Kerr, N. L., and R. S. Tindale. 2004. “Group Performance and Decision Making.” Annual Review of Psychology 55, 623–55.

Klucharev, V., K. Hytönen, M. Rijpkema, A. Smidts, and G. Fernández. 2009. “Reinforcement Learning Signal Predicts Social Conformity.” Neuron 61(1), 140–51.

Koriat, A., S. Lichtenstein, and B. Fischhoff. 1980. “Reasons for Confidence.” Journal of Experimental Psychology: Human Learning and Memory 6(2), 107–18.

Lerner, J., and A. Lomi. 2018 (August). “Diverse Teams Tend to Do Good Work in Wikipedia (But Jacks of All Trades Don’t).” 214–21.

Lorenz, J., H. Rauhut, F. Schweitzer, and D. Helbing. 2011. “How Social Influence Can Undermine the Wisdom of Crowd Effect.” Proceedings of the National Academy of Sciences 108(22), 9020–25.

Luker, K. 1985. Abortion and the Politics of Motherhood. Berkeley: University of California Press.

Mercier, H. 2011a. “On the Universality of Argumentative Reasoning.” Journal of Cognition and Culture 11, 85–113.

Mercier, H. 2011b. “Reasoning Serves Argumentation in Children.” Cognitive Development 26(3), 177–91.

Mercier, H., and D. Sperber. 2011. “Why Do Humans Reason? Arguments for an Argumentative Theory.” Behavioral and Brain Sciences 34(2), 57–111.

Mercier, H., and D. Sperber. 2017. The Enigma of Reason. Cambridge, MA: Harvard University Press.

Milgram, S. 1963. “Behavioral Study of Obedience.” Journal of Abnormal and Social Psychology 67(4), 371–78.

Nickerson, R. S. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.” Review of General Psychology 2(2), 175–220.

Perry, G. 2013. Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. Brunswick: Scribe Publications.

Polavieja, G. D., and G. Madirolas. 2014. “Wisdom of the Confident: Using Social Interactions to Eliminate the Bias in Wisdom of the Crowds.” arXiv:1406.7578.

Rollwage, M., R. J. Dolan, and S. M. Fleming. 2018. “Metacognitive Failure as a Feature of Those Holding Radical Beliefs.” Current Biology 28(24), 4014–21.

Rudas, C., O. Surányi, T. Yasseri, and J. Török. 2017. “Understanding and Coping with Extremism in an Online Collaborative Environment: A Data-Driven Modeling.” PLoS ONE 12(3), 1–16.

Sears, D. O., and R. E. Whitney. 1973. Political Persuasion. Morristown, NJ: General Learning Press.

Sniezek, J. A., and R. A. Henry. 1989. “Accuracy and Confidence in Group Judgment.” Organizational Behavior and Human Decision Processes 43(1), 1–28.

Stanovich, K. E., R. F. West, and M. E. Toplak. 2013. “Myside Bias, Rational Thinking, and Intelligence.” Current Directions in Psychological Science 22(4), 259–64.

Surowiecki, J. 2005. The Wisdom of Crowds. New York: Anchor Books.

Taber, C. S., and M. Lodge. 2006. “Motivated Skepticism of Political Beliefs.” American Journal of Political Science 50(3), 755–69.

Tiedens, L. Z., and S. Linton. 2001. “Judgment under Emotional Certainty and Uncertainty: The Effects of Specific Emotions on Information Processing.” Journal of Personality and Social Psychology 81(6), 973–88.

Witte, K., and M. Allen. 2000. “A Meta-Analysis of Fear Appeals: Implications for Effective Public Health Campaigns.” Health Education & Behavior 27(5), 591–615.

Yaniv, I. 2011. “Group Diversity and Decision Quality: Amplification and Attenuation of the Framing Effect.” International Journal of Forecasting 27(1), 41–49.

