
Logical Problems in Thinking: Hasty Generalization and Overreliance on Authorities


Among the logical problems in thinking that Shermer lists is “hasty generalization” (p. 56): reaching conclusions before the evidence warrants, also known as faulty induction. Induction is reasoning from premises to a probable conclusion; in faulty induction, the conclusion is not warranted. People also describe this kind of thinking as stereotyping. As but one example, when we take a limited range of evidence about an individual and ascribe those qualities to the group of which the person is a member, we are stereotyping. A popular television show, The Big Bang Theory,1 had characters that embodied stereotypes, whether Sheldon Cooper, the brilliant but interpersonally less skilled theoretical physicist, or Amy Farrah Fowler, who for some time was his “girlfriend” before becoming his wife. Those who faithfully watched the show will recall that initially Sheldon described Amy as a “girl” and his “friend” but not his “girlfriend” in the traditional sense of the term. For long-term viewers, the staying power of the series came through the evolution of these characters as they became less true to the stereotypes they represented. Many individuals argue, however, that the portrayal of these characters reinforced unfortunate and hurtful stereotypes about scientists and gender (Egan, 2015).

1Ranked seventh in prime broadcast network television shows in the United States the week of June 1, 2015, according to Nielsen (http://www.nielsen.com/us/en/top10s.html). The show ended in 2019 after 12 seasons.

Hasty generalization: Reaching conclusions before the evidence warrants, or faulty induction.

Faulty induction: Reasoning from the premises to a conclusion that is not warranted.

Hasty generalizations are a problem in many steps of the research process. We can consider the problem of hasty generalization when we talk about how much data are needed before conclusions are warranted. We can also include hasty generalization when we talk about sampling (see Chapter 11). Because humans are limited information processors and pattern seekers, we are eager to take information and package or categorize it; this process makes the information more manageable for us, but it may lead to errors in thinking.

A second kind of logical problem in thinking that Shermer lists is “overreliance on authorities” (pp. 56–57). In many cases, we accept the word or evidence provided by someone we admire without carefully examining the data. In the domain of research, we may rely too heavily on the published word; that is, we assume that when we read a published article, we should unquestioningly accept its data. Unfortunately, as events in academia increasingly demonstrate, we should be far more skeptical about what has been published. Instances of fraud are numerous. Consider the case of fraud involving a graduate student, Michael LaCour (and Donald Green, the apparently unknowing faculty mentor), who published work in Science (LaCour & Green, 2014) showing that people’s opinions about same-sex marriage could be changed by brief conversations (http://retractionwatch.com/2015/05/20/author-retracts-study-of-changing-minds-on-same-sex-marriage-after-colleague-admits-data-were-faked/). LaCour apparently fabricated the data that were the basis of his article, and the story of how this came to light reinforces the idea that findings must be reproducible. Two graduate students at the University of California, Berkeley, David Broockman and Josh Kalla, identified the anomalies in LaCour’s data, which came to light when they tried to replicate the study. This revelation quickly led to the identification of other inconsistencies (e.g., the survey research firm that was supposed to have collected the data had not done so; no Qualtrics file of the data was ever created).

Overreliance on authorities: Trusting authorities without examining the evidence.

Qualtrics: Online platform for survey research.

Reproducibility Project: Project in which researchers are trying to reproduce the findings of 100 experimental and correlational articles in psychology.

The broader issue of reproducibility has been in the news recently with what is known as the Reproducibility Project (https://osf.io/ezcuj/), in which scientists are trying to reproduce the findings of 100 experimental and correlational articles in psychology published in three journals. The results (Open Science Collaboration, 2015) have been less than encouraging, as many replications produced weaker findings than the original studies did. The authors emphasize that science needs both tradition (here, reproducibility) and innovation to advance and to “verify whether we know what we think we know.”

Simply because an article has been published does not mean it is good science. Even well-known researchers publish articles that contribute little to the literature. In Chapter 2, you will see the need to take into account the standards of particular journals (e.g., their acceptance rates, the scope of research they publish, and the rigor of their methodology) rather than treating the work in all journals as equal. Relying on authority without questioning the evidence can lead to mistakes, for example, repeating what may have been weak methodology. As Julian Meltzoff (1998) stated in his useful book about critical thinking in reading research, we should approach the written (here, published) word with skepticism and always ask, “show me.” Meltzoff went on to say, “Critical reading requires a mental set of a particular kind,” and he believed this mental set can be “taught, encouraged, and nurtured” (p. 8). The value of a particular argument has to be demonstrated with evidence that stands up to rigorous questioning. In the research process, being willing to challenge authority by asking questions is an essential skill.
