
Typologies of Dismisinformation


There have been many attempts to typologize forms of both truth (Zimmer et al., 2019) and deception (e.g., Clementson, 2017; de Regt et al., 2020; Garrett et al., 2019; Pawlick et al., 2019), and to theorize their enactment, detection, and management (e.g., Buller & Burgoon, 1996; Levine et al., 2016; Walczyk et al., 2014). Hopper and Bell (1984) conceptualized six types of interactional deception: fictions (e.g., make-believe, exaggeration, irony, white lies, etc.), playings (e.g., jokes, teases, kidding, trick, hoax, etc.), lies (e.g., dishonesty, fib, lie, etc.), crimes (e.g., conspiracy, entrapment, counterfeit, forgery, fraud, etc.), masks (e.g., inveigling, hypocrisy, back-stabbing, concealment, evasion, etc.), and unlies (e.g., distortions, false implications, misrepresentations). O’Hair and Cody (1994) distinguished five types of deception (i.e., lies, evasions, collusions, concealments, and overstatements), differentiated across four motive types defined by the dimensions of target (self vs. other) and outcome valence (positive vs. negative): egoism (lying to help self in a benign way), benevolence (lying to help others), exploitation (lying to help self in a way that hurts others), and malevolence (lying to hurt others). Xiao and Benbasat (2011) differentiated three types of deception: concealment, equivocation, and falsification. Curtis and Hart (2020) identified six types, labeled omissions, failed deceptions, half-truths, white lies, distortions, and blatant lies. Levine et al. (2016) proposed 10 pan-cultural motives for deception: personal transgression, economic advantage, non-monetary personal advantage, social-polite, altruistic, self-impression management, malicious, humor-joke, pathological, and avoidance. Bryant (2008) distinguished real lies, white lies, ambiguous gray lies, and justifiable gray lies along the five dimensions of intention, consequences, beneficiary, truthfulness, and acceptability. In conceptualizing the “dark side of information behavior,” Stone et al. (2019) differentiated deliberate falsifications, sins of omission, sins of commission, and system or process problems.

Drawing on Pareto advantage concepts from economics and game theory, Erat and Gneezy (2012) distinguished four types of black and white lies along the two dimensions of sender advantage or disadvantage and receiver advantage or disadvantage. Pareto improvement refers to situations in which at least one party is better off without making any other party worse off. White lies here are conceptualized as those that increase payoffs for other(s), and black lies are those that decrease the payoffs of the other(s). This approach also informed a lie typology by Cantarero et al. (2018), who differentiated valence (protective/loss-oriented vs. beneficial/gain-oriented lies) by the beneficiary: the liar/self-oriented lies, the liar and others/Pareto-oriented lies, or other(s)-oriented lies. Another game-theoretic typology proposed five types of deception models (Kopp et al., 2018), with three forms of channel attack (overt degradation: generate noise; denial: blind/saturate victim; and covert degradation: hide message in noise) and two forms of processing attack (corruption: mimic a real message; and subversion: subvert processing).
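To make the payoff-based distinction concrete, a minimal sketch follows. The function name, labels, and numeric examples are illustrative assumptions for this sketch, not code or terminology taken from Erat and Gneezy (2012) or Cantarero et al. (2018).

```python
# Illustrative sketch (assumed labels, not the cited authors' code):
# classifying a lie by the sign of the payoff change it causes for the
# sender and for the receiver, following the payoff logic described above.

def classify_lie(delta_sender: float, delta_receiver: float) -> str:
    """Label a lie by how it changes payoffs for the sender (liar) and receiver."""
    if delta_receiver > 0 and delta_sender > 0:
        return "Pareto white lie (helps both parties)"
    if delta_receiver > 0:
        return "altruistic white lie (helps the other at the liar's expense)"
    if delta_sender > 0:
        return "selfish black lie (helps the liar, harms the other)"
    return "spiteful black lie (harms both parties)"

# Hypothetical payoff changes, purely for illustration
print(classify_lie(+2.0, +1.0))  # Pareto white lie
print(classify_lie(-1.0, +3.0))  # altruistic white lie
print(classify_lie(+5.0, -2.0))  # selfish black lie
```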

Closer to the construct of fake news, Berduygina et al. (2019) distinguished two types: unintended misinformation and deliberate misinformation. Karlova and Fisher (2013) and Rubin (2019) differentiated misinformation (i.e., inaccurate information) from disinformation (i.e., deceptive information), arguing that “since misinformation may be false, and since disinformation may be true, misinformation and disinformation must be distinct, yet equal subcategories of information” (p. 6). Hastak and Mazis (2011) conceptualized five types of misleading claim: omission of material facts, misleadingness due to semantic confusion, intra-attribute misleadingness, inter-attribute misleadingness, and source-based misleadingness (see Table 2.2).

Table 2.2 Truthful but misleading claim typology.

Claim type: Omission of material facts. Definition: A key fact or facts have been omitted. Claim subtypes: pure omission; half-truths. Examples: failure to disclose gastro-intestinal upset caused by a drug; "free" offers that do not disclose relevant terms. Relevant theories: schema theory; Grice's theory of conversational norms.

Claim type: Misleadingness due to semantic confusion. Definition: Use of unclear or deliberately confusing language, symbols, or images. Example: a "fresh" product contains artificially processed ingredients. Relevant theory: pragmatic implication.

Claim type: Intra-attribute misleadingness. Definition: A claim about an attribute leads to a misleading inference about the same attribute. Claim subtypes: attribute uniqueness claims; attribute performance claims. Examples: a "no X" claim meant to imply competitors have X; a "contains X" claim meant to imply a substantial amount of X. Relevant theories: feature-absent inferences; pragmatic implications.

Claim type: Inter-attribute misleadingness. Definition: A claim about an attribute leads to a misleading inference about another attribute. Example: a "low X" claim meant to imply a low amount of an associated Y. Relevant theory: logical or probabilistic tie consistency.

Claim type: Source-based misleadingness. Definition: An endorsement by an expert or a consumer testimonial is biased. Claim subtypes: expert source; typical source; multiple sources. Examples: a surgeon endorses a dietary product; an extreme weight-loss testimonial is not representative; a claim that a product is "recommended by X% (N) of sources." Relevant theories: source credibility; source homophily; social proof.

Source: Hastak and Mazis (2011). © 2011 SAGE Publications.

Scholars are increasingly contemplating the range of dismisinformation in digital polymediated environments. Wardle and Derakhshan (2018; see also: Waldrop, 2017) proposed a tripartite Venn diagram model differentiating misinformation, disinformation, and malinformation (see Figure 2.2). This typology arranges the use of information along a continuum from relatively objective informational distortion or falsity to a more subjective interpretation of intent to harm, rather than intent to deceive.


Figure 2.2 Venn diagram typology of Mis-/Dis-/Mal-Information spectrum. Source: Adapted from Wardle and Derakhshan (2018) and First Draft.

A few typologies organize mediated deception across two intersecting dimensions. Tandoc et al. (2017) proposed crossing the level of facticity with the intent to deceive, resulting in four quadrants (see Table 2.3). Ferreira et al. (2020) proposed a typology of fake news in branding and marketing based on the dual dimensions of source of construction and the veridicality with real or fictional events. Internally constructed fake news is generated by a source directly or closely connected to the referent and/or reference of the information, whereas externally constructed fake news is generated by a source that has little or no direct connection with the subject of the news. The result is a quadrant typology (see Figure 2.3). This typology explicitly recognizes the tendency of many forms of fake news to employ frames or contents of valid information as a way of enhancing the appearance of credibility.


Figure 2.3 Quadrant typology of deception forms. Source: Ferreira et al. (2020). © 2020, Emerald Publishing Limited.

Table 2.3 A typology of fake news definitions.

Level of facticity crossed with the author's immediate intention to deceive:
High facticity, high intention to deceive: native advertising; propaganda
High facticity, low intention to deceive: news satire
Low facticity, high intention to deceive: fabrication
Low facticity, low intention to deceive: news parody
Source: Adapted from Tandoc et al. (2017).
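As a rough illustration of how the two dimensions combine, the quadrants of Table 2.3 can be encoded as a simple lookup. The dictionary keys and function name below are hypothetical conveniences for this sketch, not anything specified by Tandoc et al. (2017).

```python
# Illustrative sketch: Table 2.3 encoded as a lookup keyed by
# (level of facticity, author's immediate intention to deceive).

FAKE_NEWS_QUADRANTS = {
    ("high", "high"): ["native advertising", "propaganda"],
    ("high", "low"): ["news satire"],
    ("low", "high"): ["fabrication"],
    ("low", "low"): ["news parody"],
}

def quadrant(facticity: str, intent_to_deceive: str) -> list[str]:
    """Return the fake-news types occupying the given quadrant of Table 2.3."""
    return FAKE_NEWS_QUADRANTS[(facticity.lower(), intent_to_deceive.lower())]

print(quadrant("high", "low"))  # ['news satire']
```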

UNESCO (2018) adopted a unidimensional spectrum of intent to deceive in differentiating seven types of what they describe as information disorder (see Figure 2.4). This typology was intended to assist in ethical journalism education and practice. As such, it distinguishes both genres (e.g., satire or parody) and various degrees of falsification, largely arrayed by the quantity of fabricated content. UNESCO further differentiated the various dimensions along which information disorders vary (Table 2.4). Some of these dimensions are particularly insightful, such as the recognition of the increasing role of AI and bots in generating false content.


Figure 2.4 Deceptive intention spectrum of information distortion. Source: Adapted from UNESCO (2018), attributed to firstdraftnews.org.

Table 2.4 Dimensions of information disorder. Source: Based on UNESCO (2018).

Agent dimensions:
Actor type: Official / Unofficial
Level of organization: None / Loose / Tight / Networked
Type of motivation: Financial / Political / Social / Psychological
Level of automation: Human / Cyborg / Bot
Intended audience: Members / Social Groups / Entire Societies
Intent to harm: Yes / No
Intent to mislead: Yes / No

Message dimensions:
Duration: Long-term / Short-term / Event-based
Accuracy: Misleading / Manipulated / Fabricated
Legality: Legal / Illegal
Imposter type: No / Brand / Individual
Message target: Individual / Organization / Social Group / Entire Society

Interpreter dimensions:
Message reading: Hegemonic / Oppositional / Negotiated
Action taken: Ignored / Shared in support / Shared in opposition
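One way to see how these dimensions could operate as a coding scheme is to sketch them as a record type. The class and field names below are assumptions made for this sketch, derived from Table 2.4; they are not an official UNESCO schema.

```python
# Illustrative sketch (assumed schema, adapted from Table 2.4): one way a
# coder might record the agent, message, and interpreter dimensions of an
# information-disorder case.

from dataclasses import dataclass

@dataclass
class InformationDisorderCase:
    # Agent dimensions
    actor_type: str          # "official" | "unofficial"
    organization: str        # "none" | "loose" | "tight" | "networked"
    motivation: str          # "financial" | "political" | "social" | "psychological"
    automation: str          # "human" | "cyborg" | "bot"
    intended_audience: str   # "members" | "social groups" | "entire societies"
    intent_to_harm: bool
    intent_to_mislead: bool
    # Message dimensions
    duration: str            # "long-term" | "short-term" | "event-based"
    accuracy: str            # "misleading" | "manipulated" | "fabricated"
    legality: str            # "legal" | "illegal"
    imposter_type: str       # "no" | "brand" | "individual"
    message_target: str      # "individual" | "organization" | "social group" | "entire society"
    # Interpreter dimensions
    message_reading: str     # "hegemonic" | "oppositional" | "negotiated"
    action_taken: str        # "ignored" | "shared in support" | "shared in opposition"

# Hypothetical coded example
example = InformationDisorderCase(
    actor_type="unofficial", organization="networked", motivation="political",
    automation="bot", intended_audience="entire societies",
    intent_to_harm=True, intent_to_mislead=True,
    duration="event-based", accuracy="fabricated", legality="legal",
    imposter_type="brand", message_target="entire society",
    message_reading="oppositional", action_taken="shared in support",
)
```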

Vraga and Bode (2020) sought to frame misinformation in a context that recognizes the normative nature of truth or reality. Given the philosophical and epistemological challenges of determining a ground state of truth, Vraga and Bode suggested a continuum based on how settled and warranted the underlying reality is from which the information provided departs (see Figure 2.5). They thereby recommend three relative states through which information can progress or shift: from controversial, to a more emergent reality, to a more settled truth status. This typology seems well suited to discussions of particular forms of dismisinformation such as pseudoscience and conspiracy theories.


Figure 2.5 Contextual typology of misinformation. Source: Adapted from Vraga and Bode (2020).

While most of these typologies of dismisinformation have been deductive in nature, other approaches have been more inductive in development. Kalyanam et al. (2015) used coder annotation and machine learning to automatically classify “credible” and “speculative” tweets regarding the Ebola outbreak. Sell et al. (2020) examined a 1% sample of all tweets between September 30 and October 30 during the 2014 Ebola outbreak, focusing on a random subsample of the 72,775 tweets in English mentioning “Ebola.” They coded this subset of tweets (N = 3,113) for their veracity (true, false, or partially false) and for whether their intent was joking, opinion, or discord. Of the non-joking tweets, 5% contained false information and another 5% contained partially false/misinterpreted information, often consisting of debunked rumors. Importantly, the misinformation tweets were more likely than the true tweets to be discord-inducing (45% vs. 26%), that is, tweets designed to evoke conflict from other Twitter users. Similarly, Oyeyemi et al. (2014) distinguished “medically correct information,” “medical misinformation,” and “other” (e.g., spiritual) tweets about Ebola in three countries in West Africa and found that most (55.5%) tweets and retweets contained misinformation, with a potential reach of over 15 million readers. Jin et al. (2014) examined 10 common rumors in tweets related to the Ebola outbreak in September through late October 2014 and found that although rumors were common, “they were a small fraction of information propagated on Twitter” (p. 91) and were “more localized, distributed and comparatively smaller in permeation than news stories” (p. 92). Brennen et al. (2020) analyzed 225 pieces of misinformation about COVID-19 from a news fact-checking service, 88% of which were from social media platforms. They distinguished what they referred to as reconfiguration (i.e., “where existing and often true information is spun, twisted, recontextualized, or reworked,” which constituted 59% of the instances) from completely fabricated instances, which represented 38% of the information (p. 1). They further distinguished reconfigured information as misleading content (29%), false context (24%), or manipulated content (6%), whereas fabricated content was divided between imposter or impersonation content (8%) and fabricated content (30%). A remaining 3% of the messages represented satire or parody.
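For readers curious what coder annotation plus machine learning looks like in practice, the following is a generic sketch of a supervised text classifier distinguishing "credible" from "speculative" tweets. It is not the pipeline actually used by Kalyanam et al. (2015) or Sell et al. (2020), and the labeled example tweets are invented placeholders.

```python
# Minimal sketch of supervised tweet classification ("credible" vs.
# "speculative"), in the general spirit of the studies cited above.
# NOT the cited authors' pipeline; the labeled tweets are invented placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-annotated training data (1 = credible, 0 = speculative)
tweets = [
    "WHO confirms new Ebola case counts in the affected district",
    "Officials report a treatment center opening next week",
    "I heard Ebola spreads through the water supply, stay away from taps",
    "Rumor: drinking salt water cures Ebola, please share",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Health ministry confirms lab results for suspected case"]))
```

In the cited studies, far larger annotated samples and richer features were involved; the sketch only conveys the general shape of the inductive, annotation-driven approach.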

Based on over 20 million tweets across over 4 million users commenting on the 2018 State of the Union address and the 2016 presidential election, Bradshaw et al. (2020) developed an inductively generated typology of fake news based on five a priori criteria: professionalism (i.e., “purposefully refrain from providing clear information about real authors, editors, publishers, and owners, and they do not publish corrections of debunked information,” p. 176); counterfeit (i.e., “sources mimic established news reporting by using certain fonts, having branding, and employing content strategies,” p. 176); style (i.e., “propaganda techniques to persuade users at an emotional, rather than cognitive, level,” p. 177); bias (i.e., “highly biased, ideologically skewed” publishing “opinion pieces as news,” p. 177); and credibility (i.e., “report on unsubstantiated claims and rely on conspiratorial and dubious sources,” p. 178). The result was a five-category typology of political news (professional news outlets, professional political sources, divisive and conspiracy sources, other political news and information, and “other”), allowing a direct dichotomous comparison between “professional” news outlets and “divisive and conspiracy sources.”

Fake news takes numerous potential forms of misinformation in the transmedia environment (e.g., Tandoc et al., 2017), including “false connection (subtitles that do not correspond to the content), false context, context manipulation, satire or parody (without explicit intentionality), misleading content (misuse of data), deceiving content (use of false sources), and made-up content (with the intention of manipulating public opinion and harming)” (Alzamora & Andrade, 2019, p. 110). Just as important, however, are the distinctions between fake news and some of its conceptual cousins that would be excluded from such definitions or operationalizations of fake news. For example, fake news is distinct from (i) unintentional informational mistakes, (ii) rumors that do not derive from news, (iii) conspiracy theories, which are likely to be believed as true by their propagators, (iv) satire not intended to be factual, (v) false statements made by politicians, and (vi) messages intended and framed as opinion pieces or editorials (Allcott & Gentzkow, 2017). Others have attempted to distinguish “serious fabrications,” “large scale hoaxes,” and “humorous fakes” such as stories in The Onion (Bondielli & Marcelloni, 2019). There are, however, gray areas among these. For example, a politician’s false statements that are reported without any critical concern for their veracity (i.e., reported as a priori factual or potentially factual), or conspiracy theories that contain or rely upon verifiably false claims, may well overlap with fake news, especially when news reporting itself gets duped by such false forms of information. Alternatively, conspiracy theories have been typologized by the extent to which they reflect (i) general versus specific content and structure, (ii) scientific versus non-scientific topics, (iii) ideological versus neutral valence, (iv) official versus anti-institutional agendas, and (v) alternative explanations versus denials (Huneman & Vorms, 2018).

Another example of a gray area in such typologies is conspiracy theories that are not disprovable at a given point in time and that may be plausible and feasible yet do not meet professional standards of veracity. For example, the rumor regarding COVID-19 that the SARS-CoV-2 virus originated in a laboratory appears plausible to approximately a third of the US population, with 23% believing it was engineered, 6% believing it escaped accidentally from a laboratory, and another 25% indicating they are unsure of its origins (Schaeffer, 2020). As these narratives fit with certain political agendas of rhetorical scapegoating, and given that the contrary narrative of natural zoonotic infection (Calisher et al., 2020; CDC, 2019) is merely the relative consensus of scientists, it is difficult to know precisely how to categorize such “news.”

Technologically adapted forms of dismisinformation present a complicated category. For example, one “category of social bots includes malicious entities designed specifically with the purpose to harm. These bots mislead, exploit, and manipulate social media discourse with rumors, spam, malware, misinformation, slander, or even just noise” (Ferrara et al., 2016, p. 98). The role of machines (Scheufele & Krause, 2019), bots, algorithms, AI, and “computational propaganda” (Bradshaw & Howard, 2018) increasingly needs to be included in typologies of misinformation: the logics may be intentional, but the information itself, upon which such logics are applied, may or may not be intentionally fake, or may be intended more to sow chaos or political division than to mislead per se.

Such malign bots have already begun to be employed for political purposes. A study of tweets about the 2016 presidential election and the subsequent State of the Union address found that polarizing and conspiracy tweets were almost twice as likely (27.8%) to involve amplifier accounts (bots) as were tweets from professional news outlets (15.5%) (Bradshaw et al., 2020). It is unsurprising, therefore, that bots are beginning to play a role in disease outbreaks and the public response to those outbreaks. For example, bots are often created with political purpose and intent and are algorithmically designed to engage in trend hijacking, a tendency to “ride the wave of popularity of a given topic … to inject a deliberate message or narrative in order to amplify its visibility” (Ferrara, 2020b, p. 17). In one large social media dataset, bot accounts were substantially more likely than human accounts to be the carriers of alt-right conspiracy theories (Ferrara, 2020a).

“Though spam is not always defined as a form of false information, it is somehow similar to the spread of misinformation” that facilitates or promotes “the ‘inadvertent sharing’ of wrong information when users are not aware of the nature of messages they disseminate” (Al-Rawi et al., 2019, p. 54). Graham et al. (2020) identified a bot cluster of tweets including misinformation and disinformation regarding mortality statistics in Spain and Mexico, many of which contained graphic images of people with body disfigurements and diseases. Yet there was no immediately discernible malicious intent or objective behind the tweet stream. In other instances, the distinction between disinformation and routine political polarization or identity politics may be difficult to ascertain. For example, in the Graham et al. (2020) data, one bot cluster of tweets constituted a positive message campaign for the Saudi government and their Crown Prince, along with Islamic religious messages, aphorisms, and memetic entertainment techniques as click-bait. Another bot cluster was more extreme in its partisanship, representing tweets critical of Spain’s handling of the epidemic and hyper-partisan criticisms and complaints suggesting the government was fascist. Thus, the role of “intention” becomes problematized in operationalizing fake news, misinformation, and conspiracy theory in software-based mediation contexts.

The importance of this particular form of dismisinformation is suggested by a study of 14 million tweets sent by over 2.4 million users, which found that mentions of CNN were a dominant theme, and that there was “not a single positive attribute associated with CNN in the most recurrent hashtags,” indicating “that conservative groups that are linked to Trump and his administration have dominated the fake news discourses on Twitter due to their activity and use of bots” (Al-Rawi et al., 2019, p. 66). Another study of 14 million Twitter messages found that “social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions” (Shao et al., 2018, p. 1). Gallotti et al.’s (2020) analysis of 112 million messages across 64 languages about COVID-19 estimated that approximately 40% of the online messages were generated by bots. Another study of 43.3 million English language tweets found that “accounts with the highest bot scores post about 27 times more about COVID-19 than those with the lowest bot scores” (Ferrara, 2020b, p. 8).
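To make the bot-score comparison concrete, a minimal sketch of how such a decile ratio could be computed from per-account data is shown below. The data structure, function name, and toy numbers are hypothetical assumptions for this sketch; it is not Ferrara's (2020b) actual analysis.

```python
# Illustrative sketch (not the cited study's method): comparing COVID-19
# posting volume between accounts in the top and bottom deciles of a bot score.
# `accounts` is a hypothetical list of (bot_score, n_covid_tweets) pairs.

def decile_posting_ratio(accounts: list[tuple[float, int]]) -> float:
    """Ratio of mean COVID-19 tweet counts: top bot-score decile vs. bottom decile."""
    ranked = sorted(accounts, key=lambda a: a[0])  # sort by bot score, ascending
    k = max(1, len(ranked) // 10)                  # decile size (at least one account)
    low, high = ranked[:k], ranked[-k:]
    mean = lambda grp: sum(n for _, n in grp) / len(grp)
    return mean(high) / mean(low)

# Hypothetical toy data: (bot_score, covid_tweet_count)
sample = [(0.05, 3), (0.10, 2), (0.15, 4), (0.85, 60), (0.90, 75), (0.95, 90)]
print(round(decile_posting_ratio(sample), 1))
```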

Other message clusters may be designated as intentional forms of shaping or reinforcing tactics rather than explicitly false information. For example, some of the Russian Internet Research Agency (IRA, or GRU) campaign was designed to amplify certain stances by increasing the flow of false posts to selected audiences as if they were from real persons (Lukito et al., 2020; Nimmo et al., 2020), but such efforts can simply machine-replicate actual persons’ posts with the intent to drown out competing messages or to reinforce or polarize differences in opinions. Such messages might not be explicitly false—they are simply amplified through replication and distribution and then targeted in ways that alter the appearance of the vox populi, not unlike traditional forms of mass communication.

Indeed, many of Russia’s IRA-generated tweets were able to cross, almost zoonotically, the barrier between social media and traditional media and make their way into traditional news media stories. Lukito et al. (2020) identified 314 news stories from 71 of the 117 media outlets searched that quoted tweets generated by the IRA between January 1, 2015 and September 30, 2017. These tweets generally expressed opinions posed as if they derived from everyday American citizens. An exemplar of an opinion tweet referenced the 2017 Miss USA: “New #MissUSA says healthcare is a privilege and not a right, and that she’s an ‘equalist’ not a feminist! Beauty and brains. She is amazing!” (Lukito et al., 2020, p. 207). Of those IRA tweets that were primarily informative in nature, “contrary to some popular discourses about the functions and effects of the IRA disinformation operation, the preponderance of IRA tweets drawn on for their informational content (119 of 136 stories, 87.5%) contained information that was factually correct” (Lukito et al., 2020, p. 208). The exemplar was a tweet about how “Security will be dramatically increased at Chicago’s gay pride parade” (Lukito et al., 2020, p. 208). In either instance, there is little in the content of the individual tweets that appears insidious or malevolent. However, to the extent that they alter the appearance of the actual vox populi, they may function to shape the collective discourse and public opinion predicated or reinforced by such perceived norms of opinion and attitude.
