Whether to Kill, by Stephanie Dornschneider


Chapter 1

A Cognitive Mapping Approach to Political Violence

Cognitive maps identify the reasoning processes by which human beings decide to engage in certain behavior. Representing a large range of factors, they capture the complex “subjective reality” that motivates people to behave in certain ways (Renshon 2008: 822). In political science, cognitive mapping has been a valuable approach for exploring policy decisions. However, the approach has been largely abandoned because cognitive maps are highly complex: a single map typically contains reasoning processes consisting of more than a dozen beliefs and connections between beliefs.

This abandonment is unfortunate because cognitive mapping has at least two major advantages. First, it bridges the gap between human behavior and the structures in which humans act. This is achieved by modeling human behavior as decisions that are motivated by beliefs about various types of internal factors (for example, feelings of fear) or external factors (for example, conditions of poverty). Second, as indicated by increasing applications of cognitive mapping in other disciplines, such as computer science, engineering, economics, and medicine, the approach allows systematic investigation of the mechanisms underlying human behavior. Specifically, these mechanisms are modeled as direct and indirect connections between beliefs and decisions for actions (chains of beliefs).

As noted in the Introduction, to cope with the complexity of cognitive maps, this book presents a computational model formalizing cognitive maps into directed acyclic graphs (DAGs). This formalization is based on Pearl’s theory of causality (2000). It provides new possibilities not only for the application of cognitive mapping but also for the study of counterfactuals. Specifically, the model enables the researcher to (1) systematically identify beliefs that motivate, or fail to motivate, decisions to engage in certain behavior; (2) systematically trace chains of beliefs that encourage certain decisions; and (3) explore counterfactuals, which show what would have prevented people from deciding to engage in certain behavior.

To study counterfactuals, the model intervenes on the belief systems of political actors and examines their behavior in alternative worlds following from this intervention. This allows us to explore alternative worlds in which the individuals would not have decided to take up arms (Chapter 6). Intervening on the actors’ beliefs about the world rather than on the world itself, this analysis bridges the gap between actors and political structures. This presents a new approach to the study of counterfactuals (cf. Fearon 1991; Sylvan and Majeski 1998; Tetlock and Belkin 1996), and is to my knowledge the first application of cognitive mapping to the study of counterfactuals.
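The two operations described here, tracing directed chains of beliefs through a DAG and intervening on a belief system, can be sketched in a few lines of code. The following is a minimal illustration only: the miniature graph, its belief labels, and the names `reachable` and `intervene` are my own assumptions, not the book’s actual model or data.

```python
from collections import defaultdict

# A cognitive map as a DAG: edges point from antecedent beliefs to consequents.
# The belief labels below are hypothetical, not taken from the book's interviews.
edges = [
    ("state aggression", "anger at the state"),
    ("anger at the state", "decision to take up arms"),
    ("meeting armed group members", "decision to take up arms"),
]

def reachable(edges, source, target):
    """Depth-first search: is there a directed chain of beliefs from source to target?"""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    stack, seen = [source], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return False

def intervene(edges, removed_belief):
    """Counterfactual intervention: delete a belief and all its connections."""
    return [(a, b) for a, b in edges if removed_belief not in (a, b)]

# In the actual world, the decision is reachable from the antecedent belief:
print(reachable(edges, "state aggression", "decision to take up arms"))  # True
# In the alternative world following the intervention, it no longer is:
counterfactual = intervene(edges, "anger at the state")
print(reachable(counterfactual, "state aggression", "decision to take up arms"))  # False
```

Deleting a node in this way corresponds loosely to Pearl-style graph surgery: the intervention modifies the actor’s belief system rather than the world itself, and the alternative world is whatever follows from the modified graph.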

This chapter introduces the cognitive mapping approach, the formalization of cognitive maps into DAGs, and the application of that formalization to the study of counterfactuals. The first section introduces the cognitive mapping approach and presents the main components of cognitive maps: beliefs, belief connections or inferences, and decisions for actions. The second section presents the formalization of cognitive maps, following Pearl’s theory of causality, and its application to counterfactuals via external interventions.

Cognitive Mapping

According to Axelrod (1976: 8–9), the roots of cognitive mapping lie in at least four fields: (1) psycho-logic, (2) causal inferences, (3) graph theory, and (4) evaluative assertion analysis.1 While several researchers working in these fields have been political scientists, the first rigorous application of cognitive mapping to studies of political science was presented by Robert Axelrod in 1976 in Structure of Decision. It had the practical goal of helping policy-makers reach better decisions.

Cognitive maps are illustrations of belief systems. They consist of three major components, which I present in this chapter: (1) beliefs, (2) belief connections or inferences, and (3) decisions for actions. Specifically, cognitive maps visualize beliefs and decisions as text in circles, and belief connections as arrows. Beliefs are located in circles that also have arrows pointing away from them.2 Decisions are located in circles that only have arrows pointing toward them.3 The following figure shows an excerpt from a cognitive map that I constructed for this research.

As I show below, cognitive maps allow the researcher to systematically trace chains of beliefs that are antecedent to decisions. These chains represent the complex microlevel mechanisms underlying human behavior, drawing on inside categories provided by the actors themselves. They are complex representations of the subjective realities that motivate people to engage in certain behavior, rather than representations of an external selection of certain factors and not others. Cognitive mapping thus complements methods that rely on external categories assigned by the researcher, which focus on the direct relations between particular variables and behavior rather than on the complex microlevel mechanisms underlying that behavior.

As mentioned in the Introduction, cognitive mapping also offers a way to synthesize and put into perspective the literature on particular behaviors. Specifically, the belief systems represented by cognitive maps consist of beliefs about various types of factors, which are usually addressed by different theories. For example, related to violence, one can hold beliefs about religious norms like God forbids the killing of innocent people (cf. cultural-psychological theories of violence); about economic conditions like poverty (cf. environmental-psychological theories of violence); or about interacting with violent groups like meeting members of al-Qaeda (cf. group theories of violence). In applying cognitive mapping, I first construct cognitive maps from interviews with violent individuals (Chapter 4). Second, I analyze the maps to identify chains of beliefs that are antecedent to decisions to engage in political violence (Chapter 5). Third, I intervene on the cognitive maps to model counterfactuals and explore alternative worlds in which the individuals would not have decided to take up arms (Chapter 6).

This application involves various methods. Specifically, my construction of cognitive maps draws on the textual analysis of my interviews and applies Spradley’s theme analysis to abstract the individuals’ beliefs into comparable categories. The analysis of the cognitive maps and the counterfactual analysis draw on a computational model developed for this study. The model formalizes cognitive maps into DAGs, a formalization presented in the second part of this chapter.

Numerous studies, mostly in the field of foreign policy, have applied the cognitive mapping approach. Some examples are Alastair Iain Johnston’s analysis of Chinese strategic culture (1995), Matthew Bonham, Victor Sergeev, and Pavel Parshin’s examination of international negotiations (1997), Jonathan Klein and Dale Cooper’s analysis of military officers (1982), and Tuomas Tapio’s doctoral thesis about cooperation in foreign economic policy (2003). Most political scientists have nevertheless abandoned cognitive mapping because of the maps’ complexity.


Figure 2. Excerpt from the cognitive map of a Muslim Brother.

By contrast, cognitive mapping has been applied by researchers from various other disciplines. Indeed, as Elpiniki Papageorgiou and Jose Salmeron note in their review of fuzzy cognitive mapping over the past decade, the approach has “gained considerable research interest” (2013: 66; my italics). Examples range from economics (Lee et al. 2012; Zhang, Shen, and Jin 2011; Krüger, Salomon, and Heydebreck 2011), to engineering (Mendonca et al. 2013; Zarandi et al. 2012; Bhatia and Kapoor 2011), to medical studies (Georgopoulos and Stylios 2013; Giabbanelli, Torsney-Weir, and Mago 2012; Papageorgiou 2011), geography (Soler et al. 2012), and biology (Wills et al. 2010; Wehner and Menzel 1990). Cognitive maps have even become a subject of research in their own right (Peng, Wu, and Yang 2011; Miao 2010; Eden 2004; Montello 2002; Nadkarni and Shenoy 2001; Brotons 1999; Chaib-Draa and Desharnais 1998; Young 1996).

What Is New About the Application of Cognitive Mapping in This Book

What all applications of the cognitive mapping approach have in common is that they investigate behavior by focusing on the actors’ belief systems. This makes cognitive mapping a powerful tool to investigate human behavior, bridging the gap between actors and external structures, and allowing study of the mechanisms underlying human behavior. What is new about this book is the application of cognitive mapping to the study of violent individuals. It is extremely difficult to identify and locate violent individuals, and to convince them to consent to be interviewed. Violent individuals who do agree to be interviewed therefore constitute a group of particular interest for the existing literature on political violence. Several researchers have conducted interviews with violent individuals, but to my knowledge none has applied the cognitive mapping approach to analyze such interviews.

Moreover, my construction of cognitive maps from ethnographic interviews adds to the more general cognitive mapping literature, which often uses policy transcripts or public speeches. The interviewees include not only individuals who engaged in violence but also individuals who engaged in nonviolent activism, both Muslims and non-Muslims. This diversity adds analytical rigor to the cognitive maps by involving control groups that often remain absent from the study of political violence.

What is also new about the application of the cognitive mapping approach in this book is the formalization of cognitive maps into DAGs. As described, this formalization copes with the complexity of cognitive maps that has led to the abandonment of the approach in political science. Based on recent literature in graph theory and computer science (see Pearl 2000; Koller and Friedman 2009), this formalization can be used not only to rigorously compare the cognitive maps of different individuals but also to model alternative worlds in which individuals would not have decided to engage in certain behavior.

Part 1: Main Elements and Structure of Cognitive Maps

Beliefs

Beliefs are a major component of cognitive maps. Specifically, beliefs identify the factors motivating human behavior. In this study, beliefs identify the factors motivating individuals to take up arms, or to refrain from doing so (see Chapters 4–6).

Beliefs are usually defined as mental states.4 More specifically, beliefs are “a person’s subjective probability that an object has a particular characteristic (for example, how sure the person feels that ‘This book is interesting’ …)” (Fishbein and Ajzen in Oskamp and Schultz 2005: 11). Since they are held by individuals, beliefs are by nature subjective. However, beliefs may be intersubjective or shared (it is possible to say “we believe in X”; cf. Bar-Tal 2000). This is possible because many beliefs address observations that are accessible to anybody. As Nilsson writes, “I believe I exist on a planet that we call Earth and that I share it with billions of other people” (2014: 1). Such beliefs are called true beliefs or knowledge. Since they address observations, true beliefs are verifiable by a perspective external to the subject who holds them. Beliefs can also address other factors, which may not be observable. For example, they can address abstract ideas, such as today is Monday; moral rules, such as it is forbidden to kill somebody; religious beliefs, such as God exists; feelings like I am happy; social encounters like I am visiting my brother; or even assumptions that contradict observations in the world, such as all swans are black. Table 4 gives an overview.

Table 4: Subjective, Intersubjective, and True Beliefs

Subjective beliefs: a subject believes something to have a particular characteristic
Intersubjective beliefs: several subjects believe something to have a particular characteristic
True beliefs: something is believed to have a particular characteristic that is verifiable from an external perspective

When theorizing about political behavior, it is important whether the beliefs held by the actors are true beliefs, intersubjective, or purely subjective. The most significant beliefs are true beliefs, rather than intersubjective or purely subjective beliefs, because true beliefs identify factors that can be verified from an external perspective. As I elaborate in Chapters 4 and 5, the major beliefs identified by this study are true beliefs. These show that violent individuals are neither mentally ill nor driven by religious beliefs. More specifically, they show that both violent and nonviolent activism are primarily motivated by state aggression. Whether beliefs are true, intersubjective, or subjective is indicated by what the beliefs address, which is called their propositional content.5 The form of propositional content is (I believe) that X. For example, the propositional content of my belief that there is a car in front of my house is “that there is a car in front of my house”; the propositional content of my belief that tables can talk is “that tables can talk”; and the propositional content of my belief that lying is wrong is “that lying is wrong.”

Different propositional contents may identify different types of beliefs, and the following paragraphs identify six types of beliefs. Rather than being exhaustive, these types show that beliefs can be used to study various types of factors, such as observations, abstract ideas, social norms, and feelings.

The first type addresses observable things that can be verified in the external world by one’s senses (sight, hearing, taste, smell, touch). Examples of such beliefs are “I believe that dogs have four legs,” “I believe that there is a desk in my office,” or “I believe that fish live in water.” Since these beliefs are verifiable from a perspective external to the subjects holding them, they can be called true beliefs.6

The second type addresses something that logically contradicts an observable thing. Some examples are “I believe that tigers are pink,” “I believe that trees can fly,” or “I believe that the world is flat.” Based on verification in the external world, beliefs that address such propositions can be called false beliefs. Although what is addressed by their propositional contents is empirically false, people may nevertheless hold such beliefs, for example when they dream, when they deny certain things, or when they hallucinate.

The third type addresses abstract ideas. Abstract in this sense means that what is described by this type of belief cannot be perceived by one’s senses. Some examples are nationality, time, or religion. Nevertheless, abstract ideas may be verifiable in the external world by certain things or words. For example, it is possible to verify my belief that I am Australian by checking my passport; it is possible to verify my belief that I am unpopular by asking people who know me what they think about me; and it is possible to verify my belief that it is the year 2060 by looking at a calendar. Thus, beliefs of this type may be true beliefs or false beliefs.

Nevertheless, some beliefs of this type cannot be verified in the external world. Such beliefs cannot be true beliefs or false beliefs. Examples are beliefs about religion, such as “I believe that Jesus rose from the dead,” or “I believe that Mohammad is the prophet of God.” Nonreligious examples include “I believe that everybody has human dignity,” or “I believe that I am destined to become a lawyer.” Although such beliefs cannot be verified in the external world, it is possible that they are held by several people. Therefore, they may be intersubjective.

The fourth type addresses something that has not been observed but that may be observable in the future. An example is “I believe that aliens exist.” Although these beliefs do not contradict anything that has been observed, they cannot be verified (yet) by observation. Consequently, they cannot be called true beliefs or false beliefs. Nevertheless, they may be intersubjective.

The fifth type addresses emotions. Examples of such beliefs are “I believe he is very angry,” or “I believe that I cannot bear this any longer.” Since they address something that is felt by human beings, these beliefs have a strong subjective dimension. However, like beliefs themselves, feelings may be shared, and different individuals may hold the same feelings about the same things. For example, a lot of people felt fear after 9/11. Beliefs about feelings may therefore be intersubjective. Moreover, they may be verifiable in the external world: “I believe that he is very angry,” for instance, may be verifiable by an observation in which the person addressed by “he” actually shouts out “I am so angry.”7

Table 5: Typology of Beliefs


The sixth type addresses moral norms. Examples of such beliefs are “I believe that it is wrong to kill somebody,” or “I believe that nobody should lie.” Moral norms cannot be perceived with one’s senses, but like feelings they may manifest themselves in observable behavior, for example in telling the truth.8 Because of this, they may be true beliefs. Moreover, they can be intersubjective. Referring to the example above, it is possible that more than one person believes that “it is wrong to kill somebody.”

Table 5 provides an overview. Recall that the main purpose of this typology is not to be exhaustive but to show that beliefs can be used to study different types of factors, and that the most important distinction indicated by this typology is that between true beliefs and all other beliefs. The findings of the following analysis show that the most significant beliefs underlying political violence are true beliefs, rather than intersubjective or purely subjective beliefs. This shows that political violence is a response to things that exist in the world, rather than to religious beliefs, or even false beliefs. It shows that political violence is not cultural or a form of mental illness, and that the reasoning processes connected to it are surprisingly similar to those underlying mainstream political behavior.

Belief Connections

Another major component of cognitive maps is the connections between beliefs, also called inferences. These connections reveal the complex mechanisms by which certain factors, represented as beliefs, motivate humans to engage in certain behavior. In this study, belief connections identify the microlevel mechanisms motivating individuals to take up arms, or to refrain from doing so (see Chapters 4–6).

Belief connections indicate people’s subjective probability that an object has a particular characteristic in relation to the particular characteristic of another object, or in relation to another characteristic of the same object. Belief connections further indicate the logical order of this relation. In the words of Stenning and van Lambalgen: “the psychology of reasoning and logic are in a sense about the same subject” (2008: 3). More specifically, belief connections consist of beliefs that are coherent or directed within certain belief contexts.9

Coherence

Coherent connections address objects whose characteristics are logically consistent. Take the example of B1 “I believe that dogs have wings” and B2 “I believe that dogs can fly.” Both beliefs describe the same object (dogs). Moreover, B1 offers information about what dogs can do with wings (fly), and B2 about how dogs can fly (by using their wings). B1 and B2 can therefore be considered coherent (see Figure 3).

As a contrast, consider the example of B2 “I believe that dogs can fly” and B3 “I believe that dogs cannot fly.” The propositional contents of these beliefs also address the same object (dogs) and may therefore appear to be connected in a similar way. However, B2 and B3 also address particular characteristics of dogs that are contradictory: “can fly” versus “cannot fly.” This contradiction indicates that B2 and B3 cannot be considered coherent (see Figure 3).10 Rather, they are incoherent.

These examples suggest that coherence is the same as logical consistency. However, it is helpful to add that some researchers have put forward the stronger notion of “continuity of senses” to define coherence (De Beaugrande and Dressler 1981, chap. 5). Continuity of senses means that two beliefs can be considered not only connected but also complementary. In the example above, one could say that B1 complements B2 (by offering information about what dogs can do with wings).

Figure 3. Example of a coherent and incoherent belief connection.

Based on these observations, coherence can indicate whether it is possible for a person who holds a particular belief to also hold a particular other belief in a certain belief context.11 This can be evaluated from a perspective that is external to the subject who believes certain things to be connected (or even by the subject himself, as he considers the beliefs he holds). The examples above suggest that, while belief connections in certain belief contexts are not limited to true beliefs, they cannot contain opposite types of beliefs whose propositional contents address the same thing: a person can believe that dogs can fly and that dogs have wings (example 1), but it is not possible for a person to believe that dogs can fly and that they do not fly (example 2). The first example is a coherent connection between two false beliefs about the same thing,12 and the second is an incoherent connection between a false and a true belief about the same thing.

Absence of Connection

Apart from being connected coherently or incoherently, particular beliefs can also be considered unconnected. This emphasizes that beliefs are context dependent, even though all beliefs are embedded in mental processes and may therefore be considered connected on a more general basis. Take the example of B5 “I believe that Germany is in Europe” and B6 “I believe that fish live in water.” By themselves, B5 and B6 do not address anything by which they could be considered connected. Another example is the pair B3 “I believe that dogs cannot fly” and B7 “I believe that John is the son of Jack and Pamela,” which by themselves do not address anything that would allow us to consider them connected, either.

Whether particular beliefs are connected is subject to their belief context. It is possible, and indeed quite common, that all the beliefs somebody holds include beliefs that are contradictory. For example, I may hold the true belief that Alexander is wearing a green shirt in a belief context about my meeting with Alexander on Monday, and hold the true belief that Alexander is wearing a yellow shirt in a belief context about my meeting with Alexander on Tuesday. Considered in the same belief context, or by themselves, the beliefs that Alexander is wearing a green shirt and that Alexander is wearing a yellow shirt are contradictory, or incoherent. However, since they are related to different belief contexts addressing different situations, they can be considered unconnected rather than incoherent.

Directedness

Directed belief connections address objects whose characteristics can be considered logically dependent on each other. Specifically, a characteristic of an object can be considered logically prior to another characteristic of another object, or to the same object. Consider the connection between two true beliefs, describing something that is verifiable in the external world (see Figure 4):

B1 I believe that my glass of water fell to the floor.

B2 I believe that the floor is wet.

In this example, B1 addresses something that can be considered a logical antecedent (water falling to the floor) of what is addressed by B2 (wet state of floor). Conversely, what is addressed by B2 can be considered a logical consequent of B1. This can be represented as B1 → B2.

Figure 4. Example of a directed belief connection I.

Figure 5. Example of a directed belief connection II.

Consider another example of a connection between two true beliefs, one describing something that is verifiable in the external world (B1) and one addressing an internal sensation (B2) (see Figure 5):

B1 I believe that I ran 10 kilometers.

B2 I believe that my muscles are sore.

In this example, B1 also addresses something that can be considered a logical antecedent (running 10 km) of what is addressed by B2 (sore muscles). This can also be represented as B1 → B2.

Possibility Versus Necessity

Such belief connections indicate possibilities rather than necessities. Specifically, there are other possible antecedents of a floor’s being wet or muscles’ being sore (hence the terminology “logical antecedent” and “logical consequent”). Indeed, what is described by B2 (in both examples) may in reality be the consequence of something else, for example a person cleaning the floor (example 1) or climbing ten flights of stairs (example 2). This indicates the limits of human knowledge, which also become relevant later in this chapter (see section on External Interventions on Cognitive Maps). Here, it suffices to note that regardless of whether what is described by B1 is the real antecedent of what is described by B2, it is possible for a person to consider B1 an antecedent of B2.

Temporality

Belief connections often represent common-sense temporal connections between physical things, and it might be tempting to think of directedness in terms of temporal structures.13 However, it is important to recall that all belief connections exist in human minds, and not in the external world where things unfold in time. Belief connections may give a cognitive account of time, but no chronological account of time. Cognitive accounts of time indicate people’s understanding about how things happen in time, and are not to be confused with the unfolding of time itself. Instead, they show how individuals at certain points in time believe time to be unfolding. It is therefore misleading to think of belief connections as representations of the chronological order by which things unfold.

Figure 6. Logical versus chronological order of beliefs.

On another level, the unfolding of time itself can be misleading for understanding certain phenomena or behavior. This becomes obvious from the following example of the connection between two true beliefs, based on Pearl’s Causality (2000: 252). It shows that the chronological order may differ from the logical order addressed by the beliefs (see Figure 6):

B1: I believe that it is raining.

B2: I believe that the barometer is falling.

In this example, what is addressed by B1 (rain) can be considered a logical antecedent of what is addressed by B2 (falling of the barometer). Again, this can be expressed as B1 → B2. However, the temporal order of the propositional contents of these beliefs cannot be considered in the order B1 → B2. According to the temporal order, B2 (falling of the barometer) is prior to B1 (rain), which translates into the opposite B2 → B1. This order contradicts the logical order, as rain is not a consequence of the falling of the barometer.

Directedness Implies Coherence

Directedness implies coherence, because considering something to be logically prior to something else implies that the two can be considered logically consistent. For example, B1 (“I believe that dogs have wings”) and B2 (“I believe that dogs can fly”), described earlier as having a coherent connection, can also be considered to have a directed connection, so that B1 → B2.

Figure 7. Overview of belief connections.

On the other hand, not every coherent belief connection can be considered directed. Take the example of the belief connection between beliefs B1 “I believe that the street is wet” and B2 “I believe that I am wet.” B1 and B2 can be considered logically consistent, because both address the state of being wet. However, B1 cannot be considered a logical antecedent of B2, or vice versa.

Figure 7 indicates this relationship between coherent and directed belief connections. It also includes unconnected beliefs.

Belief Systems

As described earlier, cognitive maps are illustrations of belief systems. Belief systems provide in-depth insight into the mechanisms underlying human behavior, such as political violence. My study examines belief systems to show how humans can reach decisions to take up arms against their state, or to refrain from doing so (Chapters 4–6). Providing the framework for this analysis, this section lays out the basic structure of belief systems. The next section deals with the semantics and some structural aspects specific to belief systems related to political violence.

Belief systems consist of belief connections. Belief connections follow certain rules, and belief systems therefore offer a consistent method to trace the microlevel mechanisms motivating human behavior. There are two types of belief connections: direct belief connections, which I have discussed above, and indirect belief connections, which consist of more than one direct belief connection. The indirect connections between beliefs can be called chains of beliefs. Those that include directed (rather than only coherent) belief connections can be called directed chains of beliefs. They can be represented in the following way:

Directed Belief Chain: B → B → B → B → B

Each belief system includes at least two belief chains. Inside a belief system, every belief is directly connected to at least one other belief and indirectly connected to all the other beliefs of the system. In the system, each belief chain shares at least one and at most all but one belief or belief connection with another chain. In principle, belief systems can involve an infinite number of belief chains. Moreover, they can address various types of factors (see “Belief Typology”). Accordingly, belief systems can be highly complex.
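Since belief chains are directed paths through a DAG, they can be traced mechanically. The following sketch enumerates every directed chain of beliefs between two nodes by depth-first search; the five-belief system and the name `directed_chains` are invented for illustration, not drawn from the book’s model.

```python
from collections import defaultdict

# A hypothetical belief system: directed belief connections B -> B.
# Two chains (B1 -> B2 -> B4 -> B5 and B3 -> B4 -> B5) share B4 and B5.
edges = [("B1", "B2"), ("B2", "B4"), ("B3", "B4"), ("B4", "B5")]

def directed_chains(edges, start, end):
    """Enumerate all directed chains of beliefs from start to end in a DAG."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    chains = []
    def walk(node, path):
        if node == end:          # reached a pure consequent: record the chain
            chains.append(path)
            return
        for nxt in graph[node]:  # extend the chain along every outgoing arrow
            walk(nxt, path + [nxt])
    walk(start, [start])
    return chains

print(directed_chains(edges, "B1", "B5"))  # [['B1', 'B2', 'B4', 'B5']]
```

Because the graph is acyclic, the search terminates without any cycle bookkeeping; in a system with shared connections, the same edge may appear in several enumerated chains.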

If belief systems are considered unlimited, it is not immediately obvious how to identify particular belief systems, such as those related to violence. It is therefore helpful to note that cognitive scientists more or less generally assume that beliefs are context dependent (Österholm 2010: 41). This suggests that belief systems can be limited by reference to certain contexts.14

If belief structures are limited, it is possible to identify particular belief systems and examine these in their entirety. In such systems, it is possible to identify various types of beliefs by referring to their position in the structure (rather than based on the issues they address). This significantly advances the analysis. Specifically, it allows the researcher to identify the logical order between connected beliefs and to systematically explore what is logically prior to certain behavior. In particular, this can be achieved by identifying three types of beliefs:

Pure Antecedents: beliefs that are only logical antecedents and never logical consequents of another belief in the system;

Intermediate Beliefs: beliefs that are both logical antecedents and logical consequents of other beliefs in the system;

Pure Consequents: beliefs that are only logical consequents and never logical antecedents of other beliefs in the system.
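These three positional types can be read off a DAG directly from node degrees: pure antecedents have no incoming arrows, pure consequents have no outgoing arrows, and intermediate beliefs have both. A minimal sketch, using an invented five-belief system rather than the book’s data:

```python
# Classify beliefs by structural position in the DAG, using in- and out-degree.
# The belief system below is a hypothetical example.
edges = [("B1", "B2"), ("B2", "B4"), ("B3", "B4"), ("B4", "B5")]

nodes = {n for edge in edges for n in edge}
in_deg = {n: 0 for n in nodes}
out_deg = {n: 0 for n in nodes}
for a, b in edges:
    out_deg[a] += 1  # arrow pointing away: a is a logical antecedent
    in_deg[b] += 1   # arrow pointing toward: b is a logical consequent

pure_antecedents = sorted(n for n in nodes if in_deg[n] == 0)
intermediate = sorted(n for n in nodes if in_deg[n] > 0 and out_deg[n] > 0)
pure_consequents = sorted(n for n in nodes if out_deg[n] == 0)

print(pure_antecedents, intermediate, pure_consequents)
# ['B1', 'B3'] ['B2', 'B4'] ['B5']
```

In the terms used above, B1 and B3 would mark the beginning of the reasoning process, B2 and B4 its microlevel mechanisms, and B5 the decision for action itself.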

Figure 8 presents a simple example.


Figure 8. Example of a simple belief system.

These beliefs identify the beginning, middle, and end of belief chains. They indicate how reasoning processes begin, proceed, and end. Since beliefs address what motivates human behavior (see “Belief Typology”), pure antecedents, intermediate beliefs, and pure consequents provide in-depth knowledge about the underlying structures of human behavior. Specifically, pure antecedents can indicate what triggers the reasoning processes motivating certain behavior. Intermediate beliefs can show what constitutes the microlevel mechanisms underlying the behavior. Finally, pure consequents can identify the behavior itself. In the following analysis, pure antecedents and intermediate beliefs identify the microlevel mechanisms motivating people to engage in political violence, whereas pure consequents identify the behavior itself.

Belief Systems Related to Political Violence and Nonviolent Activism

This book investigates belief systems about political violence. Political violence is a particular type of behavior. Given the discussion so far, it is not immediately obvious how a type of behavior can be addressed by beliefs. It is also unclear what the belief contexts of beliefs about political violence may be. This section clarifies these points. Specifically, it shows how political violence can be represented by beliefs, and provides thoughts on possible belief contexts motivating beliefs about political violence.

Beliefs Addressing Violent and Nonviolent Activism

There are two types of beliefs that can address political violence or nonviolent activism: beliefs that address things that have a material existence in the external world (Type 1) and beliefs that describe abstract ideas that may be observable (Type 3). There is an additional type of belief, which I discuss in the following section: beliefs that address decisions to perform certain actions.

Beliefs of Type 1 address something that has a material existence in the external world. They are based on people’s ability to store in memory the things they observe in the world. Drawing on Alfred Schütz, it is not only possible to store those things in memory but also to generalize them into types (in the order thing → type) (Schütz 1973, drawing on Max Weber’s ideal types).15 Types are generalizations that indicate not only the “factual existence” of things but their “typical being-thus-and-so” (230). Their configuration establishes meaning-contexts in which certain things can be understood and meaning be imposed on entire situations. For example, “four-footed, wags its tail, barks” establishes a meaning-context that, together with a “theme” such as “bites,” can provide a meaning structure for a situation where somebody is bitten by a dog (231).

Schütz’s typology provides helpful insight into how beliefs can address political violence or nonviolent activism. Specifically, it suggests that beliefs can address certain things in the world that have a configuration of “typical beings” that can be generalized into “political violence” and “nonviolent activism,” in the order things → types → political violence; things → types → nonviolent activism. In the Introduction, I defined three types of things that together can be considered “political violence”: (1) application of physical force, (2) civil perpetrator, and (3) state target. I also defined three types of things that together can be considered “nonviolent activism”: (1) application of a means that is not physical force, (2) civil perpetrator, and (3) state target. The first type can be a thing like bombing, shooting, or hitting (political violence), or protesting, participating in a sit-in, or writing a newspaper article (nonviolent activism). The second type can be a “thing” like a Muslim Brother or a member of the German Socialist Student Union (violent and nonviolent activism); the third type can be a “thing” like the prime minister or the building of parliament (violent and nonviolent activism).

All these things can be addressed by the propositional contents of beliefs. The following are some examples:

B1 “I believe that a man is shooting the prime minister.”

B2 “I believe that a group of people is planting a bomb in the Ministry of Interior.”

B3 “I believe that a person is beating up a policeman.”

These examples show that beliefs can represent various things belonging to the mentioned types. They also show that belief systems representing certain things of the same type might at first sight appear very different from each other—for example, systems with beliefs about planting a bomb versus systems with beliefs about hitting, or systems about shooting versus systems about throwing stones. This indicates that the typology allows the researcher to generalize these seemingly different things and to identify belief systems about violent and nonviolent activism that are comparable (Chapter 4 is dedicated to this task).

Table 6 gives an overview of the configuration of the types and examples of things that can be generalized into violent and nonviolent activism. It is moreover possible that beliefs address violent and nonviolent activism as types, rather than as things or configuration of types. Type in this sense adds a level of abstraction to the terminology used above (so that things → subtypes [formerly types] → types [political violence, nonviolent activism]) and indicates that it is also immediately possible for humans to abstract from the things they see to identify belief systems about “political violence” and “nonviolent activism.” Some examples of beliefs that address political violence or nonviolent activism as types are

Table 6: Beliefs Representing Violent and Nonviolent Activism


B4 “I believe that there is political violence.”

B5 “I believe that Ahmed engages in nonviolent activism.”

B6 “I believe that some people engage in violent and nonviolent activism.”

Since types are abstractions of things, the beliefs that address violent or nonviolent activism as types are beliefs that address abstract ideas (beliefs of Type 3).16

Belief Contexts Related to Beliefs About Violent and Nonviolent Activism

Belief systems also allow the study of the belief contexts17 connected with the beliefs addressing violent and nonviolent activism.18 The following is an example of a belief chain addressing political violence in a certain context:

B1 I believe that Peter is facing a person in a state uniform.

B2 I believe that Peter is shouting at the person.

B3 I believe that the person is shouting back at Peter.

B4 I believe that Peter is hitting the person in the state uniform.

The belief context is represented by B1, B2, B3, which represent a situation in which a civilian has a quarrel with a state employee. This is recognizable from the following “things” that can be abstracted into types:

1. thing: “Peter” (B1, B2, B3, B4); type: civilian
2. things: “shouting” (B2) and “shouting back” (B3); type: quarrel
3. thing: “person in state uniform” (B1); type: state employee

This belief context moreover has a particular structure: the belief connections between B1, B2, and B3 are directed. Specifically, B2 can be considered logically prior to B3, because the person’s shouting back at Peter (B3) indicates that Peter shouted at the person first (B2); and B1 can be considered logically prior to B2, because Peter’s shouting at the person presupposes that he is faced with a particular person, as indicated by B1. These belief connections can be expressed as B1 → B2 → B3. B4 indicates something that can be considered to represent political violence. In particular, it addresses the following things that can be abstracted into the configuration of types that can be called political violence:

1. thing: “is hitting”; type: application of physical force
2. thing: “Peter”; type: civil perpetrator
3. thing: “the person in state uniform”; type: state target

With the exception of “is hitting,” B1, B2, and B3 address the same things as B4, which indicates that B1, B2, and B3 can be considered a belief context of B4. At the same time “is hitting” (B4) allows the generalization of “Peter” and “the person in state uniform” into slightly different types: civil perpetrator (rather than “civilian”) and state target (rather than “state employee”). B1, B2, and B3 can moreover be considered directed toward B4. Specifically, B3 describes something that can be considered to encourage Peter to hit the state employee—“shouting back.” This relation between B3 and B4 can be expressed as B3 → B4. Since B3 is in turn the logical consequence of B2, which is in turn the logical consequence of B1, the entire chain can be expressed as B1 → B2 → B3 → B4.
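The chain B1 → B2 → B3 → B4 can also be traced mechanically by following the directed connections from the pure antecedent to the belief addressing political violence. A minimal sketch in Python (the dictionary encoding of the connections is an assumption for illustration):

```python
# Sketch: tracing a chain of beliefs by following directed connections.
# Each key is an antecedent belief; each value is its direct consequent.
connections = {"B1": "B2", "B2": "B3", "B3": "B4"}

def trace(start, connections):
    chain = [start]
    while chain[-1] in connections:  # follow arrows until a pure consequent
        chain.append(connections[chain[-1]])
    return chain

print(" -> ".join(trace("B1", connections)))  # B1 -> B2 -> B3 -> B4
```

Starting from B1, the trace ends at B4, the belief representing the act of political violence.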

Since these beliefs address things that have a material existence in the external world, belief systems about violent and nonviolent activism may appear to consist of true beliefs, and to be intersubjective. However, it is important to note that the belief contexts of violent and nonviolent activism may not consist of true or intersubjective beliefs, and instead include religious beliefs, moral beliefs, or even incorrect beliefs. For instance, the example above could contain additional beliefs B1* “I believe that a witch told Peter to shout at the person in state uniform,” B1** “I believe that the person in state uniform is afraid,” or B1*** “I believe that the person in the state uniform believes that it is wrong to hit somebody.”

Belief systems about violent and nonviolent activism may therefore not entirely consist of true beliefs or intersubjective beliefs. However, they must include the mentioned true beliefs addressing violent and nonviolent activism. In general, belief systems have an ultimately subjective dimension by being held by particular individuals. On the other hand, what is addressed by them has an ultimately objective dimension if it includes things that have material existence in the world.19

Decisions

In the previous section, I have shown that political violence can be represented by beliefs. In particular, I have explained that political violence can be addressed as beliefs of Type 1 that can be generalized, or, on a more abstract level, as beliefs of Type 3. Throughout the discussion, I have presented examples of beliefs of people who observe rather than engage in political violence. What has therefore not been explained so far is how studies can be conducted from the perspective of the people who engage in political violence. This section is devoted to this task.

Examining political violence via belief systems raises the question of how people’s belief systems are connected to their actions. This connection can be established by particular beliefs about intentions to perform certain actions. I call these beliefs decisions.20 In the following section, I introduce decisions and show how they connect beliefs to behavior, which demonstrates how cognitive mapping can bridge the gap between actors and external structures.

Internal Structure

Decisions involve intentions. Intention is based on the notion of intentionality. Intentionality in the sense introduced by Franz Brentano refers to a “mental phenomenon” that “is characterized by … what we might call … reference to a content, a direction towards an object (which is not understood here as meaning a thing)” (2015: 92). This reference is also contained in my earlier definition of beliefs, which treats beliefs as mental states related to certain objects. Specifically, intentionality addresses the same relation between mental states and objects: mental state → object.

Every belief involves intentionality. Beliefs that are decisions moreover directly address an intention and a particular object: actions.21 Following the internal structure of intentionality (mental state → object), decisions involve a directed connection between the intention and the action in the order intention → action. This can be considered to establish a directed connection between the subject and the action. This can be expressed as

INTENTION → OBJECT

subject → action

Furthermore, actions can be considered interventions of the subject on the external world. This means that they involve an additional directed connection toward the world, so that action → world. Decisions then include two directed connections between the subject, the action, and the world. The first arrow represents the directedness of the intention, and the second that of the intended action:22 subject → action → world.

Planning

Intentions indicate that the subjects who consider an action are also committed to carrying it out. My understanding of this follows Michael Bratman’s definition of intentions, by which they contain (partial) plans to perform actions (1987). Planning can be considered a mental state in which the subject is in control of an object. This subject-object connection is stronger than assigning certain properties to an object because it addresses an intervention of the subject on the world. This means that decisions can be considered beliefs that involve stronger mental states than the other beliefs introduced earlier.

External and Internal Obstacles

Planning an action also has a temporal dimension: it is directed toward the future, in which the action will be conducted (or in which it will be continued).23 This indicates that decisions are temporally prior to actions. It also indicates that deciding to perform an action is not the same as performing the action.24 In the words of John Searle, there is a “gap” (2001: 61).

As a result, it is possible that even though people decide to perform certain actions, they do not actually do so. This could be the case because of external obstacles that prevent the performance of the action—an example is the failure of the detonation of the bombs placed on German trains in 2006. It could also be the case because of internal obstacles, such as people obtaining new knowledge on which they form different intentions in favor of different actions, or people suffering from weakness of will (Searle 2001).

According to Searle, the primary feature of “the gap” is, however, not temporal. Rather, the gap indicates that “we do not normally experience the stages of our deliberation and voluntary actions as having causally sufficient conditions or as setting causally sufficient conditions for the next stage” (Searle 2001: 50, 61–96). This emphasizes that, like the connections between beliefs (see “Belief Connections”), the connections between decisions and behavior are not causal—once a decision occurs, it does not necessarily translate into behavior.

Table 7: Decisions, Actions, Intentions, and Self-Knowledge


Self-Knowledge

Planning also implies self-knowledge, that is, the subject knows he is planning to do X (one cannot plan something without having knowledge about what it is that one is doing). Since political violence involves high risks, the actors studied in this book actually planned their actions and have self-knowledge. However, on a more general level, it is not necessary that subjects plan their actions. For instance, my shaking of somebody’s hand may not be planned or include self-knowledge (but only self-awareness). Other actions, such as my turning right on my way to work may neither be planned nor include self-knowledge (and not even self-awareness). In fact, subjects may not even have self-knowledge or self-awareness related to their own beliefs, as is suggested by the impossibility of calling upon all the beliefs that one holds when asked to do so.

Given these considerations, there seem to be numerous actions that are not planned. Since I define intention by reference to planning, such actions do not involve intentions, which means that such actions do not involve decisions, either. Consequently, there can be actions without decisions. Table 7 provides a summary.

Desires

Planning also suggests that somebody wants to do something, which is often understood as an indication of desire. Basically, this raises the question of what is logically prior to the intention: whether the order is desire → intention → action.25 Based on this, it can be questioned whether intentions are mental states.26

Wanting may but need not indicate a desire; someone may want to perform an action but not have a desire to do so. An example is the following sentence, which contradicts my dislike of cleaning up and suggests that intentions can be considered mental states, rather than desires: “I want to clean up my room.” Since people can believe they have feelings, which include desires (see beliefs of Type 5), it is nevertheless possible that intentions are ultimately based on desires. Here, it is helpful to consider another example, which corresponds to what I like after traveling for a long period of time: “I believe that I want to go home.” In this example, wanting may indeed be understood to indicate a desire—however, by saying “I want,” it is possible for me to describe a desire in the propositional content of a belief. This shows that people who feel desires can describe these desires and believe they feel these desires—which suggests it is helpful to not treat intentions as desires. It also supports the view that intentions are plans in which people have some kind of mental control over something, such as desires. In the last example, for instance, it would have been possible for me to plan to perform another action that does not correspond to my desire—or to do nothing.

Goals

The previous section related political violence to three observable things: (1) means (physical force), (2) perpetrators (civil), and (3) target (state). What has not been addressed, however, is the mental component of political violence, its goals. As described in the Introduction, there is consensus that people do not engage in political violence for the mere sake of using physical force. Rather, political violence is a type of behavior thought to involve goals.

Goals are particular types of beliefs that may motivate certain decisions, such as decisions to take up arms. Like decisions, goals establish a connection between the actor (who believes in certain goals) and behavior (the behavior the actor engages in related to his goals). For example, one can have a goal of following God’s will (Belief Type 4, “I believe that my goal is to follow God’s will”) underlying one’s praying. One can also have a goal of stopping the government from attacking its citizens (Belief Type 1, “I believe that my goal is to stop the government from attacking its citizens”) underlying one’s leaving a demonstration. Or one can have a goal of fulfilling a certain desire, such as the goal to be happy (Belief Type 5, “I believe that my goal is to be happy”) underlying one’s going on vacation.

Goals are beliefs about what the subject considers the desired consequence of his action, so that action → goal achieved. This is opposed to performing the action for the sake of performing the action, so that action → action achieved. This structure underlines that people do not take up arms for the mere purpose of engaging in violence. Rather, their actions involve certain goals. It is important to note that whether one’s goals are achieved can be evaluated only after the action has been performed. As a result, goals imply a forward-looking dimension that transcends both decisions and actions.

Conclusion and Outlook

This section and the previous ones introduced decisions to engage in certain behavior, such as political violence, and the beliefs connected to such decisions. The following discussion applies these ideas by modeling political violence as decisions to take up arms based on chains of interconnected beliefs. In the next chapters, I identify these chains of beliefs by coding the actors’ direct speech for decisions to take up arms, as well as for other beliefs related to these decisions. Based on this, I construct cognitive maps that make visible the complex belief systems underlying political violence. I then analyze the maps and identify different types of belief chains that motivate decisions, which sheds light on the complex microlevel mechanisms underlying political violence.

Part II. Formalization and Counterfactuals

Formalizing Cognitive Maps into Directed Acyclical Graphs

Cognitive maps typically contain large numbers of beliefs and inferences. Therefore, it is impossible to analyze them systematically by hand. To cope with this problem, it is helpful to formalize cognitive maps. As shown by Axelrod (1984),27 formal models make it possible to trace processes that would otherwise not be analyzable, or would be analyzable only on a much smaller scale. They allow the researcher to systematically explore the reasoning processes represented by cognitive maps.

Based on the literature in graph theory28 and computer science, cognitive maps can be formalized into directed acyclical graphs (DAGs). This offers new possibilities for studying human behavior via the cognitive mapping approach. DAGs are often used in computer science to study structures of variables that are directed and limited (Koller and Friedman 2009; Pearl 2000). The reasoning processes represented by cognitive maps are also directed by involving antecedent and consequent beliefs. They are also limited by involving traceable chains of beliefs that end in decisions. As a result of this similarity, formalizing cognitive maps into DAGs offers a convenient basis for an automated analysis.

In the following, I explain how cognitive maps can be formalized into DAGs. Specifically, I do so by drawing on Judea Pearl’s theory of causality. This formalization also offers new possibilities for the study of counterfactuals, and allows me to explore alternative worlds in which individuals would not have decided to take up arms (see Chapter 6).

Directed Acyclical Graphs

According to Pearl (2000), DAGs are graphs with a particular structure. Graphs are structures with two components:

V = set of vertices (variables)

E = set of edges connecting the vertices

DAGs differ from other graphs by being directed and not containing cycles or self-loops (see Figure 9). Directedness means that each edge in the graph is an arrow pointing from one vertex to another. Not containing directed cycles or self-loops means that there are no relationships such as A → B, B → A (cycle), or A → A (self-loop). In this structure, it is possible to trace paths between vertices that are separated by more than one arrow by following the direction of the edges between these vertices (see Figure 9).
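Whether a given graph satisfies these conditions can be checked algorithmically. The following is a hedged sketch using Kahn's topological sort: if every vertex can be ordered by repeatedly removing vertices without incoming arrows, the graph contains no directed cycle. The vertex and edge sets are illustrative assumptions.

```python
from collections import defaultdict

# Sketch: checking that a directed graph is acyclic and free of self-loops.
def is_dag(vertices, edges):
    if any(a == b for a, b in edges):          # self-loop such as A -> A
        return False
    indegree = {v: 0 for v in vertices}
    children = defaultdict(list)
    for a, b in edges:
        children[a].append(b)
        indegree[b] += 1
    queue = [v for v in vertices if indegree[v] == 0]   # vertices without parents
    seen = 0
    while queue:                               # Kahn's algorithm: peel off roots
        v = queue.pop()
        seen += 1
        for c in children[v]:
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    return seen == len(vertices)               # all vertices ordered => no cycle

print(is_dag({"A", "B", "C"}, [("A", "B"), ("B", "C")]))  # acyclic chain
print(is_dag({"A", "B"}, [("A", "B"), ("B", "A")]))       # cycle A -> B -> A
```

The first call returns True (a valid DAG); the second returns False because of the cycle.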

According to Pearl (2000: 12), the following labels, taken from graph theory, describe the major components of DAGs:

• Vertices are called parents or children relative to the arrows that connect them: parents are the starting points of arrows, children the ending points.

• Vertices that do not have parents are called roots.

• Vertices that do not have children are called sinks.

• Indirect connections between vertices are called paths.

This structure corresponds to the structure of cognitive maps. The similarity between DAGs and cognitive maps is indicated by the terminology used to describe both structures. In particular, Pearl’s graph theory terminology corresponds almost exactly to the belief system terminology introduced in the previous sections. This shows that, although cognitive maps are considered belief systems rather than graphs, it is possible to think of them as DAGs. Table 8 gives an overview. Figure 10 visualizes these elements: the upper part is a cognitive map; the lower part is a DAG.

Figure 9. Structure of a directed acyclical graph.

Table 8: Compatibility of DAGs and Cognitive Maps


Cycles and Self-Loops

In spite of these similarities, there is a feature of DAGs that does not necessarily correspond to cognitive maps in particular, or to the nature of reasoning processes more generally: the absence of cycles and self-loops. Specifically, humans may reconsider certain beliefs before reaching a decision, if they reach one at all. Such reconsiderations may be represented as cycles or self-loops. Nevertheless, recall that all reasoning processes represented by cognitive maps end in decisions. Because of this, they are directed toward decisions, even if they contain cycles or self-loops. Cycles or self-loops in cognitive maps therefore represent reconsiderations only within reasoning processes that end in decisions. They do not change decisions. Based on this, it is possible to formalize cognitive maps into DAGs.29

Figure 10. Compatibility of directed acyclical graphs and cognitive maps.

Counterfactuals

Following Pearl, formalizing cognitive maps into DAGs allows the researcher to intervene on the actors’ belief systems and explore when they would not have made certain decisions. In Chapter 6, I use this approach to study worlds in which the individuals I interviewed for this research would not have decided to take up arms.

Studies exploring whether people would have behaved differently had the reality been different are called counterfactual studies. In political science, counterfactuals30 have been defined as “subjunctive conditionals in which the antecedent is known or supposed for purposes of argument to be wrong” (Brian Skyrms, quoted in Tetlock and Belkin 1996: 4).31 They are considered to offer a convenient tool to explore whether “things could have turned out differently” (7).

There is a general consensus among researchers from various fields that counterfactual analysis is “unavoidable” to explain phenomena that cannot be studied by controlled experiments that randomize the initial conditions (Tetlock and Belkin 1996: 6). There is, however, no consensus about how to engage in counterfactual analysis.32 Formalizing cognitive maps into DAGs provides a new approach to study counterfactuals.33 Specifically, it allows the researcher to intervene on the actors’ belief systems and test when they would have made different decisions had they held different beliefs. This bridges the gap between actors and structures by intervening on beliefs about the world, rather than on the world itself.

Modeling Change in the External World

External Interventions

To model change in the world, Pearl introduces external interventions. To illustrate this, Pearl draws on a simple DAG, shown below. This DAG represents relationships between the seasons of the year (A), the falling of rain (B), the sprinkler being turned on (C), the pavement being wet (D), and the pavement being slippery (E) (15). Specifically, the DAG shows a directed order from A to E in which the season influences the falling of rain (A → B) and the turning on of the sprinkler (A → C); the falling of rain and turning on of the sprinkler in turn influence the pavement being wet (B → D and C → D); and the pavement being wet in turn influences the pavement being slippery (D → E).

Figure 11. Example of a directed acyclical graph. Pearl 2000: 15.

The directed order from A to E may be described as dependency (e.g., Spirtes 1995). It differs from other orders that do not address directed relationships. For example, consider flipping a coin multiple times: the result of one toss does not depend on the result of the previous toss.

Specifically, there are two types of dependency conditions: (1) conditional dependence between vertices that are connected by an edge, and (2) conditional independence between vertices that are not connected by an edge. For example, given three variables A, B, and C, one can say that A and B are conditionally independent given C if knowledge of A remains unchanged by knowing B once C is known. Formally, this can be expressed as a conditional probability statement: P(A|B, C) = P(A|C). On the other hand, one can say that A is conditionally dependent on B if knowing B influences knowledge of A. Formally, this follows from the definition of conditional probability: P(A|B) = P(A, B)/P(B).
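These two conditions can be verified numerically on a small joint distribution. The sketch below uses made-up probabilities for a chain B → C → A, in which A is conditionally independent of B given C; none of these numbers come from the text.

```python
from itertools import product

def bern(p, x):  # probability that a binary variable with P(X=1) = p takes value x
    return p if x == 1 else 1.0 - p

# Assumed (illustrative) parameters of the chain B -> C -> A
pB1 = 0.5                # P(B=1)
pC1 = {0: 0.3, 1: 0.8}   # P(C=1 | B=b)
pA1 = {0: 0.2, 1: 0.9}   # P(A=1 | C=c)

def joint(a, b, c):      # P(A=a, B=b, C=c) via the chain factorization
    return bern(pB1, b) * bern(pC1[b], c) * bern(pA1[c], a)

def p_a1_given_bc(b, c):  # P(A=1 | B=b, C=c) = P(A=1, B=b, C=c) / P(B=b, C=c)
    return joint(1, b, c) / (joint(0, b, c) + joint(1, b, c))

def p_a1_given_c(c):      # P(A=1 | C=c) = P(A=1, C=c) / P(C=c)
    num = sum(joint(1, b, c) for b in (0, 1))
    den = sum(joint(a, b, c) for a, b in product((0, 1), repeat=2))
    return num / den

# Conditional independence: P(A | B, C) = P(A | C) for every value of b and c
for b, c in product((0, 1), repeat=2):
    assert abs(p_a1_given_bc(b, c) - p_a1_given_c(c)) < 1e-12
```

Once C is fixed, learning B changes nothing about A, which is exactly the independence condition stated above.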

The DAG above then illustrates the following condition of independence. Knowing that the pavement is wet (D) makes knowing that the pavement is slippery (E) independent of knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C). In short, knowledge of D establishes independence between E and A, B, C. On the other hand, knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C) does not make knowing the pavement is slippery (E) independent of knowing the pavement is wet (D). In short, E is conditionally dependent on D. This is the case because knowing the pavement is slippery (E) is directly dependent on knowing that the pavement is wet (D), but only indirectly dependent on knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C). In Pearl’s (2000: 21) vocabulary, the pavement’s being wet (D) “mediates” between the pavement’s being slippery (E) and whether it rains (B), the sprinkler is turned on (C), and the season (A).

Figure 12. Example of an intervention. Pearl 2000: 23.

Given these observations, Pearl models an external intervention in which the vertex representing knowledge about whether the sprinkler is on is defined as “SPRINKLER = ON.” This is visualized by Figure 12.

This figure shows that intervening on C so that it is known that the sprinkler is on makes it possible to consider the effect of “SPRINKLER = ON” without considering A → C. In the figure, this is shown by the deletion of the arrow between A and C. Formally, this can be expressed by a change in the probability distributions representing this DAG. The probability distribution of this DAG before the intervention (Figure 11) can be represented as

P(A, B, C, D, E) = P(A) P(B|A) P(C|A) P(D|B, C) P(E|D).

The probability distribution of this DAG after the intervention (Figure 12) lacks P(C|A) due to knowledge of C and can be represented as

P_C=On(A, B, D, E) = P(A) P(B|A) P(D|B, C = On) P(E|D).

The removal of A → C [P(C|A)] from the probability function is possible because knowing C (that the sprinkler is on) makes it unnecessary to consider whether A (the season) had an influence on C, as indicated by A → C. In Pearl’s words, “Once we physically turn the sprinkler on and keep it on, a new mechanism (in which the season has no say) determines the state of the sprinkler” (23).
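The truncated factorization can be checked numerically. The sketch below uses made-up conditional probabilities for the sprinkler DAG (none appear in Pearl's text) and computes the post-intervention distribution under C = On; because the factor P(C|A) is dropped, it never needs to be specified.

```python
def bern(p, x):  # probability that a binary variable with P(X=1) = p takes value x
    return p if x == 1 else 1.0 - p

# Assumed (illustrative) parameters for the sprinkler DAG
pA = 0.5                                          # P(A=1): rainy season
pB = {0: 0.1, 1: 0.7}                             # P(B=1 | A=a): rain
pD = {(0, 0): 0.05, (0, 1): 0.95,
      (1, 0): 0.95, (1, 1): 0.99}                 # P(D=1 | B=b, C=c): pavement wet
pE = {0: 0.1, 1: 0.9}                             # P(E=1 | D=d): pavement slippery

# Truncated factorization after the intervention C = On:
# P_C=On(A, B, D, E) = P(A) P(B|A) P(D|B, C=1) P(E|D) -- P(C|A) is removed
def joint_do_c_on(a, b, d, e):
    return bern(pA, a) * bern(pB[a], b) * bern(pD[(b, 1)], d) * bern(pE[d], e)

outcomes = [(a, b, d, e) for a in (0, 1) for b in (0, 1)
            for d in (0, 1) for e in (0, 1)]
total = sum(joint_do_c_on(*o) for o in outcomes)  # a proper distribution sums to 1
p_slippery = sum(joint_do_c_on(a, b, d, 1)
                 for a in (0, 1) for b in (0, 1) for d in (0, 1))
print(total, p_slippery)
```

After the intervention, the season still influences rain (the factor P(B|A) remains), but it has "no say" over the sprinkler, mirroring Pearl's description of the deleted arrow A → C.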

Drawing on such interventions, it becomes possible to study change in the external world. Specifically, it becomes possible to intervene on particular vertices that represent certain states of the world. Related to cognitive maps, it becomes possible to intervene on particular beliefs. As I show below, this allows me to explore when individuals would not have decided to take up arms.

Causal Relationships

Before relating external interventions to cognitive maps, it is important to note an important underlying assumption of external interventions—namely, that the edges in DAGs represent causal relationships. As Pearl observes, it is not possible to model external interventions by relying exclusively on probabilistic models.

In this context, Pearl argues that DAGs by nature represent causal rather than probabilistic relationships, and that their edges indicate “a stable and autonomous physical mechanism” (22). Concerning the example above, he says that the directed order from A to E is established by “causal intuition” (15). Following this intuition, one understands that the season influences the falling of rain (A → B) and the turning on of the sprinkler (A → C), that the falling of rain and the turning on of the sprinkler in turn influence the pavement being wet (B → D and C → D), and that the pavement being wet in turn influences the pavement being slippery (D → E).

According to Pearl, cause-effect connections of such physical mechanisms are so strong that it is “conceivable to change such [a connection] without changing the others” (22; emphasis in original). This allows Pearl to model an external intervention by defining a particular vertex as a particular state or thing, and to trace the effect of this intervention on what else is represented by the DAG (as indicated by the remaining vertices). Accordingly, it is no longer necessary to specify a new probability function that represents the impact of each intervention on all the other vertices. Instead, the external intervention requires only a “minimum of extra information” (22).

External Interventions on Cognitive Maps

Based on the similarity between DAGs and cognitive maps, it is possible to apply Pearl’s external intervention to cognitive maps. This allows me to study when individuals would not have decided to engage in certain behavior, such as political violence (see Chapter 6). At this stage, however, it is important to note that cognitive maps represent belief systems, whereas Pearl’s examples refer to physical mechanisms in the world. Therefore, it is not immediately obvious that external interventions can be applied to cognitive maps. In the following, I show that this is nevertheless the case, and that belief connections can be modeled as if they were causal.

Belief Connections Can Be Modeled as if They Were Causal

Pearl’s example about the sprinkler implies that it is in principle possible to perform external interventions on cognitive maps. This is the case because the example in which he performs his external intervention can itself be considered a cognitive map: Pearl treats the vertices of this DAG as knowledge, or true beliefs. Specifically, he treats them as knowledge about what season it is; whether the sprinkler is turned on; whether it rains; whether the pavement is wet; and whether the pavement is slippery. If the vertices of this DAG can be considered true beliefs, they can also be considered beliefs; and if the vertices can be considered beliefs, the edges between the vertices can also be considered belief connections.

Pearl’s example therefore shows that one can model external interventions on cognitive maps by changing particular beliefs that are knowledge.34 However, it is not clear whether the connections between the true beliefs on which the external intervention is performed and the remaining beliefs can be treated as Pearl proposes (deleting A → C; tracing the effect of C = SPRINKLER ON). This is for two reasons: (1) the remaining beliefs of cognitive maps may not be true beliefs, as in Pearl’s example, and (2) by their nature all belief connections have a subjective dimension (see the section on Belief Connections), so Pearl’s treatment of edges as stable and autonomous physical mechanisms does not immediately apply.

Accordingly, it is helpful to consider the purpose of Pearl’s external intervention, which is to deal with the uncertainty of human knowledge about cause-effect relationships in the external world. Specifically, Pearl’s external intervention overcomes this uncertainty by defining certain things to be. Once these things are defined to exist, their causes become irrelevant for considering their consequences; and their consequences can in turn be traced through the known connections between the things defined to be (causes) and other things (effects).

Table 9: Causal and Logical Connections


Relating this to belief connections, the major issue is not whether there is uncertainty about what causes certain things in the external world. Belief connections consist of logical antecedents and consequents, and represent possibilities rather than necessities.35 These connections are by their nature weaker than cause-effect connections, but they have the same structure: both express directed relationships in which certain components depend on others (see Table 9, column “Directedness”). Based on this structural similarity, belief connections can represent cause-effect connections, even though they are not cause-effect connections themselves, as demonstrated by Pearl’s sprinkler example, whose DAG has vertices representing knowledge about the world. On the other hand, they may also represent purely logical connections involving beliefs that are not knowledge, whose internal structure is the same.

This structural compatibility of belief-belief and cause-effect connections suggests that, although belief connections are logical, it is nevertheless possible to model them as if they were causal. In the following section, I show how this can serve the systematic study of alternative worlds in which actors would not have decided to engage in certain behavior.

Extending External Interventions to Beliefs That Are Not Knowledge

Given that cognitive maps can be modeled as if they were causal, it becomes possible to intervene on different types of beliefs, including beliefs that are not knowledge. For example, one can also intervene on religious beliefs, or on beliefs about feelings. This offers new possibilities for the study of counterfactuals by intervening on internal rather than external factors. In other words, it becomes possible to study counterfactuals that include actors with different internal worlds. This study does not pursue this avenue, as external factors are identified as mattering more than internal factors in relation to political violence. However, other studies might pursue this avenue to develop deeper insight into other phenomena.

Counterfactual Model

Based on the previous section, it is possible to extend Pearl’s external interventions to cognitive maps, and to use the cognitive mapping approach to study counterfactuals. In order to do so, it is necessary to consider the main components by which Pearl formally defines counterfactuals:

• A Causal Model that represents the entire structure on which the counterfactual will be modeled.

• A Submodel that represents only the change that is made to the model when introducing an external intervention.

• The Effect and Potential Response that represent what follows from the external intervention in the model.

• The entire structure of the Counterfactual resulting from the external intervention.
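These components can be sketched in miniature. The toy below uses my own illustrative assumptions (boolean beliefs, an OR mechanism, vertex names from the sprinkler example); it is not the model developed in Chapter 6. It shows the causal model, a submodel that pins one vertex, and the potential response read off after the intervention:

```python
# Toy counterfactual in Pearl's components. Each vertex is a boolean
# belief; each vertex is true if any of its parents is true (an assumed
# OR mechanism, for illustration only).
PARENTS = {"B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

def evaluate(exogenous_a, interventions=None):
    """Causal model: compute every vertex from the exogenous input A.
    `interventions` (Pearl's submodel) pins vertices regardless of
    their parents; the returned values are the potential response."""
    interventions = interventions or {}
    values = {"A": interventions.get("A", exogenous_a)}
    for v in ["B", "C", "D", "E"]:
        if v in interventions:          # submodel: vertex is pinned
            values[v] = interventions[v]
        else:                           # original mechanism retained
            values[v] = any(values[p] for p in PARENTS[v])
    return values

factual = evaluate(exogenous_a=False)           # causal model, no change
counterfactual = evaluate(False, {"C": True})   # intervention on C
```

Here the factual run leaves everything false, while the counterfactual run (C pinned to true) propagates forward to D and E but leaves B untouched, mirroring the deleted edge into the intervened vertex.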

Figure 13 illustrates these components, building on my earlier presentation of external interventions. The figure is divided into two parts. The upper part illustrates a cognitive map before an external intervention (Pearl’s causal model). The lower part illustrates the cognitive map after an external intervention is performed (Pearl’s submodel, effect, potential response, and counterfactual). Note that it is only possible to identify the submodel, the effect, the potential response, and the entire counterfactual after introducing the intervention.

Pearl’s definitions for the main components of counterfactuals are presented in the Appendix. To follow the analysis in this book, however, it is not necessary to read the Appendix. Instead, I explain in Chapter 6 how the computational model developed for this research applies Pearl’s theory to model counterfactuals.

Other Theories of Counterfactuals

Pearl’s theory of counterfactuals makes a significant contribution to the existing literature on counterfactuals. As Pearl writes, using external interventions to model counterfactuals has major advantages over other theories of counterfactuals (238–40), first addressed by David Hume and later developed by John Stuart Mill, David Lewis, and Saul Kripke. Specifically, Pearl’s theory differs from the works of these authors by focusing on the processes by which counterfactuals are constructed.

Figure 13. Modeling counterfactuals.

There is a vast body of literature about theories of counterfactuals, particularly in the field of philosophy, and the following paragraphs can by no means give a complete overview or analysis. Rather, my aim is to briefly present some of the main features of this literature and identify some of the major contributions offered by Pearl’s approach. Following Pearl’s own references, this section addresses Lewis’s theory of counterfactuals, to which various works in the study of political science refer (e.g., Fearon 1991; Sylvan and Majeski 1998). In addition, the section addresses Hume, because he was the first researcher to explicitly address counterfactuals, and Kripke, whose theory of counterfactuals draws on the work of Lewis; Kripke’s theory has also been applied to the study of political science (Sylvan and Majeski 1998) and computer science (Peralta, Mukhopadhyay, and Bharadwaj 2011).

According to David Hume, knowledge about cause and effect is available to humans from their experience (rather than from reasoning by itself), in which they frequently find that certain things are conjoined with each other (the regularity account of causation). His definition of causation, which was the first to directly address counterfactuals, is (1772: 90; also quoted in Pearl 2000: 238): “we may define a cause to be an object followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words, where, if the first object had not been, the second never had existed.”

Following Hume, causality and counterfactuals appear to be connected to the regularity of observations. However, this overlooks that counterfactuals are by nature not observed. As Peter Menzies has written (2014): “It is difficult to understand how Hume could have confused the first, regularity definition with the second, very different counterfactual definition.” Pearl understands this definition to indicate that “Hume must have felt that the counterfactual criterion is less problematic and more illuminating” than the regularity account itself (238), and observes that Hume, who never developed a theory of counterfactuals, does not acknowledge that counterfactual statements are by nature more complicated than causal statements.

Perhaps the most famous contribution to the study of counterfactuals was presented by Lewis in 1973. In his theory, counterfactuals are considered possible worlds, a term that immediately indicates that Lewis treats counterfactuals as possible, while Pearl models them as if they were real. Lewis’s possible worlds are different from the real world and can be evaluated as more or less similar to the real world. This similarity is assessed by a truth condition that compares different possible worlds with the real world.36

Following this understanding, Lewis treats counterfactuals as entirely separate from the real world, that is, as having an independent existence that may be more or less similar to the real world.37 By contrast, Pearl treats counterfactuals as based on the same mechanisms as the real world. In Pearl’s approach, counterfactuals are never entirely separate from the real world. On the one hand, there could be counterfactuals that are exactly like the real world except that they include one different “thing” (the variable on which the intervention is performed); on the other hand, there may also be counterfactuals in which every “thing” is different from the real world, but whose causal connections are the same as in the real world. Accordingly, Pearl’s counterfactuals may include different antecedents but the same consequences that exist in the real world, as well as different antecedents and different consequences. Like Lewis’s, however, they do not include the same antecedents and consequences that exist in the real world.

Lewis, moreover, defines causation based on counterfactuals, not counterfactuals based on causation as Pearl does. He does so by defining (1) causal dependency between two different possible events, (2) truth conditions for causal dependence that reflect the real world, and (3) a causal chain that consists of a sequence of events. Lewis’s specific definitions have been discussed in a large body of literature, and a full treatment far exceeds what can be offered here. Rather, his procedure by itself indicates that, instead of directly intuiting causal connections in the world as Pearl does, Lewis approaches causality by looking at what the world is not. Accordingly, counterfactuals are a way to understand the real world and involve looking backward into the past, based on which one knows that certain things in the real world exist. This proceeds in the following way: (1) real world: consequence → (2) modified real world: counterfactual consequence → (3) search for counterfactual antecedents. Pearl’s approach instead looks directly at the mechanisms that make up the world as it is, so that his modeling of counterfactuals can be expressed as (1) real world → (2) modified real world: counterfactual antecedent → (3) search for consequences of the counterfactual antecedent.
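The contrast between the two procedures can be made concrete on a toy graph. The sketch below is my own illustration using the sprinkler vertices, not either author's formalism: the Lewis-style route searches backward from a consequence for candidate antecedents, while the Pearl-style route propagates forward from a changed antecedent to its consequences.

```python
# Toy DAG: A = season, B = rain, C = sprinkler, D = wet, E = slippery.
EDGES = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

def antecedents(graph, consequence):
    """Backward, Lewis-style: which vertices could have produced this
    consequence? (Direct parents only, for simplicity.)"""
    return {v for v, children in graph.items() if consequence in children}

def consequences(graph, antecedent):
    """Forward, Pearl-style: what follows from the changed vertex?"""
    found, stack = set(), list(graph[antecedent])
    while stack:
        v = stack.pop()
        if v not in found:
            found.add(v)
            stack.extend(graph[v])
    return found
```

Asking why the pavement is wet (D) searches backward to rain and sprinkler; intervening on the sprinkler (C) propagates forward to wetness and slipperiness.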

The works of Saul Kripke present another account of counterfactuals, based on the semantic analysis of modal logic (1980, 1963).38 Kripke’s account also considers counterfactuals as possible worlds, but rejects the notion that every possible world is entirely different from the real world.39 In this sense, Kripke’s work is closer to Pearl’s. Specifically, Kripke uses the connections between things (represented by functions) to identify different worlds, and treats different worlds as connected to each other by certain components (represented by necessary propositions). By contrast, Pearl treats the connections between things in the real world and in counterfactuals as the same (causal mechanisms), and identifies counterfactuals through the modification of particular things in the real world (external interventions). This difference is indicated by contrasting Kripke’s definition of worlds with Pearl’s definition of counterfactuals and reality: Kripke draws on binary functions whose output is truth conditions, whereas Pearl draws on directed functions that identify parents and children. Kripke’s account of counterfactuals therefore allows the exploration of particular propositions that are true only in certain as opposed to all worlds, whereas Pearl’s work allows investigation of the effects that particular propositions that are not true in the real world would have on reality.

Conclusion

This chapter has introduced the cognitive mapping approach, which I apply to study the question of why some individuals decide to take up arms while others, who live under the same conditions, engage in nonviolent activism instead. Responding to the abandonment of the cognitive mapping approach in the field of political science, I have presented a formalization that allows researchers to systematically analyze cognitive maps. Specifically, I have formalized cognitive maps into DAGs, which makes it possible to develop computational models that process cognitive maps and opens new possibilities for the application of cognitive maps in political science.

My formalization is based on Pearl’s theory of causality. It provides new possibilities not only for the application of cognitive mapping but also for the study of counterfactuals. Specifically, it suggests how intervening on the belief systems of political actors allows us to explore their behavior in alternative worlds, that is, in a reality in which they hold different beliefs about religion or about other factors that are not knowledge. This new approach to the study of counterfactuals intervenes on the actors’ beliefs about the world rather than on the world itself.
