
1 Imaging Atoms, Imagining Information: Rhetorical Dynamics of the Scanning Tunneling Microscope

A curious thing happened to scientific concepts of atoms around the time that D. M. Eigler and E. K. Schweizer published the “IBM” images. Discussions of atoms drifted from the dominant quantum-mechanical perspective in which individual atoms cannot be measured because, as quantum physicist Erwin Schrödinger explains, “The individual particle is not a well-defined permanent entity of detectable identity or sameness” (qtd. in Regis 155). According to this view, atoms do not occupy one place at one time, do not have boundaries, and cannot be measured singly: atoms can only be measured in collective quantities. After the 1980s, however, scientists studying atoms returned to talking about atoms as individual, bounded entities—similar to how Isaac Newton envisioned atoms, although with a twist: now atoms were manipulable, almost tangible. The shift towards conceptualizing atoms as manipulable has important consequences for how we understand and move within the world around us. The shift towards the manipulable atom also forms a rich site for examining rhetorical practices in technologies, scientific fields, and the cultures in which the technologies and fields exist.

Why the shift towards the manipulable atom happened is not entirely clear. In a popular history about the early development of nanotechnology, Ed Regis contends that the events that helped spur what he calls a paradigm shift in the concept of atoms in the early 1990s include Richard Feynman’s 1960 speech “There’s Plenty of Room at the Bottom” and Robert Van Dyck, Philip Ekstrom, and Hans Dehmelt’s capture of individual electrons in 1976 (Van Dyck, Ekstrom and Dehmelt 776). However, Regis and others also suggest that the scanning tunneling microscope (STM), and related scanning probe microscopes, played a part. Science studies scholar Jochen Hennig, for example, argues that the STM has occasioned a shift in the definition of atoms (“Images”). Regis claims that Eigler and Schweizer’s “IBM” atom manipulation proved that atoms “were in fact and could be treated as mechanical, Newtonian entities. They were objects that could be made to do things” (269).

Scientists also point to the development of the STM and related microscopies when accounting for the shift toward understanding atoms as manipulable, individual entities. For example, one scientist I interviewed in 2005 explained the conceptual change in terms of the development of techniques:

When people first started thinking about surfaces and working on them scientifically, they thought in very atomistic ways, but they didn’t really have techniques to look at them. They just kind of came up with ideas. And then as techniques developed, the techniques tended to be what was called reciprocal space [as opposed to “real space”], so they’re diffraction measurements and people started thinking about periodic structures and then trying to guess what those structures were and compare them to their data. And then when scanning probes [that collect real-space data] came along, two things happened. [One was that] [t]he view flipped back to the atomistic view. . . . 12

In another example from the journal Science, two scientists who use the STM, James K. Gimzewski and Christian Joachim, summarize the impact of STMs on scientists’ relations to atoms: “By the early 1980s, scanning tunneling microscopy (STM) . . . radically changed the ways we interacted with and even regarded single atoms and molecules” (1683). Gimzewski and Joachim’s implication that scientists expect to interact with atoms—so much that interacting with atoms is ranked as more important than regarding atoms—indicates the importance of interaction. In Gimzewski and Joachim’s view, supported also by Eigler and Schweizer’s images among other instances, atoms are not simply solid masses; atoms are also masses with which humans can interact.

The assertion that humans can interact with individual atoms affects scientists working with the STM as well as the development of nanotechnology, as mentioned in the introduction. While rhetorics of arguments that use atom manipulation as justification for developing and funding nanotechnology reveal fascinating dynamics of policy and field formation, the rhetoric of atom manipulation through interaction with the nanoscale also functions at the level of everyday scientific practice, including imaging. The fact that discourses about atoms came to include assumptions of interaction for scientists also prompts further questions for rhetoric. Statements from the scientists and historians mentioned above, for example, suggest the STM is a key player in this change, although the STM did not create the first views of atoms—Erwin Müller first photographed atoms in 1955 with a field ion microscope (Müller and Bahadur; Müller). Whether or not the STM affected the shift towards understanding atoms as manipulable, the fact that the STM is mentioned in discourses about this shift suggests that the STM exerts some influence on scientific practice and discourse.

This chapter explores what is persuasive about the STM and the rhetorics of the STM’s operating dynamics in visualizing the nanoscale, contributing to a partial13 account of how it is that atoms become both visible and manipulable. I identify rhetorical possibilities the STM includes and encourages through identifying and analyzing STM operating dynamics in the context of the STM and broader scientific, medical, and digital visualization trends. I also track the influence of STM dynamics on images, information, and atoms to demonstrate rhetorical links between the operating dynamics and productions of the STM. In so doing, I present (and argue for) a method for studying the rhetorics of visualization technologies that includes analysis of productions and production practices. Insights developed from analyzing STM dynamics individually, and in connection with STM productions, can further inform analyses of STM images as well as the scientific and other discourses to which the images contribute, including analyses found in this book. Before turning to the STM’s dynamics, I demonstrate how scientific and medical instruments make rhetorical contributions to scientific and medical discourses, and articulate the STM’s relationship to other recent scientific visualization technologies to show the significance of the STM for rhetoricians of science, medicine, and digital technologies.

Visualization Technologies as Rhetorical Instruments

Instruments contain much more than springs, circuits, and film, and they are certainly not (pace Bachelard) merely reified theories. Instruments embody—literally—powerful currents emanating from cultures far beyond the shores of a master equation or an ontological hypothesis.

—Peter Galison

Optical devices constitute “points of intersection where philosophical, scientific, and aesthetic discourses overlap with mechanical techniques, institutional requirements and socioeconomic forces.”

— Jonathan Crary

Historian of science Peter Galison and art historian Jonathan Crary’s comments on scientific and optical instruments exemplify some of the ways in which scholars from science studies, history of science and technology, art history, and related fields analyze how instruments develop and operate within complex systems of social and technical knowledge production. However, not many scholars have focused on the rhetorical aspects of instruments within knowledge-producing systems. Instead, most rhetoricians focus on either the productions of scientific or medical instruments (such as specific images or texts associated with knowledge produced by instruments) or the practices afforded by tools or technologies more directly related to composition (such as the computer).14 However, elaborating on how scholars have studied the functions of such instruments offers support for an argument about how scientific instruments, such as the STM, include rhetorical functions. While researchers using new instruments can use an appeal to novelty to argue for the significance of the instrument, rhetorical appeals based on newness are limited in time and scope. Instruments can function in more potent rhetorical ways and influence the generation and practice of rhetorics that affect discourses surrounding knowledge production.

Instruments as Rhetorical Entities

Scholarship in the history of science, science studies, and art history features arguments for considering instruments as situated within the complexities of practice. For example, scholars within laboratory studies, a subfield of science studies, have generated ethnographies of the work of scientists that situate instruments within everyday practices, and describe how instruments function within complex systems of knowledge production.15 While not focused on rhetoric, studies that situate instruments within the context of practice are also useful for understanding the rhetorical contributions of instruments within the context of scientific practice. Two ways that historians, science studies scholars, and art historians situate and analyze instruments also demonstrate the significance of analyzing the rhetorical aspects of instruments.

First, many scholars of science studies and related fields have used Bruno Latour and Steve Woolgar’s theorization of laboratory work in their classic ethnography, Laboratory Life, to structure analyses that account for the wider cultures that Galison and Crary mention, as well as the daily complexities of scientific practice. Latour and Woolgar conceptualize the goal of laboratory work as the production of inscriptions—textual or visual marks on paper in scientific journals. The practices that produce inscriptions are the work of the laboratory; inscriptions form part of how scientists convince other scientists that statements produced by scientists are scientific fact, are worth passing along, and are worth citing (Latour, “Drawing” 24). Indeed, Latour claims that inscription practices lead towards rhetorical as well as epistemic goals: Significant uses of writing or visualization in science are “those aspects that help in the mustering, the presentation, the increase, the effective alignment, or ensuring the fidelity of new allies” (“Drawing” 24).

Focusing on inscriptions while analyzing scientific practice highlights the rhetorical possibilities of instruments, especially as inscription practices can include devices (such as instruments) as well as communicative methods: material and social practices inform and contribute to scientific knowledge production, and also influence scientific method.16 Some rhetoricians also acknowledge the usefulness of Latour and Woolgar’s concept of inscriptions. Jeanne Fahnestock, for example, calls for rhetoricians to “come to terms with the many techniques of visual inscription used to generate evidence” in order to develop a visual rhetoric of science (“Rhetoric” 284). In a recent article, Chad Wickman uses the concept of inscription to analyze technical laboratory practices and text production, focusing on how visual representations become rhetorical objects in scientific practice (“Observing” 152). While Wickman focuses on visual representations, Latour and Woolgar’s concept could also be used to analyze technical practices used in the laboratory and in image production, with attention to how instruments become part of the rhetorical process inherent in inscription practices.

Second, as part of their function as inscription devices in the laboratory, the operation of instruments requires that users engage in practices specific to the instruments. Over time, instrument users become habituated to the practices necessary to use instruments.17 The fact that users do become habituated experientially to certain practices through instrument use is another reason to look carefully at instruments as rhetorical entities: Instruments help establish repeated, everyday practices of knowledge-making, or may employ practices already used by a community. Media studies scholar Scott Curtis presents a fascinating example of the influence of a community’s habits of observation in the introduction of photography as an observational technique in medicine. Curtis argues that the adoption of photography depended not only on its automation, but also on its presentation of “a set of features that spoke to established and emerging principles and habits of observation” (85). Habituated practices of using instruments may influence users in ways that subsequently affect the development or use of an instrument, or that affect the expression or formation of the inscriptions an instrument produces. Thus, how specific practices become habituated, and how practices affect the uses of other instruments or other practices, becomes relevant to understanding how science works. Understanding how habituated practices of instrument use inform rhetorical practices then becomes important in order to fully account for the rhetorics of visual inscriptions. (Chapter 2 discusses the establishment and habituation of one set of vision practices.)

Attention to habituated practices of instrument use also contributes to the rhetoric of scientific processes and practices. Wickman and Heather Brodie Graves argue for considering scientific practices and processes as analytic objects of study for rhetoric, including the rhetoric of the processes of scientific inquiry (Wickman, “Rhetoric”; Graves 81–142). Graves, for example, analyzes the language physicists use when engaged in the process of making science; Wickman includes nonlinguistic modes in his analysis, discussing and extending Aristotle’s concept of techne to argue that processes of scientific inquiry are rhetorical. Doing so creates a space for the rhetorical analysis of scientific practice amidst changes to that scientific practice that are spurred by changing technologies (23), including a focus on how “visuals and other nonlinguistic modes contribute to scientific meaning-making and persuasion” (23). Arguments from Wickman and Graves sketch some ways in which scientific practices and processes are rhetorical; a focus on instruments such as the STM extends the argument for analyzing processes and practices by exploring how instruments used in everyday practice and production create rhetorical possibilities.

The STM as Visualization Technology: Context, Characteristics, and Rhetorical Significance

Instruments are not only important because of their everyday use in the laboratory. Instruments are also important because they are situated within broader, complex trends in scientific and technical development. Situating an instrument within broader trends can illuminate cultural influences that are common to more than one instrument; therefore, analysis of one instrument may provide insights for the study of other instruments. The STM shares some rhetorically significant characteristics with other recent scientific and medical visualization technologies, such as PET scans, CAT scans, MRI, fMRI, and ultrasound. These rhetorically significant characteristics include the presence of complex mediation and interpretation practices, as well as the use of images to arrange and deliver large amounts of data.

A brief summary of how the STM works illustrates the complexity that Galison and Crary discuss, and introduces characteristics common to recent digital visualization technologies. To use the STM, researchers apply a weak current to the conductive metal tip of the STM. The tip is then brought close to the surface of a conductive substance (i.e., the sample). Although a gap exists between the tip and the surface atoms, when an electric charge is applied to the tip, the electrons of the surface atoms “tunnel” through the space (frequently a vacuum) between the tip and the surface to interact with the atoms on the tip (hence, the “tunneling” in the microscope’s name). The tunneling, a quantum effect, thus changes the voltage of the tip because the tunneling electrons, like all electrons, are charged. The electron cloud is exponentially sparser the further the electrons travel from the atom’s nucleus, and the voltage changes according to the density of the cloud, so that the voltage change corresponds to the distance from the surface atoms. The tip passes just above the surface of the sample using a piezoelectric element (or piezo) that either slightly expands or contracts when voltage is applied, and that produces an electric current when pushed, enabling fine control and measurement of voltage changes produced by the different densities of electrons at each spot. The STM operates in one of two modes: a constant-current mode, which keeps the current of the tip constant by allowing the tip to move up and down, depending on the electron densities the tip encounters from the surface; or a constant-height mode, which measures the current of a tip that does not move up and down, but instead keeps a fixed distance from the sample. The STM then collects measurements of changes in either the movement of the tip or of the current at fixed intervals as the tip scans the surface.
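Stated compactly (in a standard textbook approximation, not a formula drawn from this chapter’s sources), the tunneling current I depends on the bias voltage V, the tip-sample gap d, and the effective barrier height of the gap roughly as

```latex
I \propto V \, e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m\phi}}{\hbar}
```

where m is the electron mass, \phi the barrier height, and \hbar the reduced Planck constant. A change in the gap of about one Ångstrom changes the current by roughly an order of magnitude, which is why the measured signal tracks the tip’s distance from the surface atoms so sensitively.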

To create an image from these measurements, the STM arranges the data in a matrix in the order of the sampled measurements and assigns each measurement a value. The values are then sent to a computer monitor, which assigns each value to a pixel.18 The measurements are expressed in pixels as different values in a gray scale or false color scale (“false” because atoms do not have colors: light waves are too large to optically register atoms) to create more visible variation; the accumulated matrix of values, presented in pixels, composes the image. Pixels can be arranged in numerous formations to construct and communicate data in spatial arrangements (what we think of as a digital image, generally) or as histograms that graph numerical frequencies of the values assigned to the pixels. The imaging of data by the STM highlights the differences between measurements, and so makes visible the topographic or electronic properties of the sample.
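As a concrete, hypothetical illustration of this arrangement of data into pixels, the minimal sketch below (in Python; the array contents, dimensions, and color map are invented for the example and do not reproduce any particular STM software package) maps a matrix of sampled measurements onto a 0–255 scale and renders it both as a false-color image and as a histogram of value frequencies:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 256 x 256 matrix of sampled STM measurements (e.g., tip heights),
# stored in the order the points were scanned; real data would come from the instrument.
measurements = np.random.rand(256, 256)

# Assign each measurement a pixel value on a 0-255 scale (a gray scale);
# "false color" is simply a different mapping of the same numbers.
lo, hi = measurements.min(), measurements.max()
pixels = np.uint8(255 * (measurements - lo) / (hi - lo))

fig, (ax_img, ax_hist) = plt.subplots(1, 2, figsize=(8, 4))
ax_img.imshow(pixels, cmap="afmhot")     # spatial arrangement: what reads as the "image"
ax_img.set_title("False-color rendering")
ax_hist.hist(pixels.ravel(), bins=64)    # histogram of how often each value occurs
ax_hist.set_title("Histogram of pixel values")
plt.show()
```

The same matrix of values thus supports both presentations mentioned above: a spatial arrangement that reads as a digital image, and a histogram that graphs the numerical frequencies of the values assigned to the pixels.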


Figure 3. “How an STM Works.” David Beck, (c) Exploratorium, www.exploratorium.edu.

Mediation and Interpretation Practices

As this summary of the operation of the STM suggests, the instrument uses extensive processes to mediate between data and image in converting measurements of phenomena into data, and then into images. Many scientific and medical visualization technologies that produce digital images engage in similarly complex processes of mediation to visualize the invisible—whether invisibility is due to location, such as within a living body (as ultrasound, CAT, PET, and fMRI technologies visualize); or to size, such as below the threshold at which light operates (as probe microscopes such as the STM visualize); or some other invisibility.19 Digital scientific visualization technologies frame phenomena (such as tunneling data) as measurable and mathematically describable—an important component of scientific work (Lynch, “Externalized Retina” 170). This is often accomplished through non-lens methods that measure non-optical attributes, such as using radioactive tracers to follow molecule or atom flows (e.g., PET scans), magnetic resonance signals from atomic nuclei to map tissue and activity (e.g., MRI, fMRI), ultrasound waves to measure tissue density (e.g., ultrasound), or probes to measure non-visual properties such as atom interactions (e.g., STM) or friction (e.g., atomic force microscope).

The mediation practices of the STM, like the mediation practices used in visualization technologies mentioned above, operate within broader scientific practices of using visualizations. Drawing from an ethnographic study of image-making practices in a scientific laboratory, science studies scholars Klaus Amann and Karin Knorr Cetina document the process of producing visual data in science as first seeing and then deciding what the data is; this is followed by determining what the evidence is. Each of Amann and Knorr Cetina’s three modes of practice holds different goals and practices that are complex and socially organized. Amann and Knorr Cetina comment, “just as scientific facts are the end product of complex processes of belief fixation, so visual ‘sense data’—just what it is scientists see when they look at the outcome of an experiment—are the end product of socially organized procedures of evidence fixation” (86). The mediation practices that researchers use to operate the STM and other visualization technologies fit within Amann and Knorr Cetina’s modes of practice. The practices that Amann and Knorr Cetina identify include multiple processes of mediation that are not only based on scientific practice, but may include broader social practices, thus indicating one way in which visualization technologies form points of intersection for the “mechanical techniques, institutional requirements, and socioeconomic forces” that Crary mentions (8). Therefore, the practices of mediating between data and image also form a site for analyzing the rhetorical work of instruments.

Along with extensive mediation practices, inscriptions tend to require extensive interpretive practices to understand what is being shown, although visual inscriptions may present phenomena in ways that look simple or apparent. Amann and Knorr Cetina’s study underscores the complexity of the processes of visualization and interpretation in making scientific knowledge. The apparent simplicity of the image as a form for presenting data sometimes leads to confusion about how to read visual inscriptions made by instruments, even by experts.20 For example, in the early days of the STM, researchers occasionally misinterpreted what the images showed (Mody, Instrumental 12–13; Woodruff 75). In a study of PET scans, Joseph Dumit explains that some of the difficulties researchers have in interpreting scans involve not only habits of seeing, but also theoretical questions of interpretation (68–69).21 Thus, practices of mediation and interpretation of the productions of instruments form two aspects of the complexity of digital visualization technologies used to create inscriptions; interpretive practices, like mediation practices, suggest possible directions for analyzing rhetorical functions.

Images of Data

Galison’s account of the merging of two traditions of presenting evidence in microphysics in the early 1970s points to an origin for the development of recent scientific and medical visualization technologies that present large amounts of data in image form, such as the STM, MRI, and PET scans (Image and Logic 570). Galison argues that the development and use of electronic images in physics for the first time allowed researchers from these two traditions to combine methods and see images as evidence. Because large amounts of data could compose images, researchers who had focused on images like photographs for evidence could use the same methods—images—as those researchers who relied on statistical data for evidence. The use of images to present evidence authoritatively within the field was so important, Galison argues, that “the controllable image” has become the main form that data has taken in the sciences (Image and Logic 810).22 Indeed, many scientific and medical visualization technologies that rely on data in image form were developed and used in the 1970s.23 The STM, too, was developed within the trend of using data to compose images: The inventors of the STM first filed for patents in 1978 in Switzerland, and then in 1980 in the US (Granek and Hon 102).24

The expression of data in image form, common to visualization technologies such as the STM and the MRI, raises questions about how the use of data-images, or informational images, affects the ways in which images function in discourse and in reading practices or assumptions about what the images convey. Amann and Knorr Cetina’s observation that the significance of data must be discovered highlights the importance of exploring how the presentation of data in electronic and image form affects the practices scientists use to discover or interpret the data. The fact that concerns about communication with informational images are common to a group of digital visualization technologies such as PET, MRI, STM, and ultrasound suggests that while the details of the context of practice for each visualization technology offer unique insights, a study charting the rhetorical effects of one visualization technology (like the STM) can be relevant to the study of the others. Further, few studies exist on the rhetorics of the creation of data images by these technologies, or on the rhetorics of digital images such as those the STM, MRI, and related technologies produce.25

The complex mediation and interpretation practices needed to express data in images also affect users in ways that suggest directions for further study of the rhetorics of instruments. Scholars in art history, science studies, and media studies recognize the importance of investigating the effects of an instrument on the experimenters or users of the instruments; the insights of these scholars provide a basis for extending analysis to rhetorical aspects. For example, Crary argues that optical instruments reconfigured the position of the nineteenth century observer, creating a change in visual practices (see 8-9 for a brief summary of his main argument as it relates to instruments). Galison and Daston examine the identity of the scientist in Objectivity, recounting how the scientific self has been shaped by historically specific scientific practices that are associated with the main “epistemic virtue” of the time, such as objectivity (191-251) or, more recently, trained judgment (357-61). Curtis explores the ways in which photography trains medical observers. While Crary, Daston and Galison, and Curtis do not focus on rhetoric, their detailed analyses of instruments in practice present historical and social models of how instruments may affect users, perhaps informing rhetorical analyses of how users are influenced by using instruments.

A Method for Analyzing Material, Embodied Interactions

This chapter performs a close reading of the operations and productions of the STM as it demonstrates how the specific rhetorical influences of a technology affect its productions (such as STM images). I focus on the rhetorics of the STM as part of the complex, material, and embodied practices of scientific knowledge-making and communication. Attention to the production processes of technologies is likely to become more and more important for analyzing the productions of those technologies, whether those productions take the form of images, databases, or other digital objects; how we make arguments with visualization technologies can be informed by the productive capacities and effects of those visualization technologies. In performing a close reading of the operation of an instrument, and considering the ways in which the STM is imbricated within social and material practices as an inscription device, this chapter highlights the material and embodied rhetorics at play in the formation of scientific knowledge, adding a rhetorical dimension to science studies of embodiment, and a science studies dimension to rhetorical studies of embodiment.26

My close reading of the STM through the dynamics of its operation also necessitates analysis of interaction; interaction is crucial to the operation of the STM. As the summary of the STM’s operations above suggests, creating images of the nanoscale involves interactions between electrons, material apparatuses (including scanning and computer components), human actions, computer software, and cultural practices—practices of seeing and organizing as well as coordinating information and things. STM operation relies on interactions among the constitutive elements of the instrument, interactions that exist not only between user and sample, as in most microscopical work (Keller, “The Biological Gaze” 112), but also at other levels, such as the interaction between the sample and the STM. Therefore, closely reading the interactions that compose STM dynamics reveals rhetorical dimensions of scientific practices of making knowledge.27

What interaction means, however, also becomes a question: In casual use, “interaction” is often over-generally applied to anything involving a computer, and the term’s definition is disputed among scholars studying digital media and technology.28 Janet Murray clarifies that what is often meant by interactivity in computers is that “they create an environment that is both procedural and participatory” (74). Rhetoricians analyzing the context of writing in new media or digital media environments focus on the participatory aspect of interactivity, without paying much attention to the procedural aspect that Murray identifies, using the term to indicate the ability of the user/reader to communicate with the user/writer. A few rhetoricians, such as Teena A. M. Carnegie, James Porter, and Ian Bogost, however, examine procedural as well as participatory aspects of interaction.

Porter, Carnegie, and Bogost suggest that the value of interaction lies in shaping audience response as a structural element, as opposed to only content—that form and content are both components of the rhetorics of interaction. Carnegie argues that the computer interface functions as a Ciceronian exordium, a rhetorical opening strategy that aims to engage the audience so that the audience is receptive to hearing the argument (165). Carnegie claims that three modes of interactivity drawn from new media and human-computer interaction (HCI) research, namely multi-directionality, manipulability, and presence, are also the rhetorical modes of the interface (166). Carnegie links these three modes to higher levels of audience engagement, drawing from Sheizaf Rafaeli and Fay Sudweeks’s finding that encouraging interactivity in users produces higher levels of engagement (166). Carnegie’s exploration of how the rhetorics of engagement function in visuals such as interfaces helps make interfaces visible as rhetorical sites that structure interaction. Carnegie’s focus on modes of participatory interaction further articulates details of how the user becomes involved in the operating dynamics of an instrument.

Porter’s description of digital composition includes interaction as one element that writers in electronic spaces need to consider in their delivery decisions, acknowledging that “different types of computer interfaces and spaces enable different forms of engagement” (“Recovering” 217). Porter suggests that interaction forms a significant part of productive rhetoric for writers in digital spaces, especially with attention paid to how participation and procedure may be linked. Like Carnegie and Porter, Bogost develops a structural analysis of the rhetorical components of interaction, emphasizing the procedural rhetoric of interactivity in the context of videogames. Bogost argues that interactivity can be understood in relation to the Aristotelian enthymeme, in that videogame players supply the warrant as they play (43). Interaction allows the player to proceed through the content, or “argument,” of the game in ways that are mental, but also are embodied—such as using the joystick or pressing keystrokes to respond. Bogost also analyzes the procedures of computer games to interpret messages created by how computer games structure narratives according to user responses. While Bogost does not analyze the production processes of procedural rhetorics per se, his approach of considering the structuring of interactivity as rhetorical—and involving the user as well as elements of what the user engages with—is one applicable to production practices such as those used to create interfaces.

The concept of affordances presents one way to focus on what interactions may be most significant, and as such, most significant for understanding the rhetorics of procedural and participatory components of interaction. Carolyn Miller argues for understanding parallels between rhetoric and technology (particularly communication technologies) through the idea of affordances, quoting psychologist James Gibson’s formulation of affordances as what an environment “provides or furnishes, either for good or ill” (Gibson 127). In Miller’s view, affordances help explain how technology, like rhetoric, can lead users toward some possibilities, and away from others. Miller argues for how affordances of communication technologies function:

[A]ffordances take the form not of material properties or ecological niches [as they do for physical environments like an animal’s habitat] but rather properties of information and interaction that can be put to particular cognitive and communicative uses. Thus a technological affordance, or a suite of affordances, is directional, it appeals to us, by making some forms of communicative interaction possible or easy and others difficult or impossible, by leading us to engage in or to attempt certain kinds of rhetorical actions rather than others. (x)

Following how a technology’s affordances create particular responses in users reveals rhetorical possibilities that the affordances encourage, and even create. Extending Miller’s direction, following affordances helps articulate what properties of information and interaction encourage rhetorical actions in operating a particular technology as well as what kinds of rhetorical actions are most encouraged. Tracing the persuasive in the affordances a technology creates thus provides a way of exploring interaction while attending to an instrument’s productive, material rhetorics. Following the affordances a visualization technology creates, then—and going back to Michel Foucault’s architecture of making visible, as discussed in the introduction—helps to identify some of the available possibilities that shape what can become visible.

As I analyze the operating dynamics of the STM in light of their affordances, I also observe the interactions the operating dynamics encourage in relation to four main ways of characterizing interaction from HCI (human-computer interaction) studies in order to explore how the interaction configures possibilities for the user. In a survey of different views of interactivity from the HCI research traditions, Sally McMillan explains that three of the four are based on Claude Shannon’s model of communication as information a sender communicates to a receiver: the user communicating to the computer; the computer communicating to the user; and an equal, adaptive interaction where “the computer is still in command of the interaction, but that it is more responsive to individual needs” (McMillan 175). The fourth constitutes interaction differently, in terms of what Mihaly Csikszentmihalyi refers to as “flow”: it “represents the user’s perception of the interaction with the medium as playful and exploratory” (qtd. in McMillan 173–4). “Flow” tends to include participation from both sides, so that neither computer nor user occupies either “sender” or “receiver” roles; instead, computer and user take on both roles, and so become co-creators or participants (McMillan 174). As McMillan further describes, flow is

characterized by a state of high user activity in which the computer becomes virtually transparent as individuals ‘lose themselves’ in the computer environment. Virtual reality systems seek this level, but it may also be characteristic of gaming environments and other situations in which the user interfaces seamlessly with the computer. (175)

These four models of interaction present different experiences for users; the interaction models may also generate different patterns of user response, further determining how users interface with the computer through the screen. Comparing STM dynamics to the models of interaction provides a more specific sense of how STM dynamics are structured.

Studies in rhetoric, science, and HCI identify “interactivity” as the structuring of events that involve writers/users, media, knowledge and information, and users/readers in particular configurations. A focus on “interactivity,” then, is also a focus on finding, describing, and analyzing interfaces—places of interaction, boundaries where forces or disparate elements meet. This chapter begins my focus on interactivity as I identify, describe, and analyze interfaces; Chapter 3 and Chapter 4 further analyze interfaces. In the next section, I analyze STM dynamics to identify the affordances that instruments create that, in turn, impact the shaping of inscription practices scientists engage in when using the STM. The STM dynamics influence the form of the inscriptions that help create scientific statements, and so shape nanotechnology and the concept of the atom. The affordances shape rhetorical possibilities inherent in the inscription practices that are also visible in the inscriptions themselves, as I explain below.

Manipulating Atoms: Microscope Interactions

Three main dynamics within extant visualization and instrumental traditions in science and related technologies help constitute the visualization practices of the STM: electron tunneling, raster scanning, and image processing using a graphic user interface (GUI). Each of these three dynamics structures interactions between apparatus, user, data, and the nanoscale; informs how instruments mediate the transformation of phenomena to data and to image; and helps structure how scientists interpret the data in the image. While each of these main dynamics functions separately to some extent, the interactions between the dynamics combine, expanding connections and enhancing the intensities that each may possess alone. The coordinated interactions of the dynamics of electron tunneling, raster scanning, and GUI image processing then enable the STM to function, producing and arranging data about the nanoscale, thus affecting what STM images convey and how the images do so. The coordinated interactions of the STM operating dynamics structure the possibilities for making atoms visible—and also help create the productive rhetorical possibilities of the STM.

Electron Movement: Tunneling Electrons and Interactive Surfaces

One of the major dynamics on which the design of the STM is based relies on the interactions between a conductive surface (composed of a metal, for example) and the microscope tip, as the tip does not contact the surface, but remains about a nanometer away (Mantooth 9). Instead of contact, the interaction between tip and surface is a result of electron tunneling. Tunneling is based on the articulation of electrons as both particles and waves in quantum mechanics, where “each electron behaves like a wave: its position is ‘smeared out’” (Binnig and Rohrer, “The Scanning Tunneling Microscope” 52). The behavior of electrons as both particles and waves allows surface electrons to “tunnel” through the barrier of the vacuum between surface and tip atoms, and thus interact with the electron cloud of the atom or atoms on the tip. Measurements of the tip’s electron clouds through voltage thus present a way to understand the surface atoms through their behavior. Use of electron tunneling as a measurement technique in the STM is part of a broader trend in creating images from non-optical data, and has implications for what can be visualized with the instrument.

Electron tunneling is a relatively new idea; the incorporation of electron tunneling into the STM shows how the dynamic fits into the larger story of the development of non-lens-based visualization technologies. In 1960, Ivar Giaever first published results demonstrating electron tunneling (Giaever 147–48). For his research, he received the Nobel Prize in 1973. However, scientists did not apply electron tunneling to instrument development until the early 1970s, when Russell Young, John Ward, and Fredric Scire created a machine called the “topografiner” that, like the STM, used electron tunneling and three-dimensional scanning to measure “the microtopography of metallic surfaces,” but used a field emitter instead of a tip to create tunneling conditions (Young, Ward, and Scire 999). The topografiner was not very successful in achieving measurements due to interference from outside vibrations, caused by people walking in the building, for example. In the early 1980s, STM inventors Gerd Binnig and Heinrich Rohrer, along with Christoph Gerber and Edmund Weibel, reduced outside vibrations enough to measure the tunneling and develop the STM (Binnig et al. 178–180).29 The story of how electron tunneling became a measurement technique illustrates the complex mediation required of some visualization technologies, mediation anchored in the practices of a larger community.

The use of electron tunneling to visualize atoms affects what can be measured as well as the relations between the different atoms interacting at the interface of the vacuum. What is measured is the atomic movement that enables electron tunneling, not an object such as an atom. Recording the interactions of electrons with other electrons in a vacuum also transforms the distance between tip and surface electrons into a dynamic interface. To create data points, then, the tip passes across the area to be imaged, sampling the changes in voltage produced by the different densities of electron interactions at different spots. Therefore, the STM maps encounters, local events of tunneling.

One effect of the use of electron tunneling to create measurements is that both tip and sample can affect the interaction (and thus the measurement). For example, the tip’s characteristics can affect the sample and the image produced from the sample, establishing a multi-directional affordance that both creates and structures the dynamic between tip, sample, and resulting image. As STM textbook author Chunli Bai explains: “The size, nature and chemical identity of the tip influence not only the resolution and shape of a STM scan but also the electronic structure to be measured” (9). For example, the conical shapes that form the rim of the Quantum Corral (Figure 4) image are not “what atoms look like”; instead, the shapes are effects of the tip’s V-shape, and reflect the tip’s traverse from one level to another while, at the same time, moving across the sample surface. In short, the shape is more like a graph of the tip’s movements over time (Russell). One result of the mutual influence of tip, sample, and image is that the shape of the tip becomes important: Ideally, one atom at the tip end should protrude slightly (even just an Ångstrom) from the others so that the applied current can flow through that one small point (J. Foster 17).30 The sample, too, can affect the tip if the atoms of the sample strongly attract and “pull off” some of the tip’s atoms. In the space of the interaction, both tip and sample meet in the vacuum, and are equally able to affect the interaction.

The use of electron tunneling also creates the possibility of continued interaction, like MRI or PET scanning, as neither sample nor tip is damaged in collecting measurements (unlike, for example, electron microscope samples that are destroyed in the imaging process). The fact that the sample is not destroyed allows researchers to collect data repeatedly over the same space, and thus track dynamics over time; the user can experience atoms as a series of movements. A Journal of Physical Chemistry B article provides an example: S. A. Kandel and P. S. Weiss state that “by comparing sequentially recorded images, [they] can see that the size and shape of the clusters [on the sample] change over time” (8103). Kandel and Weiss explain the effect of the tip on the measurement: “the mobility shown in the rearrangement of these clusters is likely (at least in part) induced by the STM tip” (8103). The dynamic that occurs between the tip and the sample as the STM collects data is similar to the HCI interaction type of “flow,” where a user experiences a merging with the computer. In this case, of course, the interaction occurs between atoms. Although the tip-sample interaction does not directly involve the user, electron tunneling affects the user’s experience of atomic phenomena through a multi-directional affordance and the related possibilities for repeated interaction. The structured dynamic of the tip-sample interaction that also produces nanoscale measurements affects what can be shown in an image and affects experiment design, just as the use of electron tunneling allows researchers to manipulate the surface using the microscope’s probe affordances. Thus, the expectation of a certain kind of multidirectional interaction inheres at the level of collecting data.


Figure 4. “Quantum Corral” image. From Science 262, 5131 (8 Oct., 1993). Cover illustration. Image originally created by IBM Corporation. Reprinted with permission from AAAS.

Raster Scanning and Z-direction Moves

The movement of the STM tip in three dimensions (x, y, and z, in the Cartesian system) forms another constitutive component of the instrument and works with electron tunneling to shape the multi-directional affordance of the STM. The microscope’s design, like that of the scanning electron microscope, relies on a mobile tip. The fact that the tip moves—and must move—to survey the sample affects how the STM measures each point. The tip moves in three different directions: in the x and y directions in a raster pattern to collect data about points on the surface, and in the z (up and down) direction to create the tunneling conditions in which tip and surface electrons interact. Movements in x, y, and z directions coordinate with tip-surface tunneling interactions to collect data about the nanoscale; the x, y, and z movements also are part of broader traditions of dynamics used by other visualization technologies. In the STM, the x, y, and z directional dynamics structure interactions that encourage manipulation by the user.
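To make this coordination of x, y, and z movement concrete, the following minimal sketch (in Python, with invented names, numbers, and a simplified feedback scheme; it is an illustrative assumption rather than a description of any actual STM controller) simulates a constant-current raster scan: the tip steps back and forth through x and y, and at each point a feedback loop adjusts z until a modeled tunneling current matches a setpoint, recording the tip height as the datum for that point.

```python
import numpy as np

def tunneling_current(gap_nm, bias_v=0.1, kappa=10.0):
    """Textbook-style approximation: the modeled current decays exponentially
    with the tip-sample gap (gap in nm, kappa in 1/nm); values are illustrative."""
    return bias_v * np.exp(-2.0 * kappa * gap_nm)

def constant_current_scan(surface, setpoint=1e-3, gain=0.01, steps=200):
    """Simulate a constant-current raster scan over a hypothetical array of
    surface heights (in nm). At each (x, y) point a simple feedback loop adjusts
    the tip height in z until the modeled current matches the setpoint; the
    resulting tip height is recorded as the datum for that point."""
    ny, nx = surface.shape
    image = np.zeros_like(surface)
    tip_z = float(surface.max()) + 1.0                    # start about 1 nm above the surface
    for y in range(ny):                                   # slow scan direction
        cols = range(nx) if y % 2 == 0 else range(nx - 1, -1, -1)  # back-and-forth raster
        for x in cols:
            for _ in range(steps):                        # crude feedback on log(current)
                gap = tip_z - surface[y, x]
                error = np.log(tunneling_current(gap) / setpoint)
                tip_z += gain * error                     # retract if current too high, approach if too low
            image[y, x] = tip_z                           # constant-current mode records tip height
    return image

# Example: scan a small synthetic "surface" with a single bump.
yy, xx = np.mgrid[0:32, 0:32]
surface = 0.1 * np.exp(-((xx - 16.0) ** 2 + (yy - 16.0) ** 2) / 20.0)
topograph = constant_current_scan(surface)
```

Read this way, the resulting "image" is simply a record of where the tip had to sit in z at each (x, y) position, which is one way of seeing how the raster and z movements, together with tunneling, structure what the STM can show.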

The process of rastering, or scanning the surface in a back-and-forth pattern from which the STM builds an image out of data, forms a key dynamic in other visualization technologies. Derived from radar’s creation of an image from a signal, rastering is perhaps best known as the method that allowed cathode ray tubes, such as those used in non-digital televisions, to build an image from a linear signal. In microscopy, using rastering to image a sample had been considered since the early twentieth century, but use of rastering was first demonstrated in 1972 (Wickramasinghe 78). The STM-like topografiner also scanned microscopic surfaces and collected measurements to create an image of the surface, although the specific pattern the scanner makes is not described (Young, Ward, and Scire 999). The incorporation of rastering into scientific visualization instruments such as the topografiner in the early 1970s also fits into the trend of developing visualization technologies that Galison discusses. The STM, then, is partly formed by the “tradition” of dynamics of rastering.

In the STM, rastering only works in conjunction with measurement of tip-surface interactions in electron tunneling, because the dynamics involved in raster scanning are structured around the challenge of movement in relation to time. To create anything other than a blur, a sample would need to remain relatively still, and the tip would need to move relatively quickly. As Lev Manovich observes, one implication for images produced using rastering is that “It is only because the scanning is fast enough and because, sometimes, the referent remains static, that we see what looks like a static image” (100). However, atoms do not slow down enough to become referents; so, following Manovich’s explanation, rastering would not work in the STM unless rastering is combined with the tip-surface interactions that provide the “stability” of a measurement of movement. Even so, STM researchers often need to correct for “drift” (that is, when the sample atoms move before the tip has finished scanning) and also, at times, slow the sample atoms down by lowering the temperature to, for example, four kelvin (4 K) so that the STM can scan the atoms.31

The raster scan allows STM users to convert tip-surface interactions into a camera of sorts. In so doing, the STM does not beam electrons (like a television camera) to assemble an on-screen image; instead, the STM forms what could be called a haptic camera, a “camera haptica” (playing on “camera obscura”), as the STM converts a series of interactions between atoms in the vacuum into a series of spatially arranged data points. The use of interactions as measurements moves the visualizing process closer to the sense of touch than the sense of vision or, more accurately, in a merging of the two senses—to haptic vision—because the tip gathers data about local interactions over a series of contiguous spots, and then presents the interaction data in a two-dimensional matrix, an image form. (The concepts of “camera haptica” and haptic vision are discussed further in Chapter 2.)

The rastering movement plays a role in the design of experiments, as researchers engage in practices that rastering affords. For example, Kandel and Weiss, in the experiment mentioned above, record images “with the tip rastering quickly along different directions, and . . . see a correspondence between this ‘fast scan’ direction and the locations at which atoms in the cluster either attach or detach” (8103). Kandel and Weiss present four versions of the same atoms that have been scanned in different ways, and use these versions to explore atomic properties (8104). Kandel and Weiss’s experiment design is one example of how the tip’s movement forms an experimental tool, allowing researchers to interact with surface atoms through the STM.

The ability of the tip to move up and down in the z direction also affords manipulation, because the tip can measure the three-dimensional electronic or topographic qualities of the surface. The ability to move in the z direction has made the distance between the tip and the sample a critical component of the operation of the STM from the beginning: Soon after Binnig and Rohrer developed the STM, before they achieved atomic resolution, Binnig and Rohrer “had to struggle with resolution, because Au [gold, their sample] transferred from the surface even if [they] only touched it gently with [the] tip” (“From Birth” 398). In 1987, R. S. Becker, J. A. Golovchenko, and B. S. Swartzentruber repeated this “mistake” of touching the surface with the tip; as a consequence, the atoms moved. In a letter published in Nature, Becker, Golovchenko, and Swartzentruber reported “atomic-scale modification” of a sample surface after they applied voltage to the tip. They attributed the modification to the transfer of a tip atom to the sample surface (421). Others repeated the experiment of using voltage pulses to “pin” molecules and atoms to a surface (J. Foster 29–32). In another experiment with the STM, for example, R. C. Jaklevic used the STM to dent a piece of gold, and then re-scan the same sample to measure how quickly the dent filled itself in (six to nine atoms per minute), thus using the STM as a tool to make the event of atomic movement measurable (Jaklevic 659).

The x, y, and z directional dynamics structure what can be seen with the STM. The STM images events through measuring change and movement, much like other visualization technologies; PET scanning, for example, images dynamic processes. The combination of x, y, and z directions that intensifies multi-directional interaction dynamics also affords researchers opportunities to manipulate individual atoms. For example, in another experiment, researchers J. G. Kushmerick et al. moved a nickel atom in order to observe how far (and how) atoms hop from place to place (2983). However, until Eigler and Schweizer published the article in Nature with the “IBM” images, researchers had not announced the manipulation of atoms by actually picking them up, “dragging” them, and repositioning them—a practice that the ability to move in x, y, and z directions made possible. Although large-scale or automated manipulation of individual atoms is not a common use of the STM even as I write this, manipulation of the surface becomes almost encouraged by the dynamics created by the microscope’s data-gathering process.

The coordinated x, y, and z directions also structure possible interactions between user and atoms through the arrangement of atomic interactions. The focus on measuring a series of interactions—and the ability to intervene, to change the interactions without destroying the sample—all encourage users to participate in the data-gathering process, as the measuring process becomes part of the experiment. The STM, then, is both a data-gathering probe and an experimental probe—and also an image processor, as explained in the next section.

GUI Image Processing Dynamics

The data produced through coordinated tunneling, rastering, and z-direction interactions is arranged in a matrix to create an image of atomic phenomena. Following the general process of scientific visualization, the raw data is arranged and thus transformed into images that viewers can interpret (see, for example, Brodbeck, Mazza, and Lalanne 30–31). The computer makes image creation relatively easy; however, despite the fact that Binnig and Rohrer worked for IBM when they developed the STM, they did not use computers to develop or operate the first STMs. Instead, Binnig and Rohrer created the first STM images by mentally building up atomic visions using an oscilloscope monitor’s two-dimensional trails of the tip’s rastering (Mody, “Intervening”). For a time, Binnig and Rohrer resisted the computer when they first began presenting their work. The first public STM image, outside of their own personal visions, was created from oscilloscope lines that Binnig and Rohrer traced onto cardboard, cut out, and glued together to form a three-dimensional model of the sampled surface (Binnig and Rohrer, “From Birth” 401).32 The history of STM image development suggests that Binnig and Rohrer participated in imaging processes that encompassed or paralleled computer imaging capabilities, but did not entirely stem from use of the computer. (Chapter 3 further discusses imaging practices beyond computer-aided visualization, in relation to pictorial conventions.) However, Binnig and Rohrer switched over to the computer to generate images for publication. Other STM users followed suit when arranging their data, as the GUI (also developed in the 1970s, when the wave of non-lens-based scientific visualization technologies mentioned above occurred) afforded extensions of the interactive dynamics initiated by the data-gathering operations of the STM.

Characteristics of the GUI also structure interaction, affecting what is imaged and how the resulting image is communicated. The fact that GUI use is now ubiquitous in scientific and medical visualization technologies (as well as in other technologies) makes the interaction dynamics of the GUI seem almost invisible; however, the interaction dynamics the GUI encourages help trace the influence of the GUI in STM use and in the production of STM images. The GUI affords STM users interaction with on-screen visual objects that they can manipulate in order to explore, change, or image the data. Interaction can affect the visual objects, as well as the data, with which the user engages. As information visualization researchers Dominique Brodbeck, Riccardo Mazza, and Denis Lalanne explain, with the use of computers (and the GUI), “graphical objects are not static anymore but can be interactively manipulated and can change dynamically” (29). In GUI interactions, the user expects to respond to visual objects (such as elements of images, or icons) as behavioral cues for manipulation, not solely in order to understand their meaning, or signification (Drucker, “Reading Interface” 215).

GUI characteristics affect STM use as well as image processing, as the STM user interacts with the GUI screen image in multiple ways, including while conducting the experiment, interpreting data from the experiment, processing the image for publication, and processing the image further if the image is intended to function outside of scientific journal articles (such as in press releases or on research group web sites). The user’s interactions with other dynamics, such as x, y, or z direction, allow the on-screen image to operate as an interface, thus allowing the image to function as an experiment and to help marshal evidence. GUI interactions structure the space in which the user interacts with the image, but also create a space in which the user interacts with the data and through the image with the nanoscale. The range of possible interactions that GUI enables reinforces the multi-directional and manipulable affordances created by electron tunneling, raster scanning, and z-direction moves.

Images and Experiments

GUI interactions also extend the time the researcher spends with the image and the data, further intensifying the imaging process. The arrangement of data in an image on a computer screen allows the user to turn the image into an experimental interface, coordinating with and amplifying the multi-directional affordance of the tip-surface interaction, or into what Daston and Galison call the “image-as-tool” (414). For example, in one of the experiments mentioned above, Jaklevic used images created by the STM to monitor an experiment. As interfaces, the STM images Jaklevic produced to observe the behavior of gold became events through which the researchers conducted the experiment. The involvement of Jaklevic and other STM users with the image (and the data) intensified as the experimenters’ physical actions were incorporated into the repeated scanning and image construction required to conduct the experiment. The “Quantum Corral” image (Figure 4) is also a good example of how STM use becomes an event, as Eigler and other researchers created the nano-corral structure in order to conduct experiments on the electron standing waves they “caught” inside (Crommie, Lutz, and Eigler 218–220). The ability to use the STM image as an experimental space—to build structures (as in the case of the corral) or to observe events (as in Jaklevic’s experiment)—is part of what makes the STM a significant visualization technology. As Gimzewski et al. comment in the journal Surface Science: “Scanning probe microscopies (SPMs) . . . , and in particular scanning tunneling microscopy (STM) . . . , have revolutionized the real-space imaging of molecules, providing a detailed understanding of the ways in which they interact with each other and with adsorbents” (101). Gimzewski et al.’s observation highlights the focus on using the STM as a scientific tool for understanding interactions.

The STM encourages further interaction with the data through the image, as researchers engage in understanding the significance of the data. Amann and Knorr Cetina explain the mode of practice that involves assessing significance as one in which scientific visuals “act as a basis for sequences of practice rather than observation at a glance. They [visuals] are subjected to extensive visual exegeses, rendering practices which attempt to achieve the work of seeing what the data consist of” (90). GUI characteristics suit STM images well to this interpretive task, thus extending users’ interaction with the images. As one scientist I interviewed explained,

It’s [the experiment is] very much like putting something on a surface, seeing what it does, and trying to figure out exactly how it’s behaving, and there’s a lot of control experiments that you do between changing biases, changing tunneling currents, changing processing of the sample, to confirm like exactly what’s happening, how you’re looking at it.33

The ubiquity of the GUI, and the ability of researchers to use the same GUI to continue to interact with the image and explore and arrange data, while also making sense of the data and then producing images as evidence, all allow the user to go back and forth between Amann and Knorr Cetina’s observational, interpretive, and evidence-producing modes of practice. While microscope users have almost always manipulated samples being viewed to produce images, the GUI intensifies and structures the involvement of the STM user in all stages of image production (Keller, “Biological” 110). The STM user’s involvement creates a different relation to the image than if, for example, the user positioned the sample in an electron microscope and created an image: because the sample is destroyed in the process of viewing through an electron microscope, only a limited amount of interaction is possible.

The STM user’s involvement in image processing also encourages users to engage in continuing dynamics with the image and atoms; not only is it possible to engage with the sample again, but it is also possible to engage with the image again: the electronic screen affords the possibility of “refreshing” the image, just as the fact that the sample is not destroyed in STM scanning affords the possibility of “refreshing” the data through another raster scan. The GUI amplifies the effect of dynamic, manipulable images, data, and atoms. GUI dynamics thus structure interactions that invite multiple encounters, increasing a sense of immersion that heightens the feeling of engagement, as Rafaeli and Sudweeks explain. The affordance of the STM that allows users to repeatedly interact with samples through the image also reinforces engagement with the data, perhaps suggesting the individuality of atoms through that repeated interaction.

Emphasis on manipulation as opposed to observation alone enters discourses about how the STM and its images can be used. For example, in an article describing imaging with the scanning tunneling microscope that appeared six years after Binnig and Rohrer won the Nobel Prize for developing the STM, IBM physicist John Foster writes, “After imaging a molecule, the next step is to do something to it” (26). Discourses about imaging functions also emphasize manipulation. More recently, a 2006 National Academy of Sciences Board on Chemical Sciences and Technology report on challenges and possibilities for chemical imaging acknowledges expanded uses of images that allow scientists to “do something to” what they see (Board 14).34 Among important challenges for imaging, the report lists “understand[ing] and control[ling] complex chemical structures and processes”; “understanding and controlling self-assembly”; and “understanding and controlling complex biological processes” (Board 22–25). While the Board’s report does not focus entirely on the STM, the report includes the STM as one of the visualization tools that can help researchers meet these challenges (114–21).

Producing STM Images: Image Processing and the STM User’s Role

The interactive dynamics of the STM extend to the STM user’s interpretation of the data, as well as the production of images as evidence through the tools of image processing. A key point of image processing, as John Russ explains in his Image Processing Handbook, is that “image processing, like food processing or word processing, does not reduce the amount of data present but simply rearranges it” (xiii). Russ’s mention of arrangement in relation to image processing highlights one of the affordances of the GUI that becomes significant in structuring STM dynamics. Researchers using non-digital imaging processes may plot a graph from numbers or photograph experimental results, but those processes limit how much researchers can change the graph or photo after production without also changing the data. In contrast, the process of digital image production associated with the STM allows researchers to be involved longer with the image during and after data collection. Prolonged involvement includes interaction with the data during what Amann and Knorr Cetina articulate as the transformation of data into images for publication that function as a “way of visually reproducing the sense of ‘what was seen’” (114).

Like other digital imaging practices, the image production processes of the STM also incorporate the user into GUI practices that are contingent on interaction. Arranging information in visual form, and in the form of pixels, allows the STM user to continue the imaging process far longer than a developer can remain involved with optical film, extending the time the researcher participates in imaging. Michael Lynch’s explanation of an extended imaging process in his study of the digital image productions of astronomers provides a sense of what also occurs with STM images: “The real-time work of digital image processing involves a play at the keyboard, where images on the monitor are continuously recomposed by changing the palette, using touch-screen routines, plugging in parameters, and trying out different software manipulations” (“Laboratory Space” 72). The extended time that digital imaging processes require allows researchers to continue interacting with the data through the image and with different imaging techniques to develop images that contain experimental evidence.

To prepare an image for publication, researchers interact with the image to “clean it up,” often by filtering the data. In an article that appeared soon after Nature published Eigler and Schweizer’s images, in the IBM Journal of Research and Development, E. P. Stoll explains that raw data needs to be processed further due to interference, or “noise”35 (following Shannon’s division of information received into two categories, signal and noise), including noise that creates stripes “visible in nearly every real STM picture” (69). Therefore, some manipulation of the image, what Stoll calls “picture processing,” tends to occur (87). Some STM researchers do not present filtered images in journal articles; however, “cleaning up” data is a common scientific visualization practice (Brodbeck, Mazza, and Lalanne 31).36 Various ethnographers have discussed data processing as part of scientific image production. For example, Lynch explains data processing in astronomy images stating that raw data are “not treated as a pristine reflection of ‘reality’ but as the residue from a confused field where electronic noise, detector defects, ambient radiation, and cloudy skies mingle indiscriminately with the signal from a source object. The processed image is often considered the more accurate and ‘natural’ rendering” (“Lab” 70).37 The impetus for “cleaning up” data, then, derives from larger habits of scientific visualization, and also contributes to further encouragement of interaction.
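For illustration only, here is a hedged sketch in Python (using an invented test surface, not a reconstruction of Stoll’s procedure) of one common way scanning probe software reduces stripe-like noise of the kind Stoll describes: subtracting a fitted offset from each raster line so that line-to-line jumps drop out of the image.

import numpy as np

# Invented scan: a surface that varies only along the fast-scan (column) direction,
# plus a random offset on each raster line; the offsets appear as horizontal stripes.
rng = np.random.default_rng(0)
rows, cols = 128, 128
surface = np.tile(np.linspace(0.0, 1.0, cols), (rows, 1))
striped = surface + rng.normal(scale=0.1, size=(rows, 1))  # per-line offsets

# Line-by-line flattening: subtract each row's median so the per-line offsets
# (and with them the stripes) are removed, up to a constant shared by all rows.
flattened = striped - np.median(striped, axis=1, keepdims=True)

The point of the sketch is simply that, as Lynch and Stoll both suggest, the “cleaned up” image results from a deliberate rearrangement of the recorded values rather than from an untouched readout.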

To filter data composing the image, STM researchers can use multiple techniques that further structure users’ engagement with the data. For example, the data might reflect “drift” as the sample shifts over the time it takes for the raster to scan the surface: researchers may correct for drift, and then need to crop the picture.38 Indeed, filtering is so important that scientists Sutter et al. have presented a way to filter the image at the level of data recording, by using a semiconductor STM tip that limits electrons in certain energy ranges even before the data take image form (166101). In this case, scientists further incorporate the imaging software within the experimental apparatus, blending data gathering and image processing. Alternatively, a researcher might use filters to enhance the contrast between light and dark, or to smooth out the contrast between different sections of the sample to see details. As one scientist explains,

[I]n terms of daily usage, we generally . . . use other image manipulation techniques like taking the derivative of the surface. So then if you have a step [a point at which two uneven planes on the surface join like a stair step or terrace], if you take the derivative of the step, it just shows as a spike where the step is. So essentially, the contrast associated with having two different terraces at different heights goes away and so now those terraces appear like they’re at the same height.39

Filtering techniques allow researchers to sort through data in order to begin interpreting it: the researcher quoted above continues, stating, “these are actually the terraces, the same terraces we saw before, but here they’re all a uniform gray color now. And now you see that these patches, which is actually what we’re studying, you can clearly make out what actually turns out to be the atomic resolution in the patches.”40 Other filtering techniques include using Fourier transforms that show the frequency range of the data, helping to make the data measurable. Another researcher explains, “So if you look at a silver atom lattice [the structure the atoms create], I can get the periodicity of that [by taking the Fourier transform], take the inverse of that, and it gets me back to real space, and it will tell me that my lattice spacing is five Ångstroms.”41 These and other techniques allow researchers to highlight the information they consider important or to focus on more specific details, such as the size of phenomena. As researchers highlight data, they change how the image looks, yet do not alter the data set. STM users may even process images further for cover slides for presentations, journal cover images, or other scientific imaging contexts beyond scientific papers.42
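As an illustration of two of the moves described above, here is a minimal sketch in Python (with an invented scan line, not code from the interviewed researchers’ software): taking the derivative along the scan direction turns terrace steps into isolated spikes, and a Fourier transform of the atomic corrugation recovers the lattice spacing.

import numpy as np

# Invented scan line (heights in Ångstroms): three flat terraces separated by
# steps, with a weak 5 Å atomic corrugation superimposed on top.
x = np.arange(0.0, 300.0, 0.5)                      # lateral position, Ångstroms
terraces = 2.0 * (x // 100)                         # step edges at x = 100 and 200
corrugation = 0.05 * np.sin(2 * np.pi * x / 5.0)    # 5 Å periodicity
z = terraces + corrugation

# "Taking the derivative of the surface": each step shows up as a spike, and the
# height contrast between terraces falls away, as the researcher describes.
dz = np.gradient(z, x)

# Fourier transform of the corrugation: the strongest nonzero frequency gives the
# lattice periodicity; its inverse is the real-space spacing.
spectrum = np.abs(np.fft.rfft(corrugation))
freqs = np.fft.rfftfreq(len(x), d=0.5)              # cycles per Ångstrom
peak = freqs[np.argmax(spectrum[1:]) + 1]           # skip the zero-frequency term
print("estimated lattice spacing:", 1.0 / peak, "Å")  # roughly 5 Å in this example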

During the imaging process, the researcher also draws on other judgments and experiences so that the images created are, as Stoll comments, “aesthetically pleasing and informative and convincing” (76). Deciding to use color demonstrates some of the dynamics involved. False color has become a component of quite a few STM images, especially those appearing on journal covers and web sites (see Chapter 3 for more on color; also see Hennig, “Changes”). Many researchers apply color to highlight differences among pixel values. For example, researchers can highlight the three-dimensional appearance of the surface through assigning color- or gray-scale values to the various heights; researchers can also simulate illumination to introduce shadows or shading (Stoll 72). About the color choices in one image, one scientist explains,

It [color] certainly can help to clarify the presentation because you can accentuate contrast in regions . . . which contain the point you’re trying to make rather than have the reader be distracted by contrast related to things that you aren’t worried about right now . . . . [In this image, for example,] you see all these lines here for atomic steps on the surface. So, if you look at that in black and white, your colors, your gray-scale has to be stretched to accommodate all those steps, and these steps are actually larger than any of the usual features on the surface. So some kind of stretching of the scale of the image has to be done in order to see the fine features that generally you’re interested in. . . . So, if you look at the image just in gray-scale, it doesn’t appear so clear because all the contrast is taken out by the steps rather than by the little bumps on the surface that you want to study.43

Adding color to STM images helps viewers grasp slight variations in value; Russ explains that human eyes can pick out only about twenty to forty shades of gray in an image, but can differentiate between hundreds of colors (35). Russ also observes that color helps viewers refer verbally to parts of an image, since it is easier to name different colors than different shades of gray (35). Color thus allows viewers not only to distinguish differences in value, but also to import those differences into language—another medium.
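To illustrate the kinds of choices involved, the following hedged sketch in Python uses matplotlib’s stock colormaps and light-source shading (which may differ from the palettes and software the researchers quoted here used) to render the same invented height matrix in gray-scale, in false color, and with simulated illumination:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource

# Invented height matrix standing in for processed STM data.
y, x = np.mgrid[0:128, 0:128]
heights = np.sin(x / 6.0) * np.cos(y / 6.0)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))

# Gray-scale: value differences are carried by shades of gray alone.
axes[0].imshow(heights, cmap="gray")
axes[0].set_title("gray-scale")

# False color: the same values mapped onto a palette chosen by the researcher.
axes[1].imshow(heights, cmap="inferno")
axes[1].set_title("false color")

# Simulated illumination: shading derived from the local slope suggests a
# three-dimensional surface lit from one side.
ls = LightSource(azdeg=315, altdeg=45)
axes[2].imshow(ls.shade(heights, cmap=plt.cm.copper))
axes[2].set_title("simulated illumination")

plt.show()

The underlying matrix is identical in all three panels; only the mapping from values to colors changes, which is precisely why the choice carries rhetorical weight.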

While added color helps viewers verbally and visually distinguish an image’s particular characteristics, and so eases the entry of the presented value into scientific discourse, the researcher cannot rely on a color correspondence or order while choosing colors to use. Edward Tufte explains, “Despite our experiences with the spectrum in science books and rainbows, the mind’s eye does not readily give a visual ordering to colors, except possibly for red to reflect higher levels than other colors” (154). Therefore, adding color to an image is dependent on the user’s or group’s decisions and previous associations with color. In his study of the digital image production of astronomers, Lynch explains that the dependence on individual or group associations to assign meanings to colors can lead to the user’s reinforcement of her or his expectations of the sample through color choice: “Color becomes iconic when used for color enhancement or when signifying intensity or red shift. That is, the code is selected intuitively to suggest properties the object should have” (“Lab” 71).

In images of the nanoscale, the question of whether color correspondences exist is unsettled. Science studies scholars Arie Rip and Martin Ruivenkamp observe that color choices are not fully determined in the nano-researcher community: Some researchers see certain colors as commonly used for certain features or attributes and cite, for example, the default colors in imaging software (29). Figuring out color schemes involves the researcher’s preferences. One scientist I interviewed commented, “[Y]eah, so there’s some playing that goes on with false coloring and just looking at what highlights the features that you want to show. Those [colors] aren’t—of course, those aren’t real.”44 Another commented that he would try out different colors, save a file of those images, return to them later, and choose the version he liked best.45 Using color, then, engages STM image-creators in another set of interactions to determine how best to present the data to users. Like other filtering techniques, choosing color extends the process of interacting through the GUI; it also engages the user in practices that include scientific and extra-scientific cultural elements and conventions. By changing the appearance of the data and the image, chosen colors can also affect how the data is read.46 Because color choices are not based on a predetermined order whose meaning viewers will immediately understand, but instead on previous (and most likely unexamined) color associations as well as the experience of color in the image, they affect the viewer’s response to the image and his or her perceptions of what the image depicts.

Habituated Interactions: Coordinated Dynamic Effects

The operating dynamics of electron tunneling, movement in x, y, and z directions, and GUI use are separately identifiable in STM operations as well as in the dynamics of other visualization technologies. In the STM, however, the coordinated contributions of electron tunneling, movement in x, y, and z directions, and GUI use allow an intensification of the kinds of multi-directional interactivity and manipulability afforded individually by each dynamic. In the process of operating, these three dynamics constitute an instrument that encourages manipulability and interaction—interaction with and through images and data (as one might expect with a GUI), but also with atomic phenomena. As part of the coordinated dynamic of interaction, the user’s involvement intensifies, allowing for an increase in the user’s feelings of engagement (Rafaeli and Sudweeks). The affordances created by the coordinated dynamics of electron tunneling, movement in x, y, and z directions, and GUI use in turn create rhetorical possibilities that are, in part, fueled by the feeling of engagement: manipulation and interaction themselves become rhetorical. Thus, manipulation and interaction form a strand of persuasive possibility that helps structure a user’s experience within the STM, and also engagement with other technologies whose dynamics may operate similarly.

The increased engagement encouraged by the operating dynamics of the STM resonates with the last type of human-computer interaction, “flow,” further explaining how rhetorical experience with the STM is structured. As mentioned above, flow tends to include participation from both sides, so that neither computer nor user occupies the position of “sender” or “receiver.” “Flow” may most accurately describe how the human user interacts with the screen and microscope apparatus because users engage in repeated interactions as they manipulate the sample and the image—interactions that build on responses from the sample, user, and image, and that also take on a playful quality. Researchers’ descriptions of using the STM echo this characterization of flow. Scientist Gimzewski and artist Victoria Vesna explain in a collaborative article, “through images constructed from feeling atoms with an STM, an unconscious connection to the atomic world quickly becomes automatic to researchers who spend long periods of time in front of their STMs. This inescapable reaction is much like driving a car—hand, foot, eye and machine coordination becomes automated” (11). In describing the experience of using an STM to someone who has never used the microscope, one scientist I interviewed responded by making an explicit link to computer games:

Well, it’s kind of like a late-seventies video game . . . . You’re looking at a computer screen and you see little blobs on the screen which correspond to, you know, typically will correspond to a single atom or molecule on the surface that you’re looking at, and you know, on good days you can manipulate these atoms or molecules, and then it becomes a lot more like a video game because you actually—most software interfaces are mouse based. If you want to manipulate something, that involves usually moving the mouse and clicking and then moving it somewhere else and clicking again. So that’s almost like, you know, Ms. Pac Man or something like that . . . . So it can be quite fun.47

Another respondent answered, “it’s hypnotic, I would say . . . . You go in and you’re exploring a part of the world that nobody has seen because of that scale and you often don’t know what you’re gonna find and you usually don’t understand what you’re seeing.”48 Other scientists, such as Eigler, remark on the playful qualities of exploration with the STM and also on some of the excitement they feel as they manipulate atoms (Eigler, “From the Bottom Up” 425; also see Chapter 4). For these users, the hypnotic, game-like effects of using the STM may not only influence their perceptions of what they are doing (e.g., making STM use more “fun”), but also how the researchers perceive themselves and the instrument. As N. Katherine Hayles explains about her experience with virtual reality:

I can attest to the disorienting, exhilarating effect of the feeling that subjectivity is dispersed throughout the cybernetic circuit. In these systems, the user learns, kinesthetically and proprioceptively, that the relevant boundaries for interaction are defined less by the skin than by the feedback loops connecting body and simulation in a technobio-integrated circuit. (How We Became Posthuman 27)

Whether or not users experience feedback loops, as Hayles describes, or a merging with the machine, Hayles’s comment shows how users may attune themselves to operating beyond the confines of what is considered the boundary of the body and, in a transformative, prosthetic relation, merge or fuse—physically and mentally—amidst the dynamic that constitutes the interaction. The habits of interaction learned while using the STM may also be longer lasting; as Hayles comments, while learning how to interact in a virtual reality simulation, “the neural configuration of the user’s brain experiences changes, some of which can be long-lasting. The computer molds the human even as the human builds the computer” (How We Became Posthuman 27; see also Hayles, How We Think, Chapter 4). The interaction, the “flow,” may then create longer-lasting patterns; and, as Hayles suggests above, Manovich and others have pointed out that these habituated patterns may also affect how the user responds to other interactive situations and technologies beyond the STM (136). As Manovich comments, for example: “As we work with software and use the operations embedded in it, these operations become part of how we understand ourselves, others, and the world” (136).

Indeed, one benefit of studying specific interaction dynamics that reach beyond the particular instrument, such as electron tunneling, movement in x, y, and z directions, and GUI use, is to see how the dynamics that structure STM interactions may influence trends in instrument use, such as the various uses of computer media—and, through computer media, other scientific and medical visualization technologies. The fact that interaction dynamics such as rastering and the GUI are also common to other technologies supports the claim that they may indeed become habituated interactions, so that one expects to manipulate, to interact with atoms, much as one expects to interact with an onscreen digital image. The dynamics thus import their own rhetorical power to the use of the visualization technologies of which they are a part. It is no surprise, perhaps, given the presence of dynamics such as electron tunneling, movement in x, y, and z directions, and GUI use, that the STM includes such an emphasis on manipulability and interaction; what is unique is how these operating dynamics combine to help create the particular space of flow in which users are persuaded of the manipulable, almost tangible, atom. The coordinated operating dynamics encourage specific interactions from viewers in order to read the resulting images, as Chapters 2 and 3 discuss. The dynamics also affect the productions of the STM in ways that become important to understanding the rhetorical possibilities and changing discourses of nanotechnology in which atoms become tangible and able to be manipulated.

Interaction and Envisioning: Images, Information, and Atoms

In addition to the effects on the user discussed above, the combined dynamics of electron tunneling, movement in x, y, and z directions, and GUI use that help constitute the STM also help shape its productions: the image the STM produces, the concept of information implied in this process, and the nanoscale phenomena—such as atoms—that become visible through the imaged information. The following sketches trace the effects of the dynamics described above through the STM’s productions in order to elaborate on what is persuasive about the instrument and to show the significance of its dynamic constitution. The following discussion also articulates aspects of the space of visibility created by STM use, and suggests how STM dynamics may affect our understanding of the nanoscale and of our world, as well as how we may have also changed in order to see atoms and molecules.

Imaging Interaction: The STM Image
