CHAPTER 1
Introduction
1.1 RESEARCH GONE WILD
It is now quite common to see the phrase “in the wild” inserted into the title of a human-computer interaction (HCI) paper. Examples include “Doing innovation in the wild” (Crabtree et al., 2013a), “Being in the thick of in the wild” (Johnson et al., 2012), and “A robot in the wild” (Williams et al., 2014), as well as abbreviated versions such as “Leaving the wild” (Taylor et al., 2013) and “Calls from the wild” (Cappadonna et al., 2016). Besides attracting eyeballs (“the wild” sounds more intriguing than the more prosaic “An in situ study of…” or “An investigation into…”), this trend reflects a shift in how research is being carried out in HCI. Increasingly, researchers are going into people’s homes, the outdoors, and public places to study their reactions to, use of, and appropriation of a diversity of technologies that researchers have provided them with or placed in that location. Examples include exploring the co-creation of a street graph depicting changes in electricity consumption for a community (Bird and Rogers, 2010), the use of mobile devices for tracking people’s health (Consolvo et al., 2008), and exploring how robots can support the well-being of visitors in hospital wards (e.g., Dahl and Bolous, 2014). In addition, researchers are working and participating more with communities, designing and deploying technologies in situ that address the latter’s concerns or needs. Theory has also been rethought in terms of how it can inform, extend, or develop accounts of behavior that is situated in naturalistic settings and in the context of socio-technical practices.
Research in the wild (RITW) is generally considered an umbrella term to refer to how, what, and where research is conducted in naturalistic settings (Crabtree et al., 2013b). Its overarching goal is to understand how technology is and can be used in the everyday/real world, in order to gain new insights about: how to engage people/communities in various activities, how people’s lives are impacted by a specific technology, and what people do when encountering a new technology in a given setting. The output can be used to inform the development of new understandings, theories, or concepts about human behavior in the real world. This includes rethinking cognitive theories in terms of ecological concepts (e.g., situated memory) and socio-cultural accounts (e.g., the effects of digitalization on society). More specifically, RITW can be concerned with investigating an assumption, such as whether or not a technology intervention can encourage people to change a behavior (e.g., exercising more). It can be operationalized in terms of a research question to be evaluated in the wild, such as: will providing free activity trackers to employees encourage them to develop new social practices at work (e.g., buddying up, competing with each other) that help them become fitter and healthier? The perspective taken for this kind of RITW is to observe how people react to, change, and integrate the technology in question into their everyday lives over a period of time.
RITW is broad in its scope. Some have questioned the need for yet another term for what many HCI researchers would claim they have been doing for years. Indeed, applied research has been an integral part of HCI, addressing real-world problems by conducting field studies, user studies, and ethnographies, the outputs of which are intended to inform system design, often through community engagement. So, what is the value of coining another label? We would argue that, first, it is now widely used not just in HCI but also in a number of other disciplines, including biology and psychology, reflecting a growing trend towards pursuing more research in naturalistic settings. Second, the term is more encompassing, covering a wider range of research compared with other kinds of named methodological approaches, such as Action Research, Participatory Design, or Research Through Design. Initial ethnographic research, followed by designing a new user experience, together with the application and/or development of theory, technology innovation, and an in situ evaluation study, are often conducted all in one RITW project.
Hence, while the various components involved in RITW are not new, a single project often addresses several of them. Rather than focusing on one aspect (e.g., developing a new technology, advancing a new method, testing the effects of a variable, or reporting on the findings of a technology intervention), research in the wild typically combines a number of interlinked strands. Technology innovation can initially inspire the design of a new learning activity that in parallel is framed in terms of a particular theory of learning. Together, they inform the design of an in situ study and the research questions it will address.
RITW is agnostic about the methods, technologies, or theories it uses. Accordingly, it does not necessarily follow one kind of methodology, where one design phase follows another, but combines different ones to address a problem/concern or opportunity, as deemed fit. Sometimes, theory might be considered central and other times only marginal; sometimes, “off-the-shelf” technology is deployed and evaluated in an in situ study. Other times, the design and deployment of a novel device is the focus. In other settings, the focus of a project is how best to work alongside a community so that a democratic design process is followed.
The multiple decisions that have to be made when operationalizing a problem are often the main drivers, shaping how the proposed research will address identified questions, what methods/technologies to use and what can be learned. In summary, RITW is broadly conceived, accommodating a diversity of methodologies, epistemologies and ways of doing research. What is common to all RITW projects is the importance placed on the setting and context, conducting research in the everyday and in naturalistic environments.
1.2 HOW DOES RESEARCH IN THE WILD DIFFER FROM LAB EXPERIMENTS?
A long-standing debate in HCI is concerned with what is lost and gained when moving research out of a controlled lab setting into the wild (Preece et al., 2015). An obvious benefit is greater ecological validity: an in situ study is more likely to reveal the kinds of problems people will encounter and the behaviors they will adopt if they were to use a novel device at home, at work, or elsewhere. A lab study is less likely to show these aspects, as participants try to work out what to do in order to complete the tasks set for them by following the instructions given. They may find themselves having to deal with various “demand characteristics”—the cues that make them aware of what the experimenter expects to find, wants to happen, or how they are expected to behave. As such, the ecological validity of lab studies can be less reliable, as participants perform to conform to the experimenter’s expectations.
A downside of evaluating technology in situ, however, is that the researcher loses control over how it will be used or interacted with. In a lab, tasks can be set and predictions made to investigate systematically how participants manage to do them when using a novel device, system, or app. When in the wild, however, participants are typically given a device to use without any set tasks. They may be told what it can do and given instructions on how to use it, but the purpose of evaluating it in a naturalistic setting is to explore what happens when they try to use it in this context—where there may be other demands and factors at play. This can often mean that only a fraction of the functionality designed into the technology is used or explored, making it difficult for researchers to see whether what has been designed is useful, usable, or capable of supporting the intended interactions.
To examine how much is lost and gained, Kjeldskov et al. (2004) conducted a comparative study of a mobile system designed for nurses in the lab vs. in the wild. They found that both settings revealed similar kinds of usability problems, but that more were discovered in the lab than in the wild. However, the cost of running a study in the wild was considerably greater than in the lab, leading them to question “Was it worth the hassle?” They suggest that in the wild studies might be better suited for obtaining initial insights into how to design a new system, which can then feed into the requirements gathering process, while early usability testing of a prototype system can be done in the confines of the lab. This pragmatic approach to usability testing and requirements gathering makes good sense when considering how best to develop and progress a new system design. In a follow-up survey of research on mobile HCI using lab and in the wild studies, Kjeldskov and Skov (2014) concluded that it is not a matter of one being better than the other, but of when best to conduct a lab study vs. an in the wild study. Furthermore, they conclude that when researchers go into the wild they should “go all the way” and not settle for some “half-tame” setting. Only by carrying out truly wild studies can researchers experience and understand real-world use.
Findings from other RITW user studies have shown how they can reveal a lot more than usability problems (Hornecker and Nicol, 2012). In particular, they enable researchers to explore how a range of factors can influence user behavior in situ—in terms of how people notice, approach, and decide what to do with a technology intervention, either one they are given to try or one they come across—going beyond what can typically be observed in a lab-based study. Rogers et al. (2007) found marked differences in usability and usefulness when comparing a mobile device in the wild and in the lab; the mobile device was developed to enable groups of students to carry out environmental science, as part of a long-term project investigating ecological restoration of urban regions. The device provided interactive software that allowed a user to record and look up relevant data, information visualizations, and statistics. It was intended to replace the existing practice of using a paper-based method of recording measurements of tree growth when in the field. Placing the new mobile device in the palms of students on a cold spring day revealed a whole host of unexpected, context-based usability and user experience problems. Placing it in the palms of students on a hot summer day revealed a quite different set of unexpected, context-based usability and user experience problems. The device was used quite differently at the different times of year, when foliage and other environmental cues vary and affect the extent to which a tree can be found and identified.
Other studies have also found that people will often approach and use prototypes differently in the wild compared with in a lab setting (e.g., Brown et al., 2011; Peltonen et al., 2008; van der Linden et al., 2011). People are often inventive and creative in what they do when coming across a prototype or system, but they can also get frustrated or confused, in ways that are difficult to predict or expect from lab-based studies (Marshall et al., 2011). Van der Linden et al. (2011) also observed different behaviors—not evident from their lab-based studies—when investigating how haptic technology could improve children’s learning to play the violin at school. An in situ study of their MusicJacket system showed how real-time vibrotactile feedback was most effective when matched to tasks selected by their teachers to be at the right level of difficulty—rather than what the researchers thought would be right for them. Similarly, Gallacher et al. (2015) discovered quite different findings when they ran the same in the wild study in different places. Based on the differing outcomes from lab studies and in the wild approaches, Rogers et al. (2013) questioned whether findings from controlled settings can transfer to real-world settings.
In summary, in situ studies can provide new ways of thinking about how to scope and conduct research. Compared with experiments and usability studies, where researchers try to predict in advance performance and the likelihood or kind of usability errors, in situ studies nearly always provide unexpected findings about what humans might or might not do when confronted with a new technology intervention. Even when experiments are run in the wild, non-significant findings can be most informative. Part of the appeal of RITW is uncovering the unexpected rather than confirming what is hoped for or already known.
1.3 A FRAMEWORK FOR HCI RESEARCH IN THE WILD
RITW is eclectic in what it does and what it seeks to understand. Such an unstructured approach to research might seem unwieldy, lacking the rigor and commitment usually associated with a given epistemology. However, this broad church stance does not mean sloppiness or lowering of standards; rather, it can open up new possibilities for conducting far-reaching, impactful, and innovative research. To help frame RITW we have developed a generic framework. Figure 1.1 depicts RITW in terms of four core bases that connect to each other. These are regarded as starting places from which to scope and operationalize the research, in terms of:
1. technology,
2. design,
3. in situ studies, and
4. theory.
Each can inform the others to situate, shape, and progress the research. For example, designing a new activity (e.g., collaborative learning) can be done by working alongside others (e.g., participatory design), leading to the development of a new technology. The findings from an in situ study (e.g., how people search for information on the fly using their smartphones) can inform new theory (e.g., augmented memory). An existing theory (e.g., attention) can inform the design of a new app intended to be used to measure how people multitask in their everyday lives when using smartphones, tablets, and laptops. The design of a new technology (e.g., augmented reality) can be used to enhance a social activity in the wild (e.g., how families learn about the ecology of woodlands together). It should be stressed, however, that the RITW framework is not meant to be prescriptive, in terms of which base to start from, or what methods and analytic lens to use, when conducting research. The selection of these depends on the motivation for the research, its scoping, the available funding and resources, and expected outcomes.
Figure 1.1: Research in the wild (RITW) framework.
1.4 SCOPING RESEARCH IN THE WILD
There are many ways of conducting research in the wild. An initial challenge is to scope the research to determine what can be realistically discovered or demonstrated, which methods to use to achieve this, and what to expect when using them. Sometimes, it might involve deploying hundreds of prototypes in people’s homes (e.g., Gaver et al., 2016) to observe the varied adoptions and appropriations of many people rather than those of a few. Other times, it entails months of community-building and stakeholder engagement in order to build up trust and commitment before studying the outcome of a proposed intervention or disruption to behavior (e.g., changing habits to enable communities to reduce their energy use or increase their exercise). In other contexts, it can involve running a longitudinal study across geographical boundaries to determine how new tools encourage participation in different cultures, such as citizen science projects. The scoping will depend a lot on practical concerns, such as how much funding is available, the time of year, logistics, and gaining the trust of and acceptance in a community in order to get people on board to see the potential value of a proposed technology.
A number of methods are typically used in RITW, including observation, surveys, remote logging of people’s use of technology (e.g., monitoring their activity), and engagement with community members in a variety of contexts through the use of focus groups, co-design sessions, and town hall meetings—in order to hear their opinions and let them voice their concerns. Data that is collected using different methods is typically aggregated to provide a combination of quantitative and qualitative results. However, collecting multiple streams of data over several months can quickly multiply the outputs, making it difficult to tease out what might be causing particular effects or why people behave (or not) in certain ways. Much skill is involved in making sense of the different kinds of data without jumping to conclusions. There may be many factors and interdependencies at play that might be causing the observed effects or observed phenomena.
Despite this increase in uncertainty and lack of control, what is discovered and interpreted from RITW can be most revealing about what happens in the real world (Rogers et al., 2007; Marshall et al., 2011; Hornecker and Nicol, 2012). A benefit of RITW is greater ecological validity compared with extrapolating results from lab studies. Most significantly, RITW studies can show how people understand and appropriate technologies in their own terms and for their own situated purposes. Accordingly, RITW is increasingly being used to show ‘impact’ in terms of how new interventions have made a difference to a community (e.g., Balestrini et al., 2017), or how in the wild findings can provide empirical evidence for changing behavior or policy in society.
Thought Box: Beyond the Interface
Even though many of us still struggle to get the proverbial photocopier to copy (indeed, our computer science department was offering tutorials to all staff, from professors to Ph.D. students, earlier this year with the arrival of a new machine), a pressing problem HCI researchers are increasingly concerned with is how people interact with an ecology of interfaces. A core challenge is to enable people to switch between multiple interfaces and multiple devices. This framing requires understanding the context for why and how someone moves between them. Rather than being concerned with how best to support X (where X might be learning, working, socializing) using an individual device (e.g., a laptop, tablet, or smartphone), it is necessary to work out how to design across platforms so that people can fluidly use multiple tools and devices as they go about their everyday lives—picking up one, putting another down, or using several together in unison, by themselves or when interacting with others (Coughlan et al., 2012). What might seem obvious to do in a lab setting may not be obvious, and may even be counter-intuitive, in a real-world setting. A question this raises is how to frame such research, and which methods to use, when studying multi-device settings across time and place in the wild.
1.5 AIM OF THE BOOK
The aim of this book is to provide an overview of HCI research in the wild, illustrating how it can traverse theory, design, technology, and in situ studies. It covers the motivations, concerns, methods, and outcomes. As part of this endeavor, it addresses the challenges of conducting RITW, including the questions asked, the expectations, the trade-offs, the uncertainties, the forms of analysis adopted, the role of the researcher, and their conduct in wild settings.
The book is targeted at students and researchers who are new to the field of HCI and, more generally, to research methods, or at anyone who simply wants to learn more about research in the wild. It covers RITW by charting and critiquing the what, when, where, why, and how questions. In subsequent parts of the book, it examines the tools, methods, and platforms that have been imported, adapted, and developed to study user interactions in the wild, and how researchers have grounded concerns, problems, and new opportunities through their framing. It also outlines the benefits, limitations, impacts, and advances that have resulted from research in the wild.
1.6 SUMMARY
One of the motivations for conducting research in the wild is to demonstrate how a technology intervention can engage a community in a participatory manner. Underlying motivations include enabling people to collaborate, connect with each other, or join forces in order to raise awareness of and act upon an issue. Another rationale for conducting RITW is to deploy novel technologies in a setting in order to provoke a response (e.g., getting people to comment on a new display in a street), a new kind of interaction (e.g., exploring how one looks in an augmented public mirror), or social engagement (e.g., encouraging strangers to talk with one another in a public place). A further reason is to develop new understandings and theorizing about how people use technology in their everyday lives—based on the body of empirical work that demonstrates how behavior differs from, or is the same as, when using “older” and other kinds of technologies. In summary, RITW is becoming more widely accepted as a de facto way of conducting research for HCI, complementing but also questioning the validity of traditional lab-based research approaches.