CHAPTER 2

Planning a Study

THE DIRECTOR’S WORK


The art of devising any study is to match up what you are trying to achieve with the methods and resources at your disposal. While the film director may have a fairly blank canvas to work with, HCI is often about addressing pressing, practical problems or understanding future user needs. So a good place to start is with the purpose of a study.

Incidentally, most texts on qualitative methods do not start with the purpose: they typically start with a method, and then summarise (or leave the reader to infer) what that method is suitable for. We are taking a purpose-focused approach. From this perspective, the choice and application of an approach or technique are not right or wrong, but they are more or less well suited to the purpose of the study, and the aim is to select and adapt methods to be as good as possible for addressing that purpose. In Tables 2.1 and 2.2, we summarise some of the key features of the techniques and approaches covered in this book (see Chapters 4 and 6).

HCI is often problem-focused, delivering socio-technical solutions to identified user needs. Within this, there are two obvious roles for Semi-Structured Qualitative Studies (SSQSs): understanding current needs and practices, and evaluating the effects of new technologies in practice. The typical interest is in how to understand the world in terms that are useful for interaction design. This can often demand a “bricolage” approach to study design, adopting and adapting methods to fit the constraints of a particular situation. On the one hand, this makes it possible to address the most pressing problems or questions; on the other, the researcher continually has to learn new skills and be open to new possibilities. Experience with qualitative projects and techniques will bring a maturity that makes these possibilities and adaptations easier to handle.

Table 2.1: Key features of techniques


Table 2.2: Key features of approaches


2.1 SO, YOU’VE GOT THIS GREAT IDEA OR BURNING QUESTION…

Every study has a purpose. As noted already, within HCI there are two main roles for qualitative studies: the first starts by trying to understand people’s needs and the context within which a future technology might be used; and the second starts by assessing how well an existing technology is working and the effect that it is having on the people and the context. There are three common areas to focus on in HCI studies, as summarised below (see also Figure 2.1).

1. How people exploit technologies to support cognition (e.g., Hutchins, 1995; Attfield and Blandford, 2011), or how theories of emotion, cognition and interaction can be developed to inform design (e.g., McCarthy and Wright, 2005; Schneider et al., 2016).

2. How a particular kind of technology shapes people’s experiences (e.g., Palen, 1999; Kindberg et al., 2005). This includes ways in which a new product changes attitudes and behaviours and how the design of the product might be adapted to better support people’s needs and aspirations.

3. The nature of particular “work” (where “work” might be a leisure activity, paid work, home work or voluntary work), and how interactive technologies support or fail to support that work (e.g., Hartswood et al., 2003; Hughes et al., 1994; Mentis et al., 2013).

Figure 2.1: People use technology to achieve “work” (broadly conceived). The focus of HCI studies might be on or between any of these components.

Some (e.g., Crabtree et al., 2009) argue that the only purpose of an ethnographic study in HCI is to inform system design. Others (e.g., Dourish, 2006) argue that designers need a rich understanding of the situation for which they are designing, and that one of the important roles for ethnography is to expose and describe that context for design, without necessarily making the explicit link to implications for design. The best designs are usually ones where the design team has a rich understanding of the intended users of their products. We are often reminded of the power of intuitive design (e.g., Moggridge, 2007), but when the design team cannot have good intuitions about their users, they need other means to put themselves in the user’s shoes. Rich qualitative studies describing people, technology and work have a valuable role to play in HCI: in particular, for the design and evaluation of technology, agenda setting, theory creation and critique of predominant design paradigms.


Figure 2.2: Planning and preparation are of paramount importance to ensure that decisions about direction, sampling, editing, etc., result in a coherent and achievable project.

2.2 PLANNING AND PREPARATION

One way to think about the planning of a study is in terms of the PRET A Rapporter (PRETAR) framework (Blandford et al., 2008a). This is a basic structure for designing, conducting and reporting studies:

Purpose: every study has a purpose, which may be more or less precisely defined; methods should be selected to address the purpose of the study. The purpose of a study may change as understanding develops, but few people are able to conduct an effective study without some idea of why they are doing it.

Resources and constraints: all studies must be conducted with the available resources, also taking account of existing constraints that may limit what is possible.

Ethical considerations often shape what is possible, particularly in terms of how data can be gathered and results reported.

Techniques for data gathering need to be determined (working with the available resources to address the purpose of the study).

Analysis techniques need to be appropriate to the data and the purpose of the study.

Reporting needs to address the purpose of the study, and communicate it effectively to the intended audiences. In some cases, this will include an account of how and why the purpose has evolved, as well as the methods, results, etc.

To tackle a project competently you will need to build up relevant expertise in qualitative research and in the study domain. There is no shortcut to acquiring that expertise. Courses, textbooks and research papers provide essential foundations, and different resources resonate with (and are therefore most useful to) different people. Corbin and Strauss (2015) emphasise the importance of planning and practice: “Persons sometimes think that they can go out into the field and conduct interviews or observations with no training or preparation. Often these persons are disappointed when the data they are able to gather are sparse” (p. 37). Kidder and Fine (1987) describe the evolving focus of qualitative research: that one of the researcher’s frequent tasks is “deciding which question to ask next of whom” (p. 60). There is no substitute for planning, practice and reflecting on what can be learnt from each interview or observation session.

It is tempting to want to apply a precisely defined method (Yardley, 2000). But, in all probability, you will be faced with complexity that demands some improvisation along the way (Furniss et al., 2011a; Woolrych et al., 2011). We provide a series of checklists to help focus on particular decisions when designing, conducting and reporting a study.

As well as expertise in qualitative methods, the level of expertise in the study context can have a huge influence over the quality and kind of study conducted. When the study focuses on a widely used technology or an activity that most people engage in, such as time management (e.g., Kamsin et al., 2012) or in-car navigation (e.g., Curzon et al., 2002), any disparity in expertise between researcher and participants is unlikely to be critical. Where the study is of a highly specialised device, or in a specialist context, the expertise of the researcher(s) can have a significant effect on both the conduct and the outcomes of a study. At times, naiveté can be an asset, allowing one to ask simple but important questions that would be overlooked by someone with more domain expertise. At other times, naiveté can result in the researcher failing to note or interpret important features of the study context. In preparing to conduct a study, it is important to consider the effects of expertise and to determine whether or not specific training in the technology or work being studied is required before data-gathering starts.

Rather than trying to anticipate every possible eventuality, it is often best to do enough preparation, where what constitutes enough is likely to vary from one individual to another as well as from one study question to another. So, as a starting point, we summarise an idealised shape of a qualitative study (Figure 2.3): you start with a purpose (a research question), then you gather and analyse data to yield results that are then reported (in a dissertation, paper or client report). The study is shaped by various factors, including the expertise of the research team (discussed above), resources and constraints, the role of theory and ethical considerations (all discussed below).

Figure 2.3: An idealised shape of a qualitative study.

Although we first present steps sequentially and simply, you should be aware that this is an over-simplification: it is hardly ever possible to separate the components of a study and treat them independently. The style of data gathering influences what analysis can be performed; the relationship established with early participants may influence the recruitment of later participants; ethical considerations may influence what kinds of data can be gathered, etc. Managing these interdependencies can make qualitative research particularly challenging at times, but successfully juggling and trading them off also makes qualitative research interesting and rewarding. We return to this topic of interdependencies later.

2.3 BEING REALISTIC: RESOURCES AND CONSTRAINTS

Every study has to be designed to work with the available resources. Where resources are limited it is necessary to “cut your coat according to your cloth.” For example, if you have three months to conduct a Master’s project you will need to fit ambitions, and hence purpose, to what is possible with the available resources. Here are some things to consider when thinking about the time involved for a qualitative study:

Time to obtain ethical clearance will depend on how sensitive the study is and which review board is assessing it; you can often get local knowledge to help you plan this.

Time to recruit participants also depends on their situations and how interesting the topic is to them. Recruiting through a general subject pool can often be quick, but if you are seeking participants with specialist skills or knowledge, you should factor in significant time for this.

Interviews typically last under an hour, depending on their scope; few are much longer than that because attention drifts. Observation sessions can be longer (several hours per session, with comfort breaks).

Transcribing audio data typically takes 4–6 times as long as the recording, depending on data quality, lengths of silences and the transcriber’s typing speed. Transcribing video data takes significantly longer, depending on the level of detail being transcribed.

Analysis time can vary, depending on the quality of the data and the depth and focus of analysis, but is likely to take at least 2–3 days per hour of data.

In total, a Master’s dissertation of three months (typical in the U.K.) is likely to involve 10–15 hours of audio data, or equivalent. That does not sound like much, but is usually all that is feasible when all the other stages of the project (including literature review and writing up) are taken into account. It is therefore important that the data should be as high quality as possible.
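
The arithmetic behind that ceiling is worth making explicit. The sketch below (in Python) turns the rules of thumb above into a back-of-envelope time budget; the figures are simply the midpoints of the ranges quoted, chosen for illustration, and your own estimates will differ.

```python
# Back-of-envelope time budget using the rules of thumb above
# (illustrative midpoints only -- substitute your own figures).

AUDIO_HOURS = 12              # total recorded interview time, in hours
TRANSCRIPTION_RATIO = 5       # transcription takes 4-6x the recording length
ANALYSIS_DAYS_PER_HOUR = 2.5  # analysis takes 2-3 days per hour of data
WORKDAY_HOURS = 7             # hours of focused work per day

transcription_days = AUDIO_HOURS * TRANSCRIPTION_RATIO / WORKDAY_HOURS
analysis_days = AUDIO_HOURS * ANALYSIS_DAYS_PER_HOUR

print(f"Transcription: ~{transcription_days:.0f} working days")
print(f"Analysis:      ~{analysis_days:.0f} working days")
print(f"Total:         ~{transcription_days + analysis_days:.0f} working days")
# Roughly 40 of the ~60 working days in a three-month project would be
# committed before ethics, recruitment, data gathering, literature review
# and writing up are counted -- hence the 10-15 hour ceiling on audio data.
```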

As well as time, resource considerations need to cover funding, equipment available for data collection and analysis, availability of places to conduct the study, availability of participants and expertise. Here, we briefly discuss some of these issues, while avoiding stating the obvious (variants on the theme of “don’t plan to use resources that you don’t have or can’t acquire!”).

Where a study takes place can shape that study significantly. Studies that take place within the context of work, home or other natural setting are sometimes referred to as “situated” or “in the wild” (e.g., Rogers, 2012). Studies that take place in more controlled settings include laboratory studies (e.g., involving think-aloud protocol) and some interview studies. There are also intermediate points, such as the use of simulation labs, or the use of spaces that are similar to the work setting, where participants have access to some, but not all, features of the natural work setting. Observational studies most commonly take place “in the wild,” where the “wild” may be a workplace, the home, or some other location where the technology of interest is used. Interview studies may take place in the “wild” or in another place that is comfortable for participants, and quiet enough to record and to ensure appropriate privacy and safety for both participant and interviewer. Of course, there are also study types where researcher and participant are at a distance from each other, such as diary studies and remote interviews.

Tools for data recording include notes, audio recording, still camera, video camera and screen capture software. All of these can be useful, depending on the situations and purpose for which data is being gathered.

Hand-written or typed notes can be most effective in noisy environments, or where there are sensitivities about any other form of recording. Care needs to be taken that the act of note-taking does not disrupt the interaction. For example, if particular actions are noted in an observation session, participants may be aware of every time a note is taken, and hence self-conscious about the activity that is provoking the note-taking (Blandford et al., 2015a).

Audio recording is often most suitable for interviews and focus groups: if you are working on your own, it can be difficult to follow and facilitate the interview while also noting down all the important points. Audio recording and transcription are also needed where the details of the specific words and phrases people use are important. Audio recordings are preferable to note-taking particularly when the study is exploratory, so that information that might be overlooked early on but turns out to be important later is not lost, or when the data is rich enough to support multiple analyses. For example, Rajkomar et al. (2015) originally gathered data on people’s situated use of home haemodialysis technology in order to test and extend the DiCoT approach (Furniss and Blandford, 2006) to analysing a system in terms of Distributed Cognition (DCog: Hollan et al., 2000). Within the initial interview plan, we intentionally also addressed questions of basic usability and how people stay safe on home haemodialysis (Rajkomar et al., 2014). Another, unanticipated, theme within the data was how people cope with managing their own dialysis at home, including, but not limited to, how they troubleshoot when the technology goes wrong (Blandford et al., 2015b). It would not have been possible for us to do this follow-up Thematic Analysis without full audio transcriptions of the interviews.

Still photographs of activities performed and equipment/technology used provide a permanent record to support analysis and for illustrative purposes in reports. This can be particularly useful when the equipment has been adapted by users, or for recording where technology was used or how it was configured. For example, Figure 2.4 shows a series of photos of glucometers used in a hospital that supported analysis of the system in terms of DCog (Furniss et al., 2015).

Figure 2.4: Glucometer use in a hospital. The same device is shown stand-alone (left), as part of a blood glucose testing kit (middle) and as part of a broader blood glucose testing system (right).

Video recording can be valuable for capturing the details of an interaction, but can be intrusive. Recording video can be particularly useful for capturing micro-interactions and interaction that involves the use of equipment or technology in a particular physical space (e.g., in a family car—see Cycil et al., 2014) or involves multiple users interacting with technology (see Marshall et al., 2011).

Screen capture software can give a valuable record of user interactions with desktop systems. For capturing rarely performed interactions, or interactions over an extended time period (e.g., how a document is written over a period of days or weeks), it may be possible to ask participants to record their own screens or to take screenshots (e.g., Karlson et al., 2010).

Particular qualitative methods may require specialist equipment for data gathering. Examples include the use of cultural probes (Gaver and Dunne, 1999), which involve participants receiving a set of tools such as cameras, notebooks, pens and sticky notes with which to record their experiences, or engaging participants in keeping video diaries. Other specialist tools may sometimes add value; for example, eye gaze tracking, motion capture or activity tracking may add useful quantitative data to complement the qualitative in some studies (see discussion of mixed methods in Chapter 6).

When it comes to data analysis, colored pencils, highlighter pens and paper are often adequate for studies that involve only a few hours of data. For larger studies, computer-based Qualitative Data Analysis tools (e.g., NVivo, MaxQDA, Dedoose or ATLAS.ti) can help with managing and keeping track of data, but require time to learn to use effectively. These tools can help track large quantities of quotations, codes, links and memos. They can also speed up the process of analysis; for example, they allow you to rapidly change the name of every instance of a particular code, or list every quotation with a particular code. However, they do not actually do any of the sense making themselves—that is left to the researcher.
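
To make concrete what such tools keep track of, without describing any particular product’s interface, the following is a minimal, hypothetical sketch in Python of the underlying bookkeeping: quotations tagged with codes, a code renamed everywhere at once, and every quotation with a given code retrieved. All names and data here are invented for illustration; real QDA packages add far more (memos, links, hierarchies), but the division of labour is the same: the software does the clerical work, the researcher does the sense making.

```python
from dataclasses import dataclass, field

@dataclass
class Quotation:
    participant: str                        # e.g., "P3"
    text: str                               # the quoted passage
    codes: set = field(default_factory=set) # codes applied to this quotation

class CodeBook:
    """Toy stand-in for the bookkeeping a QDA tool does for you."""

    def __init__(self):
        self.quotations = []

    def add(self, participant, text, codes=()):
        self.quotations.append(Quotation(participant, text, set(codes)))

    def rename_code(self, old, new):
        # Rename every instance of a code in one pass.
        for q in self.quotations:
            if old in q.codes:
                q.codes.remove(old)
                q.codes.add(new)

    def with_code(self, code):
        # List every quotation tagged with a given code.
        return [q for q in self.quotations if code in q.codes]

# Illustrative use (invented data):
book = CodeBook()
book.add("P1", "I always double-check the machine before connecting.", {"safety"})
book.add("P2", "The alarm wording confuses me every time.", {"usability", "safety"})
book.rename_code("safety", "staying safe")
for q in book.with_code("staying safe"):
    print(q.participant, "-", q.text)
```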

As well as the costs of equipment, the other main costs for studies are typically the costs of travel and participant fees. Within HCI, there has been little discussion around the ethics and practicality of paying participant fees for studies. In disciplines where this has been studied (most notably medicine), there is little agreement on policy for paying participants (e.g., Grady et al., 2005; Fry et al., 2005). The ethical concerns in medicine are typically much greater than those in HCI due to the level of potential harm. In HCI, it is common practice to recompense participants for their time and any costs they incur, with cash or gift certificates, without making the payment so large that people are likely to participate just for the money.

Often, the biggest constraint is access to a study setting or availability of suitable participants; we devote the next chapter to this topic.

2.4 ETHICS AND INFORMED CONSENT

Traditionally, ethics has been concerned with the avoidance of harm, and most established ethical clearance processes focus on this. “VIP” is a useful mnemonic for the main considerations:

Vulnerable participants

Informed consent

Privacy and confidentiality

Particular care needs to be taken when recruiting participants from groups that might be regarded as vulnerable, such as children, the elderly or people with a particular condition (illness, addiction, etc.).

In providing informed consent, participants should be told the purpose of the study, and made aware of their right to withdraw at any time without reason and without them being at any disadvantage. If it is not possible to inform participants of the full purpose of the study at the outset (e.g., because this might bias their behaviour and defeat the object of the study), then they should be debriefed fully at the end of the study.

It is common practice to provide a written information sheet outlining the purpose of the study, what is expected of participants, how their data will be stored, used and, if applicable, shared, and how findings will be reported. Depending on the circumstances, it may be appropriate to gather either written or verbal consent; if written, then the record should be kept securely, and separately from the data. Preece et al. (2015) suggest that requiring participants to sign an informed consent form helps to keep the relationship between researcher and participants “clear and professional.” This is true in some situations, but not in others, where verbal consent may be less disruptive for participants. For example, verbal consent may work better when observing someone briefly as they go about their work, where obtaining written consent would disrupt that work disproportionately.

With the growing use of social media, and of research methods making use of such data (e.g., from Twitter or online forums), there are situations where gathering informed consent is impractical or maybe even impossible. In such situations, it is important to weigh up the value of the research and how to ensure that confidentiality and respect are maintained. Bear in mind that although such data has been made publicly available, the authors may not have considered all possible uses of the data and may feel a strong sense of ownership of it. If in doubt, discuss possible ethical concerns with experts in research ethics.

Privacy and confidentiality should be respected in data gathering, management and reporting. Some of this is covered in data protection laws and information governance procedures. It is good practice to anonymise data as soon as is practical, i.e., when taking notes or transcribing audio. This means replacing people’s names with a participant number (e.g., “P3”) or pseudonym, and removing other proper nouns that have the potential to personally identify participants (e.g., company names, specific places, such as the name of a small town, etc.). It may be necessary to retain contact details securely so that it is possible to inform participants of the outcome of the study later, but this would normally only be done with informed consent, for participants who want to know more.
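
As a small illustration of anonymising at transcription time, the sketch below replaces identifying proper nouns with neutral placeholders. The names and the substitution list are invented for the example; in practice the list is built up as you transcribe, and it (like any re-identification key) should be stored securely and separately from the anonymised transcripts.

```python
import re

# Hypothetical substitution list built up during transcription.
SUBSTITUTIONS = {
    "Sarah": "P3",                        # participant name -> participant number
    "Greenfield Clinic": "the clinic",    # specific place -> generic description
    "Acme Dialysis": "the manufacturer",  # company name -> generic description
}

def anonymise(text):
    """Replace identifying proper nouns with neutral placeholders."""
    for name, placeholder in SUBSTITUTIONS.items():
        text = re.sub(re.escape(name), placeholder, text)
    return text

raw = "Sarah said the training at Greenfield Clinic was run by Acme Dialysis staff."
print(anonymise(raw))
# -> "P3 said the training at the clinic was run by the manufacturer staff."
```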

Ethics goes beyond the principle of doing no harm: it should also be about doing good. There must be some value in the research, otherwise it is not worth doing. This might require a long-term perspective: understanding current designs and user experiences to guide the design of future technologies. That long-term view may not give research participants immediate pay-back, but where possible there should be benefits to participating in a study. In our experience, participants have responded positively when we explain that findings from their study will not be used to inform the design of the technology they actually use, but to make this sort of technology easier to use for people in the future.

It is important to review the safety of the researcher as well as that of participants. This commonly involves doing a risk analysis. For example, researchers should meet participants who are not already known to them in public spaces wherever possible. For home studies, it is generally good practice to work in pairs, or to consider other ways of mitigating any risks.

2.5 ACCOMMODATING RESEARCHER BIASES AND PREEXISTING THEORY WHEN PLANNING A STUDY

In addition to resources, constraints and ethical considerations, there are various less tangible factors that shape any study. Probably the most important are the ways that pre-existing theory can be used to inform data gathering, analysis and reporting of a study, and also the biases, understanding, and experience of the researcher(s) involved in the project (Denzin and Lincoln, 2011).

No researcher is a tabula rasa: each comes to a study with pre-existing understanding, experience, interests, etc. Hertzum and Jacobsen (2001) studied how several analysts independently identified usability difficulties from the same video data, in which other participants had been thinking aloud while interacting with a user interface. There was significant variability in which issues the participating analysts identified. Hertzum and Jacobsen considered this to be “chilling”: there is no objective, shared understanding, even with an activity as superficially simple as identifying usability difficulties from think-aloud data. If this is true when analysing pre-determined data with a pre-defined question, the effect is clearly even greater when the researcher is shaping the entire study.

For the individual, it may be difficult to identify or articulate many of the factors that shape the research they conduct, but one obvious factor is the role of theory in a study. Theory may shape the research from the outset, come into play during the analysis, or be most prominent towards the end of a research project. In Chapter 6, we discuss how theory may be introduced in an analysis, and how it can contribute to the generalisability of findings. Here, we focus on how it may be used to shape a study at the planning stage.

Theory may be introduced early into a study: either to test an existing theory in a new context or to better understand the study context while having a focus that helps to manage its complexity. A theory can act as a “lens,” providing sensitising concepts that help to shape and focus data gathering and impose a partial structure on the data that is gathered. Similarly, a theory can help in shaping analysis.

Where this is done, it is important not to trust an existing theoretical framework unquestioningly, but to test and extend that framework: are there counter-examples that challenge the accuracy of the existing framework? Are there examples that go beyond the framework and introduce important extensions to it? Many studies that introduce theory early end up extending or refining the theory and also making the study more manageable. For example, when studying the interactive behaviour of lawyers when looking for information on the Web (Makri et al., 2008a), we shaped our approach to data gathering and analysis around the work of Ellis et al. (1993) and Ellis and Haugan (1997). While this was not our intention at the beginning of the study, as our study evolved we noticed that many of the interactive behaviours the lawyers displayed were highly similar to those identified by Ellis and colleagues in other disciplines (and when using electronic library catalogues rather than the Web). Later data gathering and analysis focused on Ellis’s model. However, rather than assume that all of Ellis and colleagues’ findings applied in this new context, we questioned their total fit. This resulted in the existing theory being enriched by both extending and refining previous findings. A different example of contributing to theory arose from our attempts to apply DCog to analyse a control room. DCog is a theoretical perspective that views cognition as being distributed in the world, rather than residing solely in the mind, recognising the role of artefacts and information flow in supporting cognition. We found the theory lacked a suitable method to apply it, so we developed a method called DiCoT (Distributed Cognition for Teamwork) to fill this gap (Furniss and Blandford, 2006). Sometimes contributions to theory and method can be greater than the insights for the context under study.


Figure 2.5: A view of a control room with shared information artefacts that shaped the development of DiCoT (Furniss and Blandford, 2006).

2.6 SUMMARY AND CHECKLIST: PLANNING A STUDY

Just as the director of a documentary film is driven by their vision and has to plan what and where to film within their constraints before starting, you have to think about your study’s purpose and plan before you start to gather data. You might review relevant literature and do a pilot study early on to check your study design or to shape your approach. You might consult with a specialist user group to check your plans are feasible. You might need to review the focus of your study or approach as a result. But without a plan, a study is unlikely to be robust or deliver useful outcomes. There comes a point when you simply have to head off and explore, because if you knew ahead of time what you were going to find in the study, there would be no point in doing it. But it is wise to know broadly what you want to study and how before you begin. It is also wise to write up the study method ahead of time, to capture what you propose to do; this “methods” section can be reviewed and revised later if you discover that what you had intended was not in practice feasible or appropriate.

Checklist A summarises issues that need to be considered early on. You should also be mindful of quality considerations (discussed in Chapter 8) from the outset, to make sure that you conduct and report the best possible study.

Checklist A: Planning a SSQS

Purpose: What is the purpose of the study? Why is it an important study to conduct? What gap in knowledge is it filling?

Resources and constraints: What resources do you have to work with? What constraints limit possibilities? What training and preparation does each researcher need? What expertise does the researcher bring to the project? Do you need advocate(s) within the study setting? How will you identify and work with them? See Chapter 3. What is the approach to sampling participants? How will participants be recruited? See Chapter 3. Where will the study take place? To what extent, and how, will theory play a role in data gathering, analysis and/or reporting?

Ethical considerations: Are there important ethical considerations that need to be addressed (e.g., vulnerable participants)? How will you ensure that participants benefit as far as possible from participation? What will participants be told about the study when giving informed consent? How will participants be debriefed about the study once it is completed? How will data be stored and anonymised? How will participants’ engagement be reported? If participants read the report, will they feel well represented, or is there a risk that they might feel used or misled? See Chapter 7. Have you considered your own safety and health, and made sure that these are addressed well (e.g., considering the risks of lone working)?

Techniques for data gathering: How will data be gathered (interviews, observation, etc.)? How will it be recorded? If multiple methods are to be used, how will they be sequenced and coordinated? How interleaved will participant recruitment, data gathering and analysis be?

Analysis of data: How will data be analysed? How will the analysis be validated, or how will quality be ensured and assessed?

Reporting: Who is the audience? How will findings be reported?