Remote Research - Tony Tulathimutte

Contents

Chapter 2

Moderated Research: Setup

Gearing Up: Physical Equipment

Doing a Pilot Test Right Now

Preparing for a Real Study

Drafting the Research Documents

Chapter Summary

Let’s get down to setting up a typical moderated one-on-one study. We’ll walk you through gathering all the equipment and software you’ll need and explain how to prepare for your first research session. To keep things moving, we’ll stick to bare-bones basics in this chapter, explaining the simplest way to set up a generic moderated research session. Later, you’ll learn about other tools, approaches, and strategies you can use to develop a study that best suits your particular needs.

Gearing Up: Physical Equipment

Even though you don’t need a lab to do remote research, you’ll still need some equipment to make calls, see your users’ screens, and record the sessions, and there are also a few tools that can make your life easier. Fortunately, you can find a lot of these lying around most offices (see Figure 2.1).


Figure 2.1 http://www.flickr.com/photos/rosenfeldmedia/4219596492/

A standard remote lab setup: a two-line desk phone, laptop with wired Internet connection, a second monitor, and a phone headset. You may also need a phone tap and amplifier to record audio, depending on your recording setup.

 A computer. Laptop or desktop—doesn’t matter. Just be sure that the system specs exceed the minimum requirements of all the screen sharing, chat, recording, and note-taking software you’ll be running. And be especially sure that your computer is compatible with the screen sharing solution you choose; lots of screen sharing solutions are currently PC-only, although this will probably change. And be doubly sure to have plenty of hard drive space free.

 A wired, high-speed Internet connection. Wireless connections are too unreliable to run screen sharing software with 100% confidence. It’s very important to have an uninterrupted wired connection—the faster, the better.

 A landline, two-line, touchtone desk phone. It has to be a landline desk phone, for a few reasons: batteries won’t last across multiple testing sessions, a phone tap (if you need one for recording) won’t work with a mobile or cordless phone, and most importantly, it’s more stable and reliable. And it has to be a two-line phone if your setup requires you to conference call with both your user and your observers. (Alternatively, you could use a VoIP service like Skype if you’re prepared to deal with the spottiness and instability of the typical Skype call, circa 2009.)

 Headset for the phone. Your neck will thank you.

 Two monitors. The screen sharing window alone takes up nearly an entire monitor, so if you actually want to be able to see your notes, chat windows, or anything else on your computer, you’ll need the screen space.

 Phone recording adaptor or a phone tap and amplifier. You’ll need these tools to record the phone conversation if you’re speaking through your landline telephone (and not a VoIP service).

 Peace and quiet. It’s crucial to test in a place where you can talk freely at a normal volume and won’t be disrupted, like an empty office or meeting room. Background noise can be unbelievably distracting to both the moderator and participant.

Doing a Pilot Test Right Now

So now that you’ve got your equipment together, why wait? Let’s do a simple 10-minute pilot test to get your feet wet. This pilot test will simulate a basic moderated session, not including the recruiting process (which is described in the next chapter).

The crucial piece of software you’ll need is a screen sharing application, which will allow you to see what’s on your user’s computer screen during the session. There are many screen sharing options (described at length in Chapter 8, “Remote Research Tools”), but for now we’ll stick with Adobe’s Acrobat Connect. It supports observation and chat, as well as webcam sharing (which we won’t get into here). Connect doesn’t require users to install anything on their computers; all they have to do is visit a Web site that you’ll direct them to, which means that you can get around most firewalls, antivirus software, and other barriers that might prevent you from running the screen sharing. It’s compatible with all OS platforms, so you, your observers, and your participant can be on a Mac, Linux, or PC. And finally, it comes with an optional conference-calling service that you can use to have the study participants and observers on the same line.

Now, you’ll need a pilot participant. Grab anyone at all—your coworker, sister-in-law, high school lacrosse coach—and just tell him/her in advance to be waiting near a phone and a computer with high-speed Internet access.

OK, time to get connected with your participant. For brevity’s sake, we’ll assume you can follow onscreen instructions:

1 Sign up for a free trial of Adobe Connect at the Adobe Web site (google “Adobe Connect”) and follow the instructions there to begin a session. You should end up at the Connect session window (see Figure 2.2). The first time you use Acrobat Connect, click the Share My Screen button. That will trigger the download of a little plug-in that will allow you to use the screen sharing function. (This is the only time you’ll have to do it; after that, it should always launch using the plug-in, rather than in a tab of your Web browser.)

2 Call your pilot user.


Figure 2.2 http://www.flickr.com/photos/rosenfeldmedia/4218829603/

The Adobe Acrobat Connect session window: the big window is the user’s screen, and the smaller windows, from top to bottom, are the user’s webcam (optional), the participant list (including observers), chat, and notes.

3 Set up your user’s screen sharing. Tell your user to start the screen sharing session by going to connect.acrobat.com/XXXXX (where XXXXX is your Connect account name) and join as a guest. (If you’re tech-savvy, you can shorten the URL using http://tinyurl.com or set up an easy-to-type redirect link in advance and tell your user to go there. When you’re reading a URL over the phone, it’s easy for listeners to get the letters confused, so the shorter, the better. It’s usually faster to do a careful “T-as-in-Tom, O-as-in-Orange” spelling to avoid errors.) Tell the user to click on the arrow in the upper-right corner of the Connect window to download the screen sharing plug-in and then to click on the Share My Screen button that appears. You should now be able to see the user’s screen in your session window.

4 Begin the study! It’s just a pilot test, so do whatever you like: have the user show you how he/she uses your interface, watch him/her fill out a survey…anything. When you’re done, click the End Session button in the session window, and the screen sharing will end. No uninstalling is necessary for Adobe Connect. Here, you should practice the things you say to the participant when wrapping up the study, the most important being: “Thank you so much for participating, I really appreciate it,” and “I can no longer see your screen and will not be able to do so again.” We’ll cover more of these kinds of details later in the chapter. For now, you’re all done.
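If you find yourself reading session URLs aloud often, the “T-as-in-Tom” spelling mentioned above is easy to script so you never fumble for a word mid-call. Here’s a minimal sketch in Python; the word list is our own arbitrary choice, not any official alphabet:

```python
# Generate a "T-as-in-Tom" phonetic spelling for reading a URL aloud.
# The word choices below are an arbitrary example, not a standard alphabet.
WORDS = {
    "a": "Apple", "b": "Boy", "c": "Cat", "d": "Dog", "e": "Echo",
    "f": "Frank", "g": "George", "h": "Henry", "i": "India", "j": "John",
    "k": "King", "l": "Lima", "m": "Mary", "n": "Nancy", "o": "Orange",
    "p": "Peter", "q": "Queen", "r": "Robert", "s": "Sam", "t": "Tom",
    "u": "Union", "v": "Victor", "w": "William", "x": "X-ray",
    "y": "Yellow", "z": "Zebra",
}

def spell_aloud(text):
    """Return a spoken-spelling string for each character in text."""
    parts = []
    for ch in text.lower():
        if ch in WORDS:
            parts.append(f"{ch.upper()} as in {WORDS[ch]}")
        elif ch.isdigit():
            parts.append(f"the digit {ch}")
        else:
            parts.append(f"the symbol '{ch}'")
    return ", ".join(parts)

# Spell out a hypothetical Connect account suffix before the call
print(spell_aloud("to"))  # T as in Tom, O as in Orange
```

Print the spelling for your account suffix once before the sessions start and keep it next to your facilitator guide.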

Preparing for a Real Study

So now you know what you need for a basic study: computer, high-speed Internet, phone, and some screen sharing software. Now let’s back up and talk more in detail about the function of each tool.

Screen Sharing

Usually, the hardest part of remote research is getting your users to successfully share their screen with you—that includes both obtaining their consent and getting them to set up the screen sharing tool. As with so many technical pursuits, the more attention you give to the setup ahead of time, the easier your life will be when you actually start calling people.

First, know this: about 15–30% of all the remote sessions you attempt to set up will fail for one reason or another (see Chapter 10, “The Challenges of Remote Testing,” for more details on the kinds of challenges you’ll encounter). That’s just the wild world of the Internet, and this is why a lot of the articles on remote testing focus on the nuts and bolts of conducting sessions; a few years will have to pass before the technical details become less of a pain.

Choosing the right screen sharing tool can be overwhelming when you’re not familiar with the pros and cons of each tool. Most practitioners who do lots of remote testing eventually choose one tool and stick with it, but you should try a couple before figuring out which one works for you (most offer free trials). Broadly, the most important considerations for a screen sharing tool are its compatibility and ease of setup for both the moderator and the participant. Browser and OS compatibility have a big impact on whom you’ll be able to talk to, and you probably don’t want to arbitrarily limit your recruiting pool by who can run your screen sharing software (unless the interface you’re testing happens to be platform specific, too, in which case it doesn’t matter). You want to make the setup process as quick and smooth as possible so that you don’t prolong the session or frustrate the user with tedious instructions.

For platform versatility, you can’t beat browser-based screen sharing solutions like Connect, GoToMeeting, and LiveLook, since they only require users to run a fairly recent browser, and the OS doesn’t matter. The only minor downside is that these solutions currently require users to have a Java-enabled browser; most browsers are, but if a user’s isn’t, getting Java set up can be time-consuming, and the user may not even want to.

The solutions that aren’t browser based are often the ones that require users to download and run an executable file or have a certain program installed (Skype, iChat). Downloading and installing files can be a security issue for users who are behind corporate firewalls, as well as a trust issue for users who don’t like the idea of installing things at the command of a voice on the other end of a telephone. If you decide to use screen sharing software that requires any heavy installation or user registration, you’ll probably want to arrange in advance for your users to do that.

If you plan on having people observe your sessions remotely, screen sharing features may make that a lot easier. Most tools limit the number of observers; make sure that yours can support as many as you need. A handful of tools enable chatting between the observers and moderator, which is handy, but the participant should always be excluded from the chat, to keep distraction to a minimum. If it’s not possible to block the participant from the chat, you can use an instant messaging service to chat with clients behind the scenes.

On a final practical note, there’s cost to consider. Pricing structures differ from tool to tool but generally fall into a handful of categories. Adobe Connect and GoToMeeting offer subscription plans, which are best if you intend to conduct multiple usability testing sessions over the development of your interface (which we encourage), and they also offer free trials if you just plan on doing a one-off study. Tools like Skype (for international calls) charge just a few cents a minute, which is cost effective if you don’t plan on conducting usability studies regularly, but they make you buy a set number of minutes in advance. Watch out for additional charges you’ll incur if the screen sharing lacks a particular feature you need; for instance, LiveLook currently lacks support for observers to listen in on sessions, so you’d have to pony up for a conference call service, which can be pricey. And if you’re using your phone to dial internationally instead of a voice chat service like Skype, watch those long-distance charges.

For the full story on the state of remote research tools, check out Chapter 8. For now, Table 2.1 provides a quick comparison of some handy screen sharing solutions.

Table 2.1 Screen Sharing Tools at a Glance (Circa Mid-2009)*

http://www.flickr.com/photos/rosenfeldmedia/4287138344/


* See Chapter 8 for more details.

Recording

Session recordings are useful if you want to be able to document, share, and closely analyze the testing sessions after the fact (bear in mind, though, that recording is not strictly essential for the purposes of running the study). Unlike in-person testing, in remote testing you can usually capture everything important about a session with just a simple software tool. Some screen sharing tools, like GoToMeeting and UserVue, have built-in recording functionality, and some Skype plug-ins will record Skype video as well (for example, eCamm’s Call Recorder for Mac). If recording functionality isn’t built in, you’ll need a screen recording application that allows you to capture both video and audio from your computer. Techsmith’s Camtasia Studio is a common one, although there are free applications like CamStudio as well. Most recording applications produce either WMV or AVI files. Recording is probably the most computing-intensive operation on the moderator’s end, especially if you’re recording a large screen area, so be sure that whatever you use to record doesn’t noticeably slow your computer.

Taking Notes

While some people are still most comfortable with handwritten notes, there are advantages to taking notes on the computer during remote testing. For one thing, you don’t have to take your eyes off the screen to take notes, and for another, most people (most UX professionals, anyway) type faster than they write. As with any user research, you’ll be furiously taking notes on what users say and do, as well as things you see on their screens incidentally. Fifty words per minute is close to the speed you’ll have to type if you want to transcribe user quotes verbatim, but don’t get distracted from what’s happening in the session; you can resort to your session recordings later, if you have to.

A good old word processing document or Excel spreadsheet should suffice for basic note taking, but if you want to make analysis easier, you can find ways to cleverly take notes that are automatically timestamped to the video recording you’re making. Timestamping your notes (i.e., keeping track of the exact time you took each note) matters only if you’re planning on making highlight videos or conducting detailed analysis after your sessions are complete. For now, we stick to basics; see Chapter 5, “Moderating,” for more about note-taking and transcription techniques and Chapter 7, “Analysis and Reporting,” for more on the analysis process.
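The timestamping idea above doesn’t need special software; any scheme that records elapsed time since you started the screen recording will let you seek straight to the right spot in the video later. As an illustration only (our own sketch, not a tool the book recommends), here’s how that might look in Python:

```python
import time

def elapsed_stamp(start, now):
    """Format the seconds elapsed between start and now as MM:SS."""
    secs = int(now - start)
    return f"{secs // 60:02d}:{secs % 60:02d}"

def make_note(start, text, now=None):
    """Return a note line timestamped relative to the recording start."""
    if now is None:
        now = time.time()  # default to the current moment
    return f"[{elapsed_stamp(start, now)}] {text}"

# Example: a note taken 95 seconds into the recording
start = 1000.0  # hypothetical recording start time, in seconds
print(make_note(start, "User hesitates on the login form", now=1095.0))
# [01:35] User hesitates on the login form
```

In practice you’d call `make_note` (or the equivalent in a spreadsheet formula) each time you jot something down, then match the MM:SS prefix against the video timeline during analysis.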

Webcams

We don’t ordinarily use a webcam in remote usability sessions, but it’s an interesting option. It allows you to see your users’ facial reactions during the session or vice versa. A growing number of screen sharing applications have webcam functionality built into them. This, of course, would require your participants to be using a webcam, and they’d also need to have any necessary software. Adobe Connect has an integrated screen sharing/video conferencing solution, Skype has both video chat and screen sharing, and Google just recently introduced webcam chat built right into its Gmail chat client.

As with recording, you need to make sure that any webcam software isn’t going to kill your computer performance; recording is CPU intensive, so test it in advance. And, as with any personally identifying information gathered during a remote research study, make sure that images and recordings of users are gathered with proper consent (see Chapter 4, “Privacy and Consent”) and securely stored or erased, to respect their privacy and abide by all relevant laws.

Drafting the Research Documents

As we mentioned in the introduction, this book focuses on how to do remote user research, so we assume you know how to plan and manage a standard user research study. Specifically, we’re assuming you know how to schedule the project, define the research goals and user segments, recruit users (if you’re recruiting the traditional way), and get everyone (researchers, stakeholders, observers) on the same page about what needs to be done. (If you have no idea what we’re talking about, see Chapter 5 of Mike Kuniavsky’s Observing the User Experience.)

There are a few ways in which preparing for a remote study is different. First, the facilitator guide (aka “moderator script”) will contain important new parts that cover introduction, screen sharing, user consent, and incentive payment. Second, the observers need to be briefed on how to use the screen sharing service and how to communicate with the moderator. Finally, if you’re recruiting on the Web, you’ll need to design a recruiting screener and place it on your Web site in advance. We’ll get into that topic in the next chapter.

The Remote Facilitator Guide

Drafting the facilitator guide is usually the most time-consuming part of the research setup process. Note that we deliberately call it a “guide” and not a “script”; you will not be mindlessly reading verbatim from this document. The facilitator guide should always be written with flexibility in mind, anticipating the unpredictable things that happen when you do time-aware research. You’re speaking with people who are in their own native environments performing natural tasks, outside a controlled lab setting. Think of the facilitator guide as a document that establishes everything you should expect to encounter over the course of the session but shouldn’t restrict you from exploring new issues that come up.

A facilitator guide divides your study into four main sections, totaling about 40 minutes: Greeting and Introduction, Natural Tasks, Predetermined Tasks, and Wrap-up and Debrief. Since studies tend to run long, we’ve found that 60 minutes is about the upper limit for maintaining a participant’s attention and investment in the study.

Presession Setup (1 Minute)

Set up a screen sharing session and run your recording software. The exact steps will vary depending on which screen sharing and recording tools you’re using.

Cue your observers to join the session. If you have people observing the session, cue them to join the session before contacting the user. (See “Preparing Observers for Testing” later in this chapter for more details.)

Greeting and Introduction (5 Minutes)

The first part of the guide will deal with establishing contact with the participant. This part comes closest to resembling a “script” in the traditional sense; most of what you need to say probably won’t change much from user to user. The introduction in our sample script has been refined over hundreds of sessions and is the most efficient way we’ve found to introduce a study.

We won’t be reprinting a full facilitator guide here (although you’ll be able to find examples on http://remoteusability.com), but we want to point out a few of the important things all facilitator guides need to establish. Elements that will vary from study to study are highlighted in yellow, both here and throughout this book:

Contact and self-introduction. Contact users. Right away, introduce yourself by giving your name and the company or organization you represent, and remind users how you got their name and phone number. If you’re live recruiting (i.e., calling users who just filled out a Web recruiting form; see Chapter 3, “Recruiting for Remote Studies”), you have to do this pretty quickly. Otherwise, they might mistake you for a telemarketer.

Hello, can I speak to Bill Brown? Hi Bill. My name’s Nate, and I’m calling on behalf of ACME about the [usability study we had scheduled for this time] / [survey you filled out at ACME.com a few minutes ago].

Willingness to participate. Confirm that users are available and willing to speak for the duration of the study.

Do you still have time now to talk to me for 40 minutes about the ACME Web site? [If not, ask whether you can reschedule, and then end the call.]

Ability to participate. Confirm that users have the necessary equipment to participate in the study. You’d be surprised how many people aren’t able to talk on the phone and use their browser at the same time.

Will you be able to use Firefox and talk on your phone at the same time? And do you have a high-speed Internet connection? [If not, end the call.] Great!

Obtain consent for screen sharing and recording. This part takes some finesse, because if your users didn’t anticipate the call (which is always the case with live recruiting), they may understandably become suspicious when you, a stranger on the telephone, ask them to download something. Explain in clear, simple language what you’d like them to do, why, and what they’re getting themselves into. If you’re using a Consent Agreement (as described in Chapter 4), direct the users to the consent form here.

So, during this call today, we’d like to follow along with what’s on your screen while we’re talking to you, and to do that, we ask you to visit a Web site that will allow us to see and record whatever you can see on your desktop. The recording is used only for research, and the screen sharing is completely disabled at the end of the session. Does that sound okay? [If not, end the call.] Great!

[If using a Consent Agreement]: I’d like to direct you to a Web site with a consent form that describes what the study will be about, so you can make your participation official. The address to put in your browser’s address bar is: www.acme.com/consent.

Screen sharing setup for the participant. The details of this step depend on the tool used.

Introducing the study. Establish the users’ expectations about what will happen during the study and what kind of mindset they should have entering it. The most important thing to establish is that you want the participants to use the interface like they normally would:

So let me tell you a little bit about what we’ll be doing today. Basically, I’m going to be getting your feedback on ACME.com. Your job today is going to be really easy. Basically, I want you to just be yourself and act as you naturally would on the Web site. Now and then, I may chime in to ask you what you’re thinking, but mostly I’d just like you to show me how you use the site. If there’s a point where you’d normally quit or stop using the Web site, just let us know.

And let them know you’d also like them to think aloud while they’re on the site:

There’s just one thing that I would like you to do differently, which is to “think aloud” as you use the ACME Web site. For example, if you’re reading something, read it aloud, and feel free to comment as you read—for example “that’s interesting” or “what in the world are they talking about?” Let me know what’s going through your mind while you do things, so I can understand exactly what you are thinking and doing (for example, “Now I’m going to try to use the search engine”). If you get to a point where you would naturally leave or stop using the Web site, let me know.

If your interface is a design in progress, you should set those expectations early so that users don’t waste time trying to figure out something that isn’t finished. You should remind them, however, that there’s no need to modify their behavior just because it’s a prototype. Our experience is that most people can’t tell the difference between a black-and-white prototype and a real functioning application anyway.

Also, keep in mind that some things might not be working today on this prototype, which is fine. If you run into something that doesn’t work, I’ll let you know, and you can tell me what you were trying to do.

It’s also nice to set users at ease by reassuring them that you had nothing to do with the design of the interface, so they can be completely honest:

And finally, I want to let you know that I had nothing to do with the design of the Web site. I’m just collecting feedback, so feel free to be candid with your thoughts. No need to worry about hurting my feelings or getting anyone in trouble. Does that sound good? Great! Then let’s move on to the Web site.

Natural Tasks (15–25 Minutes)

If you’re conducting time-aware research, for the most part you won’t have to explicitly assign tasks. For example, if you’re testing the login process for your Web site, you can contact people just at the moment they first arrive at your homepage, when you know they haven’t yet logged in. This is what we call a “natural task”—something users were going to do anyway, whether or not they were participating in a study.

Since natural tasks have a lot of variability, this section of the script should be extremely open-ended. The moderator’s comments should be geared toward encouraging the user to do what he/she was intending to do with the interface, without directing the user to perform any specific tasks[1]. This is often the most important part of the testing session, but it’s actually the shortest part of the script simply because you can’t predict in advance exactly what the users will do. You may include some typical prompt questions, for your own reference, such as the following:

 So, tell me what you’re looking at.

 What’s going through your mind right now?

 What do you want to do from here?

 Where would you go from here?

 Which parts of this page seem most interesting to you?

 What kind of info are you looking for on this page?

 What stands out to you most?

 What were you expecting when you first came to this page?

 When did you decide to leave the site/exit the program?

 What do you think about the way everything’s laid out?

 What brought you to this page?

 Is there any more info on this page you’d like to see included that you don’t already see here?

Predetermined Tasks (5–15 Minutes)

Since natural tasks are unpredictable, they may not always cover every last issue you’re curious about, so you’ll want to be prepared to address any tasks, questions, or parts of the interface required by the study goals. This means writing down all the important questions you may have, even though you won’t necessarily ask them all; ideally, you’ll cover as many of those questions as possible in the natural tasks section. So even though this section will appear long in the guide, it shouldn’t actually take as much time as the natural tasks.

Wrap-up and Debrief (3–5 Minutes)

Concluding the session should be the quickest part of most studies. Participants should be explicitly directed to exit or uninstall any screen sharing tools and tracking software used during the session and also informed about how they will receive incentives for participating in the study. It’s good to give participants a way to contact the researcher, in case they come up with any other questions regarding the study. And don’t forget to say thank you!

Well, Bill, that does it. Let me just verify the email address where you would like us to send your incentive. [Verify necessary contact info.] The Amazon Gift Certificate usually takes 1–2 weeks to arrive and will be delivered via email. Check your junk mail folder if it doesn’t show up, and if you don’t receive it 2 weeks from today, send me an email at moderator_email@acme.com.

[Disable screen sharing.] I’ve removed the screen sharing plug-in, so no need to get rid of that yourself; it’s completely removed from your computer.

Do you have any last questions for me? Thanks so much for talking to me today; it’s been really helpful for us. Have a good day!

Preparing Observers for Testing

If you have colleagues or clients who plan on observing the sessions, you’ll have to give them their marching orders. Observers need to be aware of times when the sessions will be happening, how to use the screen sharing tool, and how to speak directly with the moderator during the session (which is one of the perks of remote testing).

A standard way to prepare observers is to send an email with a complete set of instructions one week before testing and then a follow-up reminder email the day before testing, with the same set of instructions appended. Our sample instruction email shown in the sidebar assumes you’re using Adobe Connect to do screen sharing and using live recruiting techniques, but we have instructions for other tools posted at our Remote Usability Web site (http://remoteusability.com).

On the day of testing, observers should be at their computers during testing hours, probably with a set of headphones so that they don’t have to hold the telephone all day long. You should let the observers know that depending on the scheduling and the method of recruiting, there may be significant downtime between each session. If you’re using live recruiting methods (as described in the following chapter), the amount of time between sessions may not be predictable, depending on the number of recruits you’re able to obtain.

One thing to note: if you’re using live recruiting techniques, you won’t know exactly when the testing sessions will be held, so your email will have to set observers’ expectations that there may be periods of downtime between sessions, and they should have other things at hand to keep busy with.

Observer Instructions

The following is a sample email you should send to observers at least a week in advance of testing. The main point of the email is to inform observers what they need to observe in the sessions, how to set up the screen sharing tools, and what the ground rules are for communicating with the moderator. You want to encourage observers to contribute helpful observations and advice to the moderator during the session, but advise them not to flood the moderator with distracting messages and have them generally defer to the moderator’s judgment in matters of running the session.

This letter assumes you’re using Adobe Connect, but other variations can be found on our Web site at http://remoteusability.com.

---------------------------------------

Hi everyone,

This email contains detailed instructions for how to observe the remote user research sessions, which will begin next week.

[How to Set Up Adobe Connect]

For most of the screen sharing with our participants, we’ll be using a Web service called Adobe Connect. You don’t have to install anything now, but you will have to follow certain steps before each individual session. Here’s how you’ll be using it:

1 Go to the Web address at the beginning of the test session: connect.acrobat.com/XXXXX

2 Enter as a guest. Use a neutral guest name like “usability” or “user testing” so that the participant isn’t distracted by seeing an unfamiliar name.

3 Dial in to the conference number: (XXX) XXX-XXXX

4 Mute your phone! This is very important; otherwise, the user will be able to hear you, which will disrupt the session.

[Other Things to Note]

—Recruiting and testing will be conducted between the hours of Noon EST (9 AM PST) and 8 PM EST (5 PM PST), with a brief break around 3 PM EST so the researchers can get some lunch. Please be aware that for setup and recruiting processes, there may be several minutes of downtime between each session, so feel free to have something to work on while you wait for a new session to begin.

—Phone headsets are encouraged so that you don’t have to hold the phone with your shoulder all day.

—I can be reached on AOL Instant Messenger at “aimscreenname.” To keep from swamping me with requests from multiple people at once, please designate one person who will direct any questions and comments to me as the sessions are ongoing. Feel free to point out any and all issues you think are important. I may not be able to handle all your requests, but I’ll try my best!

—If you have any other questions about how to follow along with the sessions or communicate with me during the sessions, feel free to email me.

That’s all! Looking forward to getting this study started!

Note

Researchers and Stakeholders

In this book we occasionally mention “researchers” and “stakeholders” because oftentimes the people conducting the research (“researchers”) are doing it on behalf of other people who have commissioned the research (“stakeholders”). Stakeholders—who can include business executives, managers who dictate the research budget, and so on—are usually untrained in research methods (never mind remote research methods), but they’re essential for defining the parameters of the study and should be involved in all steps of the testing. Of course, there are also situations in which the researchers are the stakeholders—e.g., academic research or companies in which the developers also do their own user testing. So, just to be clear: there aren’t always business stakeholders, and in those cases you can ignore the whole researcher/stakeholder dichotomy and the issues of “sign-off and approval” we occasionally bring up.

Chapter Summary

There’s always the chance you may have to go back and adjust something in the project goals or adjust the schedule for whatever reason; however, you can consider yourself pretty much finished with the setup phase and ready to move on to recruiting once you’ve completed the following:

 Selected, installed, and familiarized yourself with a screen sharing tool (and recording tool, if necessary).

 Set up your equipment: computer, high-speed wired Internet connection, monitor(s), two-line desk phone, phone headset, and any other necessary equipment.

 Arranged backup screen sharing tools, as well as any optional tools (recording, note taking, etc.).

 Completed standard user research project management steps (check out Chapter 5 of Mike Kuniavsky’s Observing the User Experience).

 Completed the recruiting screener design and recruiting test (covered in the next chapter).

 Drafted and familiarized yourself with the facilitator guide.

 Briefed observers on how to participate during testing days.

 Conducted a practice run of the test, to make sure everything’s working (especially if it’s your first time).

Got all that? Then you’re ready to start testing!

[1] Chapter 5 covers the finer points of moderating in detail; for more details on traditional moderating techniques, see Joseph Dumas and Beth Loring’s Moderating Usability Tests: Principles and Practices for Interacting.
