The Research Experience - Ann Sloan Devlin
Instrumentation
Instrumentation is, in principle, a threat that is more easily managed. Consider, for example, the conditions of projecting images for participants to view: being prepared is the best course of action. Knowing how to use all of the equipment is important (e.g., what to do when you get a "no signal" message during LCD projection, or what to have participants do when a survey link doesn't load). Routinely checking the calibration of equipment is also advisable.
Another possible threat to internal validity in the category of instrumentation involves your measures. Make sure you include all of your items and, if you are administering a paper version, make sure that your participants have looked at all the pages of the questionnaire. When administering questionnaires online, you can prompt participants to check that they have answered all of the items they intended to answer; such prompts help cut down on missed items.
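The online completeness check described above can be sketched in a few lines. This is a hypothetical illustration, not part of the text: the item IDs and responses are invented, and real survey platforms implement this prompt for you.

```python
# Hypothetical sketch of a "missed items" prompt for an online questionnaire.
# Item IDs and the example responses below are invented for illustration.

def find_missed_items(responses, item_ids):
    """Return the IDs of items left unanswered (missing or blank)."""
    return [item for item in item_ids
            if responses.get(item) in (None, "")]

item_ids = ["q1", "q2", "q3", "q4"]
responses = {"q1": 5, "q2": "", "q4": 3}  # q2 left blank, q3 never reached

missed = find_missed_items(responses, item_ids)
if missed:
    # In a live survey, this message would appear before final submission.
    print("Please review unanswered items:", ", ".join(missed))
```

The point of the check is simply to give participants one last chance to supply answers they intended to give, reducing missing data without forcing a response.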
Another kind of “instrument” is the researcher. If the researcher is giving task instructions, it is important to follow a script to make sure every participant receives the same information. Some researchers record instructions and other material delivered in spoken form to ensure that participants hear the same speaking voice, with the same pace.
Instrumentation: One of Campbell and Stanley’s (1963) threats to internal validity in which changes in equipment and/or observers affect judgments/measurements that are made.
Operational definition: A description of a variable in terms of the processes used to measure or quantify it.
Statistical regression: One of Campbell and Stanley’s (1963) threats to internal validity in which participants are selected on the basis of extreme scores (e.g., high or low intelligence) and their scores move toward the mean on subsequent testing.
Differential selection (biased selection of subjects): One of Campbell and Stanley’s (1963) threats to internal validity in which participants assigned to groups are not equivalent on some important characteristic prior to the intervention.
Another kind of issue involving the researcher is the subjective evaluation of participants’ responses. Consider the situation where participants are giving responses to open-ended questions (i.e., questions where participants are free to answer as they wish and do not have preset categories from which to select) and the researcher is categorizing those responses. It is essential that the criteria for each category remain consistent across coders. One way this is accomplished is by creating clear operational definitions for each category. Operationally defining a variable is describing it in terms of the processes used to measure or quantify it. Imagine if researchers were categorizing qualities of the hospital environment in terms of Roger Ulrich’s (1991) theory of supportive design: positive distraction (PD), social support (SS), and perceived control (PC). If patients mentioned that having access to the Internet improved their experience, we would need an operational definition of each category to place the Internet in one of them. Is the Internet an aspect of positive distraction (something that redirects your attention away from worries and concerns), or is it an aspect of social support (a way to connect with others or encourage interaction)? Arriving at an operational definition can be challenging.
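One way to make an operational definition concrete is to write it down as an explicit coding rule that every coder applies identically. The sketch below is purely illustrative: the keyword lists are invented, and a real codebook for Ulrich's categories would be developed and refined by the research team, then checked for inter-coder agreement.

```python
# Hypothetical sketch: operational definitions expressed as explicit coding rules,
# so that every coder applies the same criteria to open-ended responses.
# The keyword lists are invented for illustration only.

CODEBOOK = {
    "PD": ["distraction", "entertainment", "video", "music"],  # positive distraction
    "SS": ["family", "friends", "visit", "connect"],           # social support
    "PC": ["choose", "adjust", "control", "privacy"],          # perceived control
}

def code_response(text):
    """Assign a response to the first category whose keywords it mentions."""
    lowered = text.lower()
    for category, keywords in CODEBOOK.items():
        if any(word in lowered for word in keywords):
            return category
    return "UNCODED"  # flag for team discussion and codebook refinement

print(code_response("The Internet let me connect with my family"))
```

A response that matches no rule is flagged rather than guessed at, which is exactly the situation the Internet example raises: until the team settles whether Internet access operationally counts as positive distraction or social support, the codebook cannot classify it consistently.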