Handbook of Web Surveys, Jelke Bethlehem

2.2.4 TRENDS IN WEB SURVEYS


Dillman, Smyth, and Christian (2014) describe changes in the survey environment from the 1970s to the 2000s, focusing on factors such as human interaction, trust that the survey is legitimate, the time required of each respondent, the attention given to each respondent, respondent control over access, and respondent control over whether to respond. These observations indicate that during the 1990s the relevance of human interaction and individual contact was decreasing owing to the use of IT (i.e., computer-aided and web surveys) and to the massive use of e-mail. Trust in the relevance and legitimacy of surveys was very low, and the likelihood of refusal and of filtering against surveys (anti-spam, disclosure rules) was very high. These observations are in line with developments with respect to web surveys.

More recently, Biffignandi (2010a, 2010b) has focused on major trends in web surveys. The author underlines that survey methodology has recently undergone a paradigm shift concerning the causes of survey errors and how to prevent them. This new paradigm stresses the total error concept, which includes both sampling error (the central error in the traditional approach) and non-sampling errors. Non-sampling errors can be numerically larger than sampling errors. The flowchart in Chapter 3 describes the survey process; errors might occur at each step of the process. The new paradigm focuses on the following aspects:

 All kinds of events and behavior occurring during the survey process are considered.

 The overall response rate is only a crude measure of survey quality, although it is frequently used as an indicator. To some extent this measure can help identify weak points in the process (for instance, a large number of refusals might be due to a poor contact process), but it fails to consider that people who have web access, or who respond to a web survey, may differ significantly from other units.

 Overall response rates do not give information regarding the response propensity of different respondent subgroups (late respondents versus early respondents, and sociodemographically different subgroups) or on respondent behavior.

 Response rates are in any case declining, and the reasons need to be investigated. Incentives are seen as a possible solution. Göritz (2006, 2010, 2015) and Brown et al. (2016) report a generally positive effect of incentives in web surveys. Singer and Ye (2013) conclude that, across all survey modes, prepaid cash incentives are the most effective; in that case, a mode other than the web is required to contact respondents. If the e-mail contact option is adopted, Dillman, Smyth, and Christian (2014) suggest that an electronic incentive sent to all sample members is likely the best option. With respect to sampling error, traditional probability samples are in many cases hard to implement in web surveys because of imperfect frames. Consequently, the sampling error cannot be computed, as the theory of statistical inference does not apply.
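To illustrate why the overall response rate is only a crude quality indicator, a few lines of Python show how a single overall figure can mask large differences in response propensity between subgroups. The data and group labels below are invented for illustration only:

```python
from collections import defaultdict

# Hypothetical contact records: (subgroup, responded?).
# The subgroups and outcomes are invented, not taken from the chapter.
records = [
    ("young", True), ("young", True), ("young", False), ("young", True),
    ("old", False), ("old", False), ("old", True),
    ("old", False), ("old", False), ("old", False),
]

# Overall response rate: responses divided by contacted units.
overall = sum(responded for _, responded in records) / len(records)

# Response rate per subgroup: group -> [responses, contacts].
counts = defaultdict(lambda: [0, 0])
for group, responded in records:
    counts[group][0] += responded
    counts[group][1] += 1
rates = {g: resp / n for g, (resp, n) in counts.items()}

print(f"overall: {overall:.0%}")   # 40%
print(f"young:   {rates['young']:.0%}")  # 75%
print(f"old:     {rates['old']:.0%}")    # 17%
```

A 40% overall rate here hides the fact that one subgroup responds at 75% and the other at 17%, which is exactly the kind of differential nonresponse that an overall rate cannot reveal.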

As a consequence of this new paradigm, attention is turning to the following:

 How to face declining response rates. Possible solutions include: keeping respondents focused on the relevant parts of the computer screen and keeping distraction to a minimum, which can help obtain completed questionnaires (studies based on eye-tracking analysis are needed to accomplish this task); and using mixed-mode surveys to improve response rates (see Chapters 3 and 9). However, new problems arise with the mixed approach, since mode effects must be considered when analyzing survey results. The occurrence and treatment of mixed-mode effects need further investigation; Chapter 9 addresses this topic.

 How to use paradata (i.e., data collected during the interviewing process). Increasing attention is being devoted to the analysis of this type of data. In particular, paradata help to identify typologies of response behavior, explaining potential variations in participation in web-based surveys and providing valuable insight into nonresponse and various aspects of response behavior. From a methodological point of view, behavioral analyses rely on the Cognitive Aspects of Survey Methodology (CASM) movement, and, in many empirical studies, the theory of planned behavior (TPB) model is applied (Ajzen, 1991). The main objective is to obtain a clearer picture of how intentions form. For example, based on the TPB, two alternative models were empirically tested, in which the roles of trust and innovativeness were theorized differently: either as moderators of the effects that perceived behavioral control and attitude have on participation intention (moderator model) or as direct determinants of attitude, perceived behavioral control, and intention (direct effects model).

 How to obtain representative web surveys and/or panels. Many access panels consist of volunteers, and it is impossible to evaluate how well these volunteers represent the general population; in any case, they constitute a non-probability sample. Recent literature attempts to apply probabilistic recruitment to panels and to draw valid inferences from them. One approach to correcting for a lack of representativeness is to apply propensity score methodology (Steinmetz et al., 2014), in which propensity scores serve to reweight web survey results.
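The propensity score reweighting idea above can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions, with invented data and a hand-rolled logistic regression; it is not the specific method of Steinmetz et al. (2014). A reference probability sample is pooled with a volunteer web sample, a model estimates each unit's propensity of belonging to the web sample, and web respondents are weighted by the inverse odds of that propensity:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit P(in web sample | x) by plain gradient descent on the log-loss."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                gw[j] += (p - yi) * xi[j]
            gb += p - yi
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def propensity(x, w, b):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented covariates (e.g. age/10, internet-use score) for a reference
# probability sample (label 0) and a volunteer web sample (label 1).
random.seed(1)
reference = [[random.gauss(4.5, 1.5), random.random()] for _ in range(200)]
web = [[random.gauss(3.5, 1.2), 0.5 + 0.5 * random.random()] for _ in range(200)]
X = reference + web
y = [0] * len(reference) + [1] * len(web)

w, b = fit_logistic(X, y)

# Weight each web respondent by (1 - p) / p: units over-represented in
# the volunteer sample (high p) are down-weighted toward the reference.
weights = [(1 - propensity(x, w, b)) / propensity(x, w, b) for x in web]
```

In practice the propensity model would use survey-relevant covariates ("webographic" attitude questions are often added for this purpose), and the scores are frequently used for stratification into propensity classes rather than raw inverse-odds weights.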

Generally speaking, the methodology and quality of data collected in the socioeconomic area could greatly benefit from:

1 The development of suitable estimation methods aimed at capturing the bias and specific variance connected with the frame characteristics and participation process of this type of survey;

2 Research, principally based on experimental designs, allowing the effects of various factors to be tested (for example, the effects of different types of question structure, various contact modes, etc.);

3 Research, based on behavioral models, that allows response and participation processes to be analyzed and modeled in the context of the individual behavior of survey respondents.

