Handbook of Web Surveys - Jelke Bethlehem
EXAMPLE 3.4 Paper versus web: errors comparison
In the transition phase from paper to web for the Italian business survey (the structural business statistics [SBS] survey), errors in the paper and web questionnaires are compared. Table 3.1 shows a general improvement across error types: less need for checking (checks pending) in web data, a smaller average number of corrected errors, and fewer replacements.
Table 3.1 Types of detected errors by response mode (averages on total responses: SBS survey)
| | Checks pending | Corrected errors | Replacements |
|---|---|---|---|
| Paper | 0.89 | 4.74 | 4.89 |
| Web | 0.85 | 2.88 | 3.39 |
The third important sub-step in questionnaire design is Visualization. This is a critical point in a web/mobile web survey. Colors, pictures, character formats, and the presence or absence of a progress bar all affect the interviewee's perception and can greatly increase or reduce response errors (response values related to the interpretation of the content and of the questions, item nonresponse, the decision to participate in the survey, etc.). Colors, for instance, affect the readability of the screen, possibly making completion less pleasant. Dark, highly contrasted colors are difficult to read, as are overly bright ones. Formats and pictures have a different impact on a PC screen than on a smartphone screen; on a small screen, pictures can be distracting. Thus, the visual readability of the questionnaire is essential to enhance participation without increasing measurement errors (due to misunderstanding of the questions or distraction caused by pictures inadequate in content or size).
In Designing the questionnaire, it is important to consider that the questionnaire will be answered on different devices (desktop, laptop, tablet, smartphone) (see details in Chapter 5). The question with an error message in Figure 3.5 shows how it looks on a smartphone. If a mobile device accesses a website without a mobile version, the user will still be able to navigate the page. However, differences in screen size will usually require gestures or scrolling in order to browse the content in its entirety. Therefore, if a smartphone accesses a nonmobile version of an online survey, the respondent is likely to see only a portion of the content/question, or completion may require zooming in first to select the desired response. Some question types (e.g., multiple choice, lists) will generally not take advantage of the smartphone's native features. Peytchev and Hill (2010) reported the results of a series of experiments comparing various aspects of questionnaire design and layout on a smartphone, including horizontal scrolling, number of questions per screen, direction of response options, impact of embedded images, and the use of open-ended options, among others. Couper and Mavletova (2014) explored the effect of scrolling versus paging design on the break-off rate, item nonresponse, and completion time in mobile web surveys. The scrolling design led to significantly faster completion times, lower (though not statistically significant) break-off rates, fewer technical problems, and higher subjective ratings of the questionnaire. In general, a web/mobile web survey should display as accurately on mobile phones, netbooks, and tablets as it does on desktop and laptop computers; in other words, a mixed-device approach should be adopted.
When the questionnaire design is completed and the paradata have been decided, the software (or programming language) for implementing the web questionnaire is selected, and the digital version of the questionnaire (including paradata and metadata) is created.
Figure 3.5 Message requesting an answer to a compulsory question: smartphone device
Following the implementation of the web questionnaire and the sample selection, the fourth main step of the survey process is Collecting data. Conducting data collection implies sending the web questionnaire link to the sampled units along with an invitation letter for survey participation (see Chapter 7) and monitoring the data collection process, for example, by sending reminders and, if necessary, applying responsive design (see Chapter 8) to better secure the sampled units' participation.
Data processing takes place from the fifth step onward. Processing data—step 1 concerns database creation, where many error risks need to be identified and corrected. For instance, item nonresponse must be considered, the reasons for it evaluated, and, when necessary, imputation methods applied (sub-step Data imputation). Coding should also be considered (sub-step Code open questions), since erroneous coding could cause misinterpretation of the survey results. In web surveys, completion of the questionnaire automatically generates the database, avoiding the significant risk of errors connected with data transcription. This is a great advantage: data errors are due only to incoherence or respondent inattention. A data quality check is nevertheless still necessary in web surveys, even if the risks of error are smaller than in traditional paper-and-pencil interviews.
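The data-quality check and Data imputation sub-steps can be illustrated with a minimal sketch. The record layout and the use of simple mean imputation are assumptions for illustration; in practice, any of the imputation methods discussed in the literature could be applied.

```python
# Hypothetical sketch: flag item nonresponse in a survey database, then fill
# numeric gaps with the mean of the observed values (mean imputation, one
# deliberately simple choice among many possible methods).

def item_nonresponse_rate(records, field):
    """Share of records with a missing value for one questionnaire item."""
    missing = sum(1 for r in records if r.get(field) is None)
    return missing / len(records)

def impute_mean(records, field):
    """Return a copy of the records with missing numeric values replaced
    by the mean of the observed values."""
    observed = [r[field] for r in records if r.get(field) is not None]
    mean = sum(observed) / len(observed)
    return [dict(r, **{field: r[field] if r.get(field) is not None else mean})
            for r in records]

# Hypothetical SBS-style responses; unit 2 left one item blank.
responses = [
    {"id": 1, "turnover": 120.0},
    {"id": 2, "turnover": None},   # item nonresponse
    {"id": 3, "turnover": 80.0},
]

print(item_nonresponse_rate(responses, "turnover"))   # 1 of 3 items missing
imputed = impute_mean(responses, "turnover")
print(imputed[1]["turnover"])                         # imputed with mean 100.0
```

The nonresponse rate is computed before imputation so that the indicator reflects the data as collected, not the repaired database.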
Once the checked database has been created, the second phase of data processing takes place (Processing data—step 2). Note that several computations and activities in this phase are carried out according to decisions made at the Designing the web survey step. Applying estimation and weighting techniques allows the survey results to be extended to the target population and presented (sub-steps: Estimation technique choice, Calculation of weights and estimators, and Process data and produce tables; see Chapters 12 and 13). Estimation procedures are similar to those applied in the traditional survey modes. However, several aspects offer advantages specific to the web. In many cases, it is possible to link the survey microdata to another database providing individual data for auxiliary variables; a key variable for connecting the two databases at the individual level must be available, though. Through database integration, it becomes possible to model survey results, understand participation behavior, and better profile the respondents. Integration with administrative databases, paradata databases, or other survey databases may be undertaken.
Thus, in selecting the estimation techniques, the researcher may need to integrate the survey database (sub-step: Integrate datasets) with another database (e.g., an administrative database or a paradata database). This is useful when the researcher has planned to use auxiliary variables in the data processing, to compute weights, or to improve estimates. For instance, the propensity score technique (Chapter 13) is one approach that uses auxiliary variables for estimation purposes.
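A sketch of the Integrate datasets idea follows, under stated assumptions: the frame is already linked to an administrative file by a common key, and the auxiliary variable is a hypothetical business size class. A full propensity-score approach would typically fit a logistic model; the weighting-class adjustment below is a deliberately simple stand-in that weights respondents by the inverse of the estimated response propensity in their class.

```python
# Hypothetical weighting-class sketch: estimate response propensity per
# auxiliary class, then give each respondent weight 1 / propensity so the
# class also represents its nonrespondents.
from collections import defaultdict

# Sampling frame enriched (via a key variable) with administrative data.
frame = [
    {"key": 1, "size": "small", "responded": True},
    {"key": 2, "size": "small", "responded": False},
    {"key": 3, "size": "large", "responded": True},
    {"key": 4, "size": "large", "responded": True},
]

# Estimated response propensity per class = responded / sampled in the class.
counts = defaultdict(lambda: [0, 0])          # size -> [responded, total]
for unit in frame:
    counts[unit["size"]][1] += 1
    counts[unit["size"]][0] += unit["responded"]
propensity = {size: r / n for size, (r, n) in counts.items()}

# Only respondents receive a weight; nonrespondents drop out of the estimator.
weights = {u["key"]: 1 / propensity[u["size"]]
           for u in frame if u["responded"]}
print(weights)  # unit 1: 2.0 (1 of 2 small responded); units 3 and 4: 1.0
```

The key variable (`key` here) is what makes the individual-level linkage possible; without it, only aggregate calibration would be feasible.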
After deciding on possible integration and applying the appropriate estimation and weighting technique, data processing concludes; tables and figures are then produced to synthesize the survey results. Presenting the survey results includes offering quality indicators as well. AAPOR (2016) provides a detailed description of rates and indicators and emphasizes that all survey researchers should adopt the standardized final dispositions for the survey outcomes. For example, the response rate can be calculated in many different ways, so it is important not to simply state "the response rate is X"; the researcher should name exactly which rate he or she is reporting. In particular, there are two types of response rate:
1 The response rate type 1 (RR1), i.e., the number of complete interviews divided by the number of interviews (complete plus partial) plus the number of non-interviews (refusals and break-offs, plus non-contacts and others, plus all cases of unknown eligibility);
2 The response rate type 2 (RR2), in which partial interviews are also counted in the numerator of the rate.
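The two definitions above can be written out directly; the disposition counts in the example are hypothetical, but the formulas follow the verbal definitions just given.

```python
# RR1 and RR2 as defined above: same denominator, RR2 adds partial
# interviews to the numerator.

def rr1(complete, partial, refusal, noncontact, other, unknown):
    """Complete interviews over interviews plus all non-interviews
    (including cases of unknown eligibility)."""
    return complete / (complete + partial + refusal + noncontact
                       + other + unknown)

def rr2(complete, partial, refusal, noncontact, other, unknown):
    """Like RR1, but partial interviews also count in the numerator."""
    return (complete + partial) / (complete + partial + refusal
                                   + noncontact + other + unknown)

# Hypothetical final disposition counts for a web survey.
counts = dict(complete=600, partial=50, refusal=200, noncontact=100,
              other=20, unknown=30)
print(f"RR1 = {rr1(**counts):.3f}")   # 600 / 1000 = 0.600
print(f"RR2 = {rr2(**counts):.3f}")   # 650 / 1000 = 0.650
```

By construction RR2 is never smaller than RR1, which is why stating which rate is being reported matters.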
In web surveys, as discussed in Chapter 5 with regard to the errors that might occur throughout the process, quality assessment also relies on specific indicators that take advantage of the paradata collected by the survey system during the survey process. For example, it is possible to compute the time spent completing the questionnaire.
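A small illustration of such a paradata-based indicator follows; the field names and timestamp layout are assumptions, standing in for whatever the survey system actually logs when the questionnaire is opened and submitted.

```python
# Completion time per respondent derived from hypothetical paradata
# timestamps (questionnaire opened / questionnaire submitted).
from datetime import datetime

paradata = [
    {"id": "R1", "start": "2024-05-02T10:00:00", "end": "2024-05-02T10:12:30"},
    {"id": "R2", "start": "2024-05-02T11:05:00", "end": "2024-05-02T11:09:00"},
]

def completion_minutes(record):
    """Elapsed minutes between opening and submitting the questionnaire."""
    start = datetime.fromisoformat(record["start"])
    end = datetime.fromisoformat(record["end"])
    return (end - start).total_seconds() / 60

times = {r["id"]: completion_minutes(r) for r in paradata}
print(times)  # {'R1': 12.5, 'R2': 4.0}
```

Unusually short completion times computed this way are often used to flag possible careless responding.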
The final step, the End of the survey process, involves writing the conclusions. Usually, a final report is written. Clear and consistent reporting of the methods complements the comments on the substantive research results and is a key component of a methodologically reliable survey. AAPOR (2016) stresses this idea: a standardized set of outcome details and outcome rates has been proposed, and these should be made available as part of the survey documentation.