Yuki Atsusaka (Rice University), Randy Stevenson (Rice University), and Ahra Wu (Dartmouth College)
Abstract: The crosswise model is an increasingly popular survey technique for eliciting candid answers to sensitive questions. We demonstrate, however, that conventional crosswise estimators of the true prevalence of sensitive attributes are biased toward 0.5 in the presence of inattentive respondents, which can lead researchers to believe, mistakenly, that crosswise models elicited more candid answers. In this article, we propose a simple bias correction for the conventional crosswise estimators. We show that our bias-corrected estimators are more efficient, can be implemented without measuring individual attentiveness, and extend to statistical models in which crosswise estimates serve as either the outcome or a predictor. We also develop a sensitivity analysis for conventional crosswise estimates and apply it to six existing studies; it suggests that the original findings (crosswise estimates exceeding direct questioning estimates) may be largely artifacts of inattentive responses. In addition, we offer a weighting strategy for online opt-in samples and a power analysis for designs that use our bias correction. We illustrate the proposed methodology with an online survey in which paid Qualtrics survey takers are asked about their own survey-taking behavior. Finally, we provide a practical guide for designing surveys that enable the proposed bias correction.
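The mechanics behind the abstract's claim can be illustrated with a minimal sketch. This is not the authors' exact estimator: it assumes the standard crosswise design (respondents report whether their answers to the sensitive statement and a nonsensitive statement with known prevalence p are the same), and assumes a share gamma of inattentive respondents who answer "same"/"different" uniformly at random. The function names and the worked numbers are purely illustrative.

```python
def crosswise_estimate(lam_obs, p):
    """Conventional crosswise estimator of the sensitive-attribute prevalence.

    lam_obs: observed share of respondents answering "same"
             (both statements true, or both false)
    p:       known prevalence of the nonsensitive statement (p != 0.5)
    """
    return (lam_obs + p - 1) / (2 * p - 1)


def bias_corrected_estimate(lam_obs, p, gamma):
    """Correct for a share gamma of inattentive respondents, assuming they
    answer uniformly at random, so E[lam_obs] = (1-gamma)*lam_true + gamma*0.5.
    We invert that relation, then apply the conventional estimator.
    """
    lam_true = (lam_obs - gamma * 0.5) / (1 - gamma)
    return crosswise_estimate(lam_true, p)


# Illustrative numbers: true prevalence 0.10, p = 0.25, 20% inattentive.
pi_true, p, gamma = 0.10, 0.25, 0.20
lam_true = pi_true * p + (1 - pi_true) * (1 - p)   # P("same") = 0.70
lam_obs = (1 - gamma) * lam_true + gamma * 0.5     # random noise pulls this toward 0.5

print(crosswise_estimate(lam_obs, p))        # biased toward 0.5 (0.18 vs. true 0.10)
print(bias_corrected_estimate(lam_obs, p, gamma))  # recovers 0.10
```

Because the true prevalence of a sensitive attribute is typically below 0.5, attenuation toward 0.5 inflates crosswise estimates relative to direct questioning, which is the pattern the sensitivity analysis in the abstract probes.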