Chair: Justin Esarey (Wake Forest University)
Co-Host: Anwar Mohammed (McMaster University)
Using Eye-Tracking to Understand Decision-Making in Conjoint Experiments
Author(s): Libby Jenke, Kirk Bansak, Jens Hainmueller and Dominik Hangartner
Discussant: Anton Strezhnev (New York University)
Conjoint experiments enjoy increasing popularity in political and social science, but there is a paucity of research on respondents' underlying decision-making processes. We leverage eye-tracking methodology and a conjoint experiment, administered to a subject pool consisting of university students and local community members, to examine how respondents process information when completing conjoint surveys. Our study has two main findings. First, we find a positive correlation between attribute importance measures inferred from the stated choice data and attribute importance measures based on eye movement. This validation test supports the interpretation of common conjoint metrics, such as Average Marginal Component Effects and marginal R² values, as valid measures of attribute importance. Second, when we experimentally increase the number of attributes and profiles in the conjoint table, respondents on average view a larger absolute number of cells but a smaller fraction of the total cells displayed, and their patterns of search across cells change accordingly. At the same time, however, their stated choices remain remarkably stable. This overall pattern speaks to the robustness of conjoint experiments and is consistent with a bounded rationality mechanism: respondents adapt to complexity by selectively incorporating relevant new information to focus on the important attributes, while ignoring less relevant information to reduce cognitive processing costs. Together, our results have implications for both the design and interpretation of conjoint experiments.
Analyze the Attentive and Bypass Bias: Using Mock Vignettes in Survey Experiments
Author(s): Yamil Velez, Jason Barabas and John Kane
Discussant: Erin Hartman (UCLA)
Respondent inattentiveness threatens to undermine experimental studies. In response, researchers incorporate measures of attentiveness into their analyses, yet often in a way that risks introducing post-treatment bias. We offer a new, design-based technique—mock vignettes (MVs)—to overcome these interrelated challenges. MVs feature content substantively similar to that of experimental vignettes in political science, and are followed by factual-question checks (MVCs) to gauge respondents' attentiveness to the MV. Crucially, the same MV is viewed by all respondents prior to the experiment. Across five separate studies, we find that MVC performance is positively associated with (1) other attentiveness measures and (2) stronger treatment effects. Researchers can thus use MVC performance to re-estimate treatment effects, allowing for hypothesis tests that are more robust to respondent inattentiveness yet also safeguarded against post-treatment bias. Lastly, our study offers researchers a set of ready-made, empirically validated MVs for their own experiments.