Virtual Room 3: Experimental Designs

Date: Thursday, July 16, 2020, 12:00pm to 1:30pm

 

Chair: Ludovic Rheault (University of Toronto)

 

Co-Host: Regan Johnston (McMaster University)

Elements of External Validity: Framework, Design, and Analysis

Author(s): Naoki Egami and Erin Hartman

Discussant: Daniel Hopkins (University of Pennsylvania)

 

External validity of randomized experiments has been the focus of long-standing methodological debates in the social sciences. In practice, however, discussions of external validity often differ in their definitions, goals, and assumptions without making these explicit. Moreover, while many applied studies acknowledge it as a potential limitation, few have explicit designs or analyses aimed at externally valid inference. In this article, we propose a framework, design, and analysis to address two central goals of externally valid inference: (1) assessing whether the direction of causal effects is generalizable (sign-validity), and (2) generalizing the magnitude of causal effects (effect-validity). First, we propose a formal framework of external validity that decomposes it into four components, X-, Y-, T-, and C-validity (units, outcomes, treatments, and contexts), and clarifies the sources of potential bias. Second, we present the assumptions required to make externally valid causal inferences and propose experimental designs that make these assumptions more plausible. Finally, we introduce a multiple-testing procedure to address sign-validity and general estimators of population causal effects to address effect-validity. We illustrate the proposed methodologies through three applications covering field, survey, and lab experiments.
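
To make the effect-validity idea concrete, below is a minimal, hypothetical sketch of a standard post-stratification estimator that reweights stratum-level experimental effects to a target population. It is not the authors' proposed estimator; the single binary covariate, the simulated data, and the function name `poststratified_pate` are illustrative assumptions.

```python
# Hypothetical sketch: a standard post-stratification estimator of a
# population average treatment effect (PATE). This illustrates the kind of
# "effect-validity" generalization discussed in the abstract, not the
# authors' own estimator.
import numpy as np

def poststratified_pate(y, t, x, strata, pop_shares):
    """Reweight stratum-specific ATEs from an experiment to a target population.

    y: outcomes in the experimental sample
    t: treatment indicators (0/1) in the experimental sample
    x: discrete covariate (stratum label) for each experimental unit
    strata: stratum labels, in the same order as pop_shares
    pop_shares: share of each stratum in the target population
    """
    pate = 0.0
    for stratum, share in zip(strata, pop_shares):
        in_s = (x == stratum)
        cate = y[in_s & (t == 1)].mean() - y[in_s & (t == 0)].mean()
        pate += share * cate
    return pate

# Toy usage with simulated data: the treatment effect differs by stratum,
# and the target population has a different stratum mix than the sample.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=1000)                    # one binary covariate
t = rng.integers(0, 2, size=1000)                    # randomized treatment
y = 1.0 * t + 0.5 * x * t + rng.normal(size=1000)    # heterogeneous effect
print(poststratified_pate(y, t, x, strata=[0, 1], pop_shares=[0.3, 0.7]))
```

Under the (strong) assumption that the stratifying covariate captures all effect heterogeneity relevant to the target population, the weighted average recovers the population effect; conditions of this kind are what the paper's X-validity component formalizes.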

Introducing a Continuous Climate Concern Scale for Use in Experimental Research

Author(s): Parrish Bergquist

Discussant: Stephanie Nail (Stanford University)

 

Climate change poses an existential threat to human civilization, and addressing it poses both technological and political challenges. Building public will is among the thorniest of the political challenges associated with addressing climate change. A large literature has emerged to help decision makers and advocates assess public views of climate change and public will for climate policy. This literature uses survey experiments, field experiments, and observational studies to gauge the effectiveness of various messages, frames, messengers, and events in shaping public views of this important problem. The literature suffers from inconsistency in the measurement of key outcome variables and, in some cases, from noisy outcome measures that make it difficult to detect small experimental effects. In this paper, I use over a decade of survey data from the Yale Program on Climate Change Communication to optimize a scale that uses several high-quality survey questions to measure latent concern about climate change and support for climate policy. I first use Principal Component Analysis to assess the dimensionality of the survey data and select a set of questions that load onto the first dimension. I then adapt the method of Berinsky et al. (2019), who optimize an attention screen for use in survey experiments. I use Bayesian Item-Response Theory modeling and Fisher's Information Criterion to assess the predictive influence of thirty candidate survey questions that have been asked consistently over time. I compare the predictive accuracy of scales containing varying numbers of high-quality questions in order to optimize a scale that effectively models the latent construct with the fewest survey items. I also show how the scale can be optimized to distinguish between individuals within various bandwidths of concern about climate change. I conclude by demonstrating the use of the scale in a survey experiment and a field experiment. The work builds on segmentation analyses using the same data, which categorize the US public into six audience segments based on their level of belief in, concern about, and support for policy to address climate change. In a sense, this work provides convergent validation for the "Six Americas" audience segmentation methodology. The IRT-based approach is distinct, however, in the applied opportunities it opens: I produce a continuous, precise scale that creates new possibilities for the experimental climate opinion literature.
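
As a rough illustration of the information-based logic behind keeping only the most informative survey items, below is a hypothetical sketch using a two-parameter logistic (2PL) IRT model. The item parameters here are made up for illustration; in the paper they would come from the Bayesian IRT model fit to the YPCCC survey questions.

```python
# Hypothetical sketch of Fisher-information-based item selection in a
# two-parameter logistic (2PL) IRT model: the general idea behind choosing
# the fewest items that best measure latent climate concern.
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item at latent trait value(s) theta.

    a: discrimination, b: difficulty. I(theta) = a^2 * P * (1 - P),
    where P is the probability of endorsing the item at theta.
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

rng = np.random.default_rng(1)
n_items = 30
a = rng.uniform(0.5, 2.5, n_items)   # discrimination parameters (assumed)
b = rng.normal(0.0, 1.0, n_items)    # difficulty parameters (assumed)

# Evaluate each item's information over a grid of latent concern values,
# average it, and keep the handful of items contributing the most.
theta_grid = np.linspace(-3, 3, 61)
info = np.array([item_information(theta_grid, a[j], b[j]).mean()
                 for j in range(n_items)])
best_items = np.argsort(info)[::-1][:8]
print("most informative items:", best_items)
```

Averaging information over the whole grid targets precision across the full range of concern; restricting the grid to a narrower band would instead optimize the scale to distinguish respondents within that band, as the abstract describes.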

