Virtual Room 3: Sample Selection

Date: Tuesday, July 14, 2020, 12:00pm to 1:30pm


Chair: Ludovic Rheault (University of Toronto)


Co-Host: Regan Johnston (McMaster University)

How You Ask Matters: Interview Requests as Network Seeds


Author(s): AshLee Smith, Jane L. Sumner and Josef Woldense

Discussant: Jennifer Bussell (University of California, Berkeley)


When recruiting interview subjects is the goal, building rapport is conventionally heralded as the superior method. Cold emails, in contrast, are often dismissed as inferior for their low response rates. Our study suggests that this stance is mistaken. When it is elites who are to serve as interview subjects, we argue that cold emails can yield tremendous benefits that have thus far been overlooked. More specifically, we posit that when paired with network effects, which are rooted in the linkages among elites, cold emails can outperform the standard but costly interview solicitation method of building rapport with subjects. In a series of experiments and simulations, we show that small changes to the wording of cold emails translate into greater network coverage, thereby offering researchers a richer set of insights from their interview subjects.
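
As a rough illustration of the seeding mechanism the abstract invokes, the sketch below simulates the two recruitment strategies on a random network. This is a minimal sketch, not the authors' simulation: the graph model (Erdős–Rényi via networkx), the contact counts, the response rates, and the referral probability are all illustrative assumptions.

```python
import random

import networkx as nx


def simulate_coverage(G, n_contacts, response_rate, referral_rate, rng):
    """Contact a random set of elites, then follow referrals outward.

    Returns the share of the network ultimately interviewed, a rough
    proxy for the "network coverage" described in the abstract.
    """
    contacted = rng.sample(list(G.nodes), n_contacts)
    interviewed = {v for v in contacted if rng.random() < response_rate}
    frontier = list(interviewed)
    while frontier:
        v = frontier.pop()
        for u in G.neighbors(v):
            # Each interviewee may refer the researcher to a colleague.
            if u not in interviewed and rng.random() < referral_rate:
                interviewed.add(u)
                frontier.append(u)
    return len(interviewed) / G.number_of_nodes()


rng = random.Random(0)
G = nx.erdos_renyi_graph(n=500, p=0.01, seed=0)  # hypothetical elite network

reps = 200
# Rapport: a handful of carefully cultivated contacts, most of whom respond.
rapport = sum(simulate_coverage(G, 10, 0.8, 0.15, rng) for _ in range(reps)) / reps
# Cold email: many cheap contacts, few of whom respond.
cold = sum(simulate_coverage(G, 150, 0.1, 0.15, rng) for _ in range(reps)) / reps

print(f"rapport seeding:    mean coverage = {rapport:.2f}")
print(f"cold-email seeding: mean coverage = {cold:.2f}")
```

Which strategy wins in this toy setup depends entirely on the assumed rates; the point is only that seed count and network reach interact, which is the mechanism the paper examines experimentally.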

How Much Should You Trust Your Power Calculation Results? Power Analysis as an Estimation Problem


Author(s): Shiyao (Sean) Liu and Teppei Yamamoto

Discussant: Clayton Webb (University of Kansas)


With the surge of randomized experiments and the introduction of pre-analysis plans, today's political scientists routinely use power analysis when designing their empirical research. An often-neglected fact about power analysis in practice, however, is that it requires knowledge of the true values of key parameters, such as the effect size. Since researchers rarely possess definitive knowledge of these parameter values, they often rely on auxiliary information to make their best guesses. For example, survey researchers commonly use pilot studies to explore alternative treatments and question formats, obtaining effect size estimates to be used in power calculations along the way. Field experimentalists often use evidence from similar past studies to calculate the minimum required sample size for their proposed experiment. Common across these practices is the hidden assumption that uncertainty about these empirically obtained parameter values can safely be neglected for the purpose of power calculation.

In this paper, we show that such assumptions are often consequential and sometimes dangerous. We propose a conceptual distinction between two types of power analysis: empirical and non-empirical. We then argue that the former should be viewed as an estimation problem, such that its properties as an estimator (e.g., bias, sampling variance) can be formally quantified and investigated. Specifically, we analyze two commonly used variants of empirical power analysis, power estimation and minimum required sample size (MRSS) estimation, asking how reliable these analyses can be under scenarios resembling typical empirical applications in political science. The results of our analytical and simulation-based investigation reveal that these estimators are likely to perform rather poorly in most empirically relevant situations. We offer practical guidelines for empirical researchers on when (and when not) to trust power analysis results.
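
The instability is easy to see in a quick simulation. The sketch below is illustrative, not the authors' analysis: it assumes a two-arm design evaluated with statsmodels' TTestIndPower, and the true effect size, pilot size, planned sample size, and the floor placed on tiny pilot estimates are all assumptions made here for demonstration.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
analysis = TTestIndPower()

true_d = 0.5   # hypothetical true standardized effect size
pilot_n = 50   # hypothetical pilot size per arm
study_n = 64   # planned study size per arm (roughly 80% power at d = 0.5)
true_power = analysis.power(effect_size=true_d, nobs1=study_n, alpha=0.05)

est_power, est_mrss = [], []
for _ in range(2000):
    # Run a small pilot and estimate Cohen's d from it.
    treat = rng.normal(true_d, 1.0, pilot_n)
    ctrl = rng.normal(0.0, 1.0, pilot_n)
    pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
    d_hat = abs(treat.mean() - ctrl.mean()) / pooled_sd
    d_hat = max(d_hat, 0.1)  # floor tiny estimates so the solver stays stable
    # Plug the noisy estimate into standard power and MRSS calculations.
    est_power.append(analysis.power(effect_size=d_hat, nobs1=study_n, alpha=0.05))
    est_mrss.append(analysis.solve_power(effect_size=d_hat, power=0.8, alpha=0.05))

print(f"power at the true effect (n = {study_n} per arm): {true_power:.2f}")
print(f"pilot-based power estimates, 5th-95th percentile: "
      f"{np.percentile(est_power, 5):.2f} to {np.percentile(est_power, 95):.2f}")
print(f"pilot-based MRSS per arm, 5th-95th percentile: "
      f"{np.percentile(est_mrss, 5):.0f} to {np.percentile(est_mrss, 95):.0f}")
```

Even with 50 subjects per arm in the pilot, the plugged-in effect size is noisy enough that the resulting power and MRSS figures scatter widely across replications, which is the kind of estimation error the paper formalizes.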

