Thomas Leavitt (Columbia University)
Abstract: Difference-in-Differences (DID) is a popular method for design-based causal inference. Design-based methods typically quantify uncertainty in inferences from a sample to a population via a sampling mechanism and from observed to counterfactual outcomes via an assignment mechanism. The canonical DID design reflects uncertainty only in inferences from a sample to a population. Counterfactual uncertainty is difficult to represent since the DID design's validity depends not on an assignment mechanism, but rather on an assumption about the difference between treated and control groups' average changes in outcomes over time had the treatment not occurred. To correctly characterize counterfactual uncertainty, I generalize the DID design to a probability distribution on the full space of treated and control differences in counterfactual trends, which is informed by predictive models fit to pre-treatment data. The resulting causal estimates are shown to equal the estimand, in expectation, under two alternative conditions, and to be robust to violations of each condition when models fit to pre-treatment data satisfy an observable property of predictive accuracy. In contrast to the canonical DID design, this method depends on conditions weaker than parallel trends, does not depend on the scale of the outcome, and can account for both sampling and counterfactual uncertainty. The method is easily implemented via a simulation-based procedure and is illustrated with a study of the effect of terrorist attacks on subsequent voting in elections.