Virtual Room 1: Machine Learning

Date: Thursday, July 16, 2020, 12:00pm to 1:30pm


Chair: Suzanna Linn (Penn State University)


Co-Host: Anwar Mohammed (McMaster University)

Experimental Evaluation of Computer-Assisted Human Decision Making: Application to Pretrial Risk Assessment Instrument


Author(s): Kosuke Imai, Zhichao Jiang, James Greiner, Ryan Halen and Sooahn Shin

Discussant: Jonathan Mummolo (Princeton University)


Despite an increasing reliance on computerized decision making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by statistical models and machine learning algorithms are provided to human decision-makers in order to guide their decisions. The prevalence of such computer-assisted human decision making calls for the development of a methodological framework to evaluate its impact. Using the concept of principal stratification from the causal inference literature, we develop a statistical methodology for experimentally evaluating the causal impacts of machine recommendations on human decisions. We also show how to examine whether machine recommendations improve the fairness of human decisions. We apply the proposed methodology to the randomized evaluation of a pretrial risk assessment instrument (PRAI) in the criminal justice system. Judges use the PRAI when deciding which arrested individuals should be released and, for those ordered released, the corresponding bail amounts and release conditions. We analyze how the PRAI influences judges’ decisions and how it affects the gender and racial fairness of those decisions.
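
To make the experimental setup concrete, the following is a minimal Python sketch of the intention-to-treat comparison implied by such a randomized evaluation, not the principal-stratification methodology developed in the paper: whether the judge sees the PRAI recommendation is randomized, and release decisions are compared across arms. The variable names and simulated data are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
saw_prai = rng.integers(0, 2, size=n)              # 1 if the PRAI recommendation was shown (randomized)
released = rng.binomial(1, 0.5 + 0.05 * saw_prai)  # simulated release decisions, for illustration only

treated, control = released[saw_prai == 1], released[saw_prai == 0]
itt = treated.mean() - control.mean()              # intention-to-treat effect of providing the PRAI
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
print(f"ITT estimate: {itt:.3f} (SE {se:.3f})")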

Improving Variable Importance Measures


Author(s): Zenobia Chan and Marc Ratkovic

Discussant: Santiago Olivella (University of North Carolina at Chapel Hill)


Boosting and random forests are among the best off-the-shelf prediction tools. These methods offer a variable importance measure (VIM), a cumulative measure of the improvement in accuracy attributable to each variable over the course of the algorithm. We show that existing variable importance measures, as implemented, are biased, returning positive scores on irrelevant variables. Intuitively, if a variable is irrelevant but correlates with a relevant variable, this correlation can produce an improvement in performance that is misattributed to the irrelevant variable. We introduce a method that removes this bias. The method works by separating each predictor into a component explained by the other predictors (a “predicted variable”) and a component that is not (a “partialed-out variable”). We assess variable importance only through any improvement attributable to the latter. We prove the method returns a valid VIM, meaning it is mean-zero and asymptotically normal for irrelevant variables. Simulation evidence and applications to UCI data suggest the method also performs favorably relative to several existing machine learning methods in terms of predictive accuracy.
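
As a concrete illustration of the partialling-out idea, the following is a minimal Python sketch, not the authors' estimator: each predictor is replaced by its residual after regressing it on the other predictors, and importance is then credited only to that residual component. The simulated data, the linear partialling-out step, and the use of a random forest's impurity-based importance are all illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)                      # relevant predictor
x2 = 0.5 * x1 + rng.normal(size=n)           # irrelevant, but correlated with x1
y = x1 + rng.normal(size=n)
X = np.column_stack([x1, x2])

def partial_out(X):
    """Replace each column with its residual after regressing it on the other columns."""
    resid = np.empty_like(X)
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        fitted = LinearRegression().fit(others, X[:, j]).predict(others)
        resid[:, j] = X[:, j] - fitted
    return resid

# Naive impurity-based VIM: the irrelevant-but-correlated x2 picks up positive credit.
rf_naive = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Partialed-out VIM: x2's residual no longer carries information about y,
# so its share of the importance shrinks toward zero.
rf_adj = RandomForestRegressor(n_estimators=200, random_state=0).fit(partial_out(X), y)

print("naive VIM:        ", np.round(rf_naive.feature_importances_, 3))
print("partialed-out VIM:", np.round(rf_adj.feature_importances_, 3))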

