Abstract: In today’s data-driven society, human beings still make most critical decisions, yet they increasingly rely on recommendations produced by statistical and machine learning methods. Given the prevalence of this approach in many areas of society, including business, healthcare, and public policy, there is an urgent need to evaluate the impact of such computer-assisted human decision making. Using the potential outcomes framework of causal inference, we develop a statistical methodology for assessing how human decisions are influenced by computer-generated inputs. We apply the proposed methodology to the randomized evaluation of a pretrial risk assessment instrument (PRAI) in the criminal justice system. Judges in many states consult the PRAI when deciding whether an arrested individual should be released before trial and, if so, under what conditions. A key methodological challenge is inferring whether an arrestee would commit a new crime if released, which we formulate as a missing data problem. We analyze how this instrument influences judges’ decisions and derive an optimal PRAI that helps judges satisfy a range of possible fairness criteria.