Countries with more democratic political regimes experienced greater GDP loss and more deaths from COVID-19 in 2020. Using five different instrumental variable strategies, we find that democracy is a major cause of the wealth and health losses. This impact is global and is not driven by China and the US alone. A key channel for democracy’s negative impact is weaker and narrower containment policies at the beginning of the outbreak, not the speed of introducing policies.
Democracy is widely believed to contribute to economic growth and public health. However, we find that this conventional wisdom is no longer true and has even reversed; democracy has had persistent negative impacts on GDP growth since the beginning of this century. This finding emerges from five different instrumental variable strategies. Our analysis suggests that democracies cause slower growth through less investment, less trade, and slower value-added growth in manufacturing and services. For 2020, democracy is also found to cause more deaths from COVID-19.
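As a toy illustration of the instrumental-variable logic behind these findings, the sketch below computes a just-identified IV (Wald) estimate on simulated data and compares it to OLS. The variable names, coefficients, and data-generating process are all hypothetical illustrations, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: Z instruments for the
# endogenous regressor D (e.g. democracy), which affects outcome Y
# (e.g. GDP growth); U is an unobserved confounder of D and Y.
U = rng.normal(size=n)
Z = rng.normal(size=n)
D = 0.8 * Z + U + rng.normal(size=n)
beta = -0.5                        # true causal effect (made up)
Y = beta * D + U + rng.normal(size=n)

# Just-identified IV (Wald) estimate: Cov(Z, Y) / Cov(Z, D).
beta_iv = np.cov(Z, Y)[0, 1] / np.cov(Z, D)[0, 1]

# OLS is biased toward zero here because U moves D and Y together.
beta_ols = np.cov(D, Y)[0, 1] / np.var(D, ddof=1)
```

Because the instrument Z shifts D but is independent of the confounder U, `beta_iv` recovers the true effect while `beta_ols` does not.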
Algorithms produce a growing portion of decisions and recommendations both in policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic algorithms. Our estimator is shown to be consistent and asymptotically normal for well-defined causal effects. A key special case of our estimator is a high-dimensional regression discontinuity design. The proofs use tools from differential geometry and geometric measure theory, which may be of independent interest.
The practical performance of our method is first demonstrated in a high-dimensional simulation resembling decision-making by machine learning algorithms. Our estimator has smaller mean squared errors compared to alternative estimators. We finally apply our estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, where more than $10 billion worth of relief funding was allocated to hospitals via an algorithmic rule. The estimates suggest that the relief funding has little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias.
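The algorithm-as-natural-experiment idea can be sketched in its simplest special case: a one-dimensional regression discontinuity on simulated data, where a deterministic rule assigns funding when a score crosses a cutoff. The cutoff, effect size, and bandwidth below are illustrative assumptions, not the paper’s actual estimator or data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical setup: an algorithmic rule assigns relief funding
# exactly when a hospital's score crosses a cutoff at zero.
score = rng.uniform(-1, 1, size=n)
funded = (score >= 0.0).astype(float)
tau = 0.3                              # true effect at the cutoff (made up)
activity = 1.0 + 0.5 * score + tau * funded + rng.normal(scale=0.5, size=n)

# Comparing units in a small bandwidth on either side of the cutoff
# approximates the causal effect of funding for marginal hospitals.
h = 0.05
above = activity[(score >= 0) & (score < h)].mean()
below = activity[(score < 0) & (score >= -h)].mean()
tau_hat = above - below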
Centralized school assignment algorithms must distinguish between applicants with the same preferences and priorities. This is done with randomly assigned lottery numbers, nonlottery tie-breakers like test scores, or both. The New York City public high school match illustrates the latter, using test scores, grades, and interviews to rank applicants to screened schools, combined with lottery tie-breaking at unscreened schools. We show how to identify causal eﬀects of school attendance in such settings. Our approach generalizes regression discontinuity designs to allow for multiple treatments and multiple running variables, some of which are randomly assigned. Lotteries generate assignment risk at screened as well as unscreened schools. Centralized assignment also identiﬁes screened school eﬀects away from screened school cutoﬀs. These features of centralized assignment are used to assess the predictive value of New York City’s school report cards. Grade A schools improve SAT math scores and increase the likelihood of graduating, though by less than OLS estimates suggest. Selection bias in OLS estimates is egregious for Grade A screened schools.
What is the most statistically eﬀicient way to do oﬀ-policy optimization with batch data from bandit feedback? For log data generated by contextual bandit algorithms, we consider oﬀline estimators for the expected reward from a counterfactual policy. Our estimators are shown to have lowest variance in a wide class of estimators, achieving variance reduction relative to standard estimators. We then apply our estimators to improve advertisement design by a major advertisement company. Consistent with the theoretical result, our estimators allow us to improve on the existing bandit algorithm with more statistical conﬁdence compared to a state-of-theart benchmark.
Many centralized school admissions systems use lotteries to ration limited seats at oversubscribed schools. The resulting random assignment is used by empirical researchers to identify the eﬀect of entering a school on outcomes like test scores. I ﬁrst ﬁnd that the two most popular empirical research designs may not successfully extract a random assignment of applicants to schools. When do the research designs overcome this problem? I show the following main results for a class of data-generating mechanisms containing those used in practice: One research design extracts a random assignment under a mechanism if and practically only if the mechanism is strategy-proof for schools. In contrast, the other research design does not necessarily extract a random assignment under any mechanism.
In centralized school admissions systems, rationing at oversubscribed schools often uses lotteries in addition to preferences. This partly random assignment is used by empirical researchers to identify the eﬀect of entering a school on outcomes like test scores. This paper formally studies if the two most popular empirical research designs successfully extract a random assignment. For a class of data-generating mechanisms containing those used in practice, I show: One research design extracts a random assignment under a mechanism if and almost only if the mechanism is strategy-proof for schools. In contrast, the other research design does not necessarily extract a random assignment under any mechanism.