Moving beyond computational questions in digital ethics research
In the ever-expanding digital ethics literature, a number of researchers have been advocating a turn away from enticing technical questions—how to mathematically define fairness, for example—and towards a more expansive, foundational approach to the ethics of designing digital decision systems.
A 2018 paper by RODRIGO OCHIGAME, CHELSEA BARABAS, KARTHIK DINAKAR, MADARS VIRZA, and JOICHI ITO exemplifies this line of work. The authors dissect the three most-discussed categories in the digital ethics space—fairness, interpretability, and accuracy—and argue that current approaches to these topics may unwittingly amount to a legitimation system for unjust practices. From the introduction:
“To contend with issues of fairness and interpretability, it is necessary to change the core methods and practices of machine learning. But the necessary changes go beyond those proposed by the existing literature on fair and interpretable machine learning. To date, ML researchers have generally relied on reductive understandings of fairness and interpretability, as well as a limited understanding of accuracy. This is a consequence of viewing these complex ethical, political, and epistemological issues as strictly computational problems. Fairness becomes a mathematical property of classification algorithms. Interpretability becomes the mere exposition of an algorithm as a sequence of steps or a combination of factors. Accuracy becomes a simple matter of ROC curves.
In order to deepen our understandings of fairness, interpretability, and accuracy, we should avoid reductionism and consider aspects of ML practice that are largely overlooked. While researchers devote significant attention to computational processes, they often lack rigor in other crucial aspects of ML practice. Accuracy requires close scrutiny not only of the computational processes that generate models but also of the historical processes that generate data. Interpretability requires rigorous explanations of the background assumptions of models. And any claim of fairness requires a critical evaluation of the ethical and political implications of deploying a model in a specific social context.
Ultimately, the main outcome of research on fair and interpretable machine learning might be to provide easy answers to concerns of regulatory compliance and public controversy.”
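To make the critique concrete, the kind of reductive, purely computational fairness definition the authors have in mind can be sketched in a few lines. The example below (not from the paper; the data and names are hypothetical) computes "demographic parity," which reduces fairness to a single statistical property of a classifier's outputs across two groups:

```python
# A minimal sketch of a purely computational fairness metric:
# "demographic parity" asks only whether two groups receive
# favorable predictions at equal rates, ignoring the historical
# processes behind the data and the context of deployment.

def positive_rate(predictions, groups, group):
    """Share of members of `group` receiving a positive (1) prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    a, b = sorted(set(groups))
    return abs(positive_rate(predictions, groups, a)
               - positive_rate(predictions, groups, b))

# Hypothetical predictions (1 = favorable outcome) for two groups:
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 on this toy data
```

A gap of zero would certify the classifier "fair" by this metric—precisely the sort of easy, checkable answer the authors worry can stand in for a genuine ethical and political evaluation.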
- Among their references is an illuminating doctoral dissertation by Caley Horan examining the history of the American insurance industry after 1945. A chapter focusing on activist critiques of actuarial discrimination in redlining through the 1970s and 80s proves highly relevant for current debates over algorithmically enhanced discrimination. “The failure of activists to secure legislation and their demands for more precise statistical measurements in the field of insurance underwriting reflected the diminishing utility of rights-based frameworks in combating discrimination in insurance and signaled the triumph of a new, actuarial, understanding of political community as structured around the notion of risk… Once insurers were granted their own unique definition and claim to fairness separate from that of society writ large, the application of social legislation became nearly impossible.” Link.
- Another 2018 paper by four of the five listed co-authors proposes alternative uses for ML in addressing complex social problems. “We posit that machine learning should not be used for prediction, but rather to surface covariates that are fed into a causal model for understanding the social, structural, and psychological drivers of crime. We propose an alternative application of machine learning and causal inference away from predicting risk scores to risk mitigation.” Link.
New Researchers: TRANSIT(ORY) SHOCKS
A small traffic fine can have a huge negative effect
In his job market paper, Princeton PhD candidate STEVEN MELLO brings data to the oft-discussed issue of financial insecurity. Mello finds that “among the poorest quartile of drivers… the increases in financial strain induced by a 175 dollar fine are observationally similar to what would be predicted by a 900 dollar earnings decline.”
Mello also explains the policy implications, and how traffic fines can be self-defeating:
“Using back-of-the-envelope calculations and a standard willingness-to-pay framework, a conservative estimate of the welfare cost associated with the average ticket is about $500. Intuitively, this quantity has a policy-relevant interpretation. To the extent that welfare costs are greater than the revenue raised and public safety produced by an additional traffic citation, there is deadweight loss associated with ticketing. Governments who do not consider the outsized welfare costs of citations will generally choose to overpolice.”
Link to the full paper here.
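The deadweight-loss logic in the quoted passage can be sketched with back-of-the-envelope arithmetic. The $175 fine and the ~$500 conservative welfare-cost estimate come from the excerpts above; the dollar value assigned to public safety below is a made-up placeholder, since the paper excerpt does not supply one:

```python
# Back-of-envelope sketch of the ticketing deadweight-loss argument.
# Fine revenue and welfare cost are from the quoted passage;
# the public-safety value is a hypothetical placeholder.

FINE_REVENUE = 175   # revenue raised by the average ticket ($)
WELFARE_COST = 500   # conservative welfare-cost estimate per ticket ($)
SAFETY_VALUE = 100   # assumed value of deterrence/public safety ($)

def deadweight_loss(welfare_cost, revenue, safety_value):
    """Loss remaining when a ticket's welfare cost exceeds its combined benefits."""
    return max(0, welfare_cost - (revenue + safety_value))

print(deadweight_loss(WELFARE_COST, FINE_REVENUE, SAFETY_VALUE))  # 225
```

On these illustrative numbers, each additional ticket destroys more value than it creates—which is the sense in which a government that ignores the outsized welfare costs of citations will tend to overpolice.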
Each week we highlight great work from a graduate student. Have you read any excellent student work recently that you’d like to see shared here? Send it our way: firstname.lastname@example.org.
+ + +
- JFI will be at the Harvard Startup Career Fair on Friday February 15. Come talk to us if you’re in the area. Link.
- Guy Standing at the World Economic Forum reviews the results of a year in basic income experimentation. Link.
- From Mark Stelzner and Mark Paul at Equitable Growth, a December paper that models monopsony conditions’ effect on wages, taking into account “collective action and efficient contract bargaining.” Link.
- In the LRB: Paul Taylor on the trade in medical records. Link.
- Dylan Matthews at Vox covers the anti-poverty plans from five Democratic 2020 presidential candidates, with evaluation conducted by our friends at Columbia University’s Center on Poverty & Social Policy. Link.
- New work by Caroline Hoxby and Sarah Turner examines how we measure whether universities provide opportunities for low-income students: “We demonstrate that, with well-thought-out data analysis, it is possible to create benchmarks that actually measure what they are intended to measure. In particular, we present a measure that overcomes the deficiencies of the popular measures and is informative about all, not just low-income, students.” Link. ht Sidhya
- Tangentially related to the above, new research from Seth Zimmerman examines whether elite colleges promote low-income students’ achieving “top positions in the economy.” Link.
- An excellent post by Rahul Menon on the “neoliberal” character of British colonial economic policy. Link.
- For the Green New Deal Files: At the People’s Policy Project, Matt Bruenig with a paper proposal outlining how the Tennessee Valley Authority could be used to decarbonize electricity across the country. Link.
- More ed research: Benjamin Marx and Lesley Turner demonstrate the immense benefits of borrowing for community college attainment. Link. (Highlight thread from Professor Susan Dynarski. Link.)
- “Although demand for wax was high across Europe, production itself was unevenly spread. In northern and central Europe high medieval urbanization and settlement expansion came at the expense of favourable bee habitats. This meant that the areas with the greatest need for wax were under intense pressure to meet demand through local production. These regions were therefore especially attractive to merchants bringing wax from the Baltic hinterland, where large-scale sylvan wax production took place in forests which had not been felled to make room for arable fields. This high-quality wax became an important feature of Hanseatic trade, and a brisk westward trade brought this wax ‘de Polane’ to England and Bruges where eager buyers were readily found.” Alexandra Sapoznik on the impact of religious beeswax consumption on the medieval economy. Link.