Expanding the frame for formalizing fairness
In the digital ethics literature, there’s a consistent back-and-forth between attempts at designing algorithmic tools that promote fair outcomes in decision-making processes, and critiques that enumerate the limits of such attempts. A December paper by ANDREW SELBST, danah boyd, SORELLE FRIEDLER, SURESH VENKATASUBRAMANIAN, and JANET VERTESI—delivered at FAT* 2019—contributes to the latter genre. The authors build on insights from Science and Technology Studies and offer a list of five “traps”—Framing, Portability, Formalism, Ripple Effect, and Solutionism—that fair-ML work is susceptible to as it aims for context-aware systems design. From the paper:
“We contend that by abstracting away the social context in which these systems will be deployed, fair-ML researchers miss the broader context, including information necessary to create fairer outcomes, or even to understand fairness as a concept. Ultimately, this is because while performance metrics are properties of systems in total, technical systems are subsystems. Fairness and justice are properties of social and legal systems like employment and criminal justice, not properties of the technical tools within. To treat fairness and justice as terms that have meaningful application to technology separate from a social context is therefore to make a category error, or as we posit here, an abstraction error.”
In their critique of what is left out in the formalization process, the authors argue that, by “moving decisions made by humans and human institutions within the abstraction boundary, fairness of the system can again be analyzed as an end-to-end property of the sociotechnical frame.” Link to the paper.
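To make the critique concrete, here is an illustrative sketch (not from the paper) of the kind of subsystem-level metric the authors argue is incomplete on its own: a demographic parity difference, computed purely from model outputs with no reference to the surrounding social context. The function name and toy data are invented for illustration.

```python
# Illustrative sketch: demographic parity difference, a fairness metric
# computed only on a technical subsystem's outputs. The gap is the
# difference in positive-decision rates between the best- and
# worst-treated groups.

def demographic_parity_difference(decisions, groups):
    """decisions: parallel list of 0/1 model outputs; groups: group labels."""
    totals = {}
    for d, g in zip(decisions, groups):
        n, pos = totals.get(g, (0, 0))
        totals[g] = (n + 1, pos + d)
    rates = sorted(pos / n for n, pos in totals.values())
    return rates[-1] - rates[0]

# Toy example: group "a" is approved 3/4 of the time, group "b" 1/4.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(decisions, groups)  # 0.75 - 0.25 = 0.5
```

The point of the paper is precisely that a number like `gap` is a property of the subsystem, while fairness is a property of the social and legal system the subsystem sits inside.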
- A brand-new paper by HODA HEIDARI, VEDANT NANDA, and KRISHNA GUMMADI attempts to produce fairness metrics that look beyond “allocative equality,” and directly grapples with the above-mentioned “ripple effect” trap. The authors “propose an effort-based measure of fairness and present a data-driven framework for characterizing the long-term impact of algorithmic policies on reshaping the underlying population.” Link.
- In the footnotes to the paper by Selbst et al., a 1997 chapter by early AI researcher turned sociologist Phil Agre. In the chapter: institutional and intellectual history of early AI; sociological study of the AI field at the time; Agre’s departure from the field; discussions of developing a “critical technical practice.” Link.
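The long-term-impact idea in the Heidari, Nanda, and Gummadi paper can be gestured at with a toy simulation (this is our own sketch, not the authors’ framework): under a fixed score-threshold policy, rejected individuals who exert effort improve their scores, so the policy reshapes the population distribution over repeated rounds. All parameters are invented.

```python
# Toy simulation of a policy's long-term effect on a population.
# A fixed threshold accepts anyone with score >= threshold; each round,
# rejected individuals exert effort and gain a small score increment.
import random

def simulate(threshold=0.5, effort=0.05, rounds=10, n=1000, seed=0):
    rng = random.Random(seed)
    scores = [rng.random() for _ in range(n)]
    for _ in range(rounds):
        # below-threshold individuals improve slightly; others are unchanged
        scores = [s if s >= threshold else min(1.0, s + effort) for s in scores]
    # fraction of the (reshaped) population now accepted by the policy
    return sum(s >= threshold for s in scores) / n

initial_pass_rate = simulate(rounds=0)  # roughly half, for uniform scores
final_pass_rate = simulate()            # higher, after effort accumulates
```

Even this crude model shows why evaluating a policy on a static snapshot of the population misses the feedback the “ripple effect trap” warns about.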
New Researchers: SPATIAL NETWORKS
Questioning incentives for early cross-border migration
In his job market paper, DAVID ESCAMILLA-GUERRERO, PhD Candidate in Economic History at the London School of Economics, explores the earliest wave of migration from Mexico to the United States. Using records from 10,895 individual border crossings between 1906 and 1908, he analyzes the spatial distribution of the migration flow as well as the incentives driving it. The paper’s findings challenge the consensus that most immigrants came from the densely populated Bajío region, and dispute theories which identify the Mexico-US wage gap and the expansion of Mexico’s railway system as the primary incentives for migration. Instead, they indicate that most immigrants came from the border region, and that flows were consistently driven by social capital formation through immigrant networks:
“… for more than one hundred years, Mexican immigrant networks have been a self-perpetuating social asset that provides information and assistance, which reduces the costs and risks of migrating… Hence, since its beginnings, the Mexico-US migration flow has been influenced by forces that are commonly not analyzed by policy makers. An integral migratory policy should consider the different incentives behind the migration decision as well as their evolution along time and across Mexican regions.”
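The core spatial exercise behind a finding like this can be sketched in a few lines (the records and region names below are invented; the actual paper works with 10,895 digitized crossing records): tally crossings by origin region and compute each region’s share of the total flow.

```python
# Hypothetical sketch of spatial aggregation over border-crossing records:
# count crossings by origin region, then compute each region's share of
# the total migration flow. Data below is a toy sample, not the dataset.
from collections import Counter

crossings = [
    {"year": 1906, "origin": "border"},
    {"year": 1907, "origin": "border"},
    {"year": 1907, "origin": "bajio"},
    {"year": 1908, "origin": "border"},
    {"year": 1908, "origin": "central"},
]

counts = Counter(rec["origin"] for rec in crossings)
shares = {region: n / len(crossings) for region, n in counts.items()}
# in this toy sample the border region accounts for 3 of 5 crossings
```

Comparing such shares against the received wisdom (a Bajío-dominated flow) is the kind of test the paper’s individual-level records make possible.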
Each week we highlight great work from a graduate student, postdoc, or early-career professor. Have you read any excellent research recently that you’d like to see shared here? Send it our way: email@example.com.
- In a new paper titled “Universal Basic Income in the Developing World,” Abhijit Banerjee, Paul Niehaus, and Tavneet Suri discuss the perceived limitations of implementing a basic income through piecemeal, targeted approaches. The paper examines “what scholars know (and what they do not) about three questions: what recipients would likely do with the incremental income, whether this would unlock further economic growth, and the potential consequences of giving the money to everyone (as opposed to targeting it).” Link.
- Bryce Covert reports on the tragic impacts that work requirements have had on Medicaid recipients in Arkansas. Link. ht Steve
- “This paper shows that where there was once a sharp border… where labor law ended and antitrust began, there is now a considerable legal gray area.” Marshall Steinbaum on the erosion of antitrust law and its consequences for the bargaining power of labor. Link.
- Geoffrey Irving and Amanda Askell on the complexities of developing AI systems which reflect human values, and the need for greater cooperation between machine learning researchers and social scientists: “If we learn values by asking humans questions, we expect different ways of asking questions to interact with human biases in different ways… Although we have candidates for ML methods which try to learn from human reasoning, we do not know how they behave with real people in realistic situations.” Link.
- An interview with sociologist Saskia Sassen on the “Age of Extraction”: “There is a multiplication of sharp breaking points that can be thought of as systemic edges. Once crossed you are in a different space; it is not simply a less agreeable or livable zone, as might be the spaces of social exclusion. It is far more radical: you are out.” Link.
- A look at the twenty-nine mega-regions driving the global economy, analyzed by Richard Florida. Link.
- A recording of historian Lorraine Daston’s February SSRC Fellow Lecture: “Long before there were computers or even reliable calculating machines, there were algorithms, recipes, and other rigid rules. But for just as long, stretching back to ancient Greece and Rome and continuing through the Enlightenment of the eighteenth century, the rule-as-algorithm coexisted peacefully and fruitfully with another idea of a rule: the rule-as-pattern. For us, who live in the age of algorithms, this centuries-long cohabitation between the most rigid of rules—the algorithm to be followed to the letter—and the most supple of rules—the pattern or model to be imitated but not slavishly copied—seems paradoxical.” With responses from JFI Letter mainstays Helen Nissenbaum and Frank Pasquale. Link.
- Zeynep Tufekci responds to Mark Zuckerberg’s blog post announcing a new, privacy-focused direction for Facebook: “The plan, in effect, is to entrench Facebook’s interests while sidestepping all the important issues.” Link.
- A 2017 report by the Georgetown Center on Education and the Workforce states that selective colleges can afford to admit on average 20% more Pell Grant recipients. The report finds “that about 86,000 Pell Grant recipients score at or above the median on standard tests for selective colleges, but do not attend them.” Link.
- The Institute on Taxation and Economic Policy released data this week indicating that state and local taxes move national income from the poor to the rich. The report includes this striking figure: “45 states worsened inequality by taxing low-income households at a higher effective rate.” Link to the report, link to coverage in the Washington Post. ht Lauren
- “Given that there are almost no tenure-track jobs, the majority of the next generation of intellectuals—like my own generation—will probably have to look outside the university for employment, and policy making is a sphere that could benefit from the academic’s commitment to empirics, empathy, and contingency.” University of Washington’s Daniel Bessner on the “public intellectual” in a time of degraded academia. Link.
- In May of 2018, the Federal Reserve Bank of St. Louis held a conference on returns to a college education. The papers presented consider the value of college premiums, and outline how family income dynamics and wealth gaps shape these calculations. Link. Click here to see related research by JFI Director of Research Sidhya Balakrishnan and Senior Fellow Barry Cynamon.
- Neil Cummins studies English inequality with 60 million death records: “This paper analyses a newly constructed individual level dataset of every English death and probate from 1892-2016. This analysis clearly shows that the 20th century’s ‘Great Equalization’ of wealth stalled in mid-century. Despite the large declines in the wealth share of the top 1%, from 73% to 20%, the median English person died with almost nothing throughout. All changes in inequality after 1950 involve a reshuffling of wealth within the top 30%.” Link.
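The two summary statistics the Cummins quote rests on, a top-1% wealth share and a median estate, are easy to state precisely. Here is an illustrative sketch with invented data (not the paper’s 60 million records), showing how a population can combine extreme concentration at the top with a near-zero median:

```python
# Illustrative sketch: top-1% wealth share and median estate value
# from a list of death-record wealth values. Data is invented.

def top_share(wealth, frac=0.01):
    """Share of total wealth held by the richest `frac` of decedents."""
    w = sorted(wealth, reverse=True)
    k = max(1, int(len(w) * frac))
    return sum(w[:k]) / sum(w)

def median(wealth):
    w = sorted(wealth)
    n = len(w)
    mid = n // 2
    return w[mid] if n % 2 else (w[mid - 1] + w[mid]) / 2

# Toy population: 99 people die with nothing, one with 100 units.
estates = [0.0] * 99 + [100.0]
# top_share(estates) -> 1.0; median(estates) -> 0.0
```

The toy numbers echo the paper’s qualitative point: a falling top-1% share and a median decedent with almost nothing are not contradictory, since the “equalization” can be a reshuffling within the upper part of the distribution.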