January 30, 2020

Analysis

The Long History of Algorithmic Fairness

Fair algorithms from the seventeenth century to the present

This text by Rodrigo Ochigame is a companion to our recent interview with Lorraine Daston.

As national and regional governments form expert commissions to regulate “automated decision-making,” a new corporate-sponsored field of research proposes to formalize the elusive ideal of “fairness” as a mathematical property of algorithms and their outputs. Computer scientists, economists, lawyers, lobbyists, and policy reformers wish to hammer out, in advance or in place of regulation, algorithmic redefinitions of “fairness” and such legal categories as “discrimination,” “disparate impact,” and “equal opportunity.”1 More recently, salutary critical voices, warning against the limits of such proposals, have proliferated both within and without the loosely networked field.2

But general aspirations to fair algorithms have a long history. In these notes, I recount some past attempts to answer questions of fairness through the use of algorithms. My purpose is not to be exhaustive, but to suggest some major transformations in those attempts, pointing along the way to the scholarship that has informed my account.

Fair algorithms since the seventeenth century

In a broad sense, “algorithmic fairness” may refer to any use of algorithms that seeks to achieve fairness in the resolution of social disputes. “Algorithm” derives from the late medieval Latin “algorismus,” from the name of the Islamic mathematician Al-Khwārizmī, whose manuscripts in Arabic described the Indian system of arithmetic. The word’s meaning eventually developed into the technical definition employed in today’s computer science: “any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.”3

According to linguist Anna Wierzbicka, “fairness” may denote a set of “cultural assumptions” regarding “the regulation of [human] life effected by stated and unstated rules of interaction,” rules that most interactants see as “generally applicable” and “reasonable.” This meaning in modern English developed in tandem with Anglo-American political philosophy, and “can be seen as related to the post-Enlightenment move away from metaphysics and to the shift from an ethics based on religion to a ‘procedural morality’ and to an ethics based on ‘reason,’ ‘social cooperation,’ and ‘each participant’s rational advantage.’”4

In the broad sense opened up by these definitions, the idea of fairness-by-algorithm dates back at least to the seventeenth century. In the narrow sense produced in recent scholarship, algorithmic fairness is often understood to refer specifically to the algorithmic risk classification of people, involving some mathematical criterion of fairness as a complementary consideration or constraint to the usual optimization of utility. My notes below move roughly chronologically, from the broader idea to the narrower concept.

Since ancient times, moral theorists have formulated conceptions of justice on the basis of mathematical ideas. Aristotle discussed distributive and corrective justice in terms of geometrical and arithmetical proportion respectively.5 But it was only in the early modern period that more systematic efforts to use mathematical calculations to resolve political conflicts about justice and fairness emerged. For example, in seventeenth-century England, competing methods for calculating the “present value” of future property sought to answer a question of fairness: how much a piece of property to be exchanged in the future ought to be worth in the present. Historian William Deringer reports that as early as the 1620s, mathematicians developed “present value” tables for determining fair terms for certain agricultural leases, especially of land owned by the Church of England.6 In 1706, such mathematical calculations became relevant to a major political question: how to balance the terms of the potential constitutional union between England and Scotland. The debate centered on how to determine the fair value of a proposed monetary “Equivalent” that was to be paid up front by the English government to Scottish stakeholders in compensation for higher future tax rates. On the basis of a calculation of compound-interest discounting, the English agreed to pay £398,085 and 10 shillings.7 Multiple factors enabled this puzzling possibility of resolving an important political conflict through an elaborate mathematical calculation, including the persistent legacy of Aristotelian ethics and the weakening of the Church as an institution of public trust.
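
To give a concrete sense of the arithmetic behind such disputes, here is a minimal sketch of compound-interest discounting, the operation underlying “present value” tables and the calculation of the Equivalent. The figures and the discount rate below are purely illustrative assumptions, not taken from Deringer’s sources.

```python
def present_value(future_sum: float, annual_rate: float, years: int) -> float:
    """Discount a sum due `years` from now back to the present,
    assuming compound interest: PV = F / (1 + r) ** t."""
    return future_sum / (1 + annual_rate) ** years

# Illustrative only: 100 pounds due in 7 years, discounted at 6 percent per year.
print(round(present_value(100, 0.06, 7), 2))  # approximately 66.51
```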

In the Enlightenment, ethical questions were central to the initial articulation of probability theory and to the early imagination of calculating machines.8 Blaise Pascal’s calculations of equivalent expectations, which formed the basis for subsequent ideas of probability, were motivated by questions of fairness in apportionment—in particular, the fair division of the stakes in gambling or of the profits expected from business contracts with uncertain outcomes, like those in insurance and shipping. Even Pascal’s famous correspondence with Pierre de Fermat in 1654, which reputedly debuted the mathematics of probability, discussed the question of how to fairly divide the stakes in a game of chance that was interrupted.9 Statistician and historian Alain Desrosières writes that Pascal, in proposing his mathematical method of adjudication, not only “borrowed from the language of jurists” but also “created a new way of keeping the role of arbiter above particular interests, a role previously filled by theologians.”10 Similarly, Gottfried Leibniz, who designed an early calculating machine, sought to develop a universal calculus of reason based on an unambiguous formal language—with the hope that it would resolve moral disputes:

But to go back to the expression of thoughts through characters, this is my opinion: it will hardly be possible to end controversies and impose silence on the sects, unless we recall complex arguments to simple calculations, [and] terms of vague and uncertain significance to determinate characters… Once this has been done, when controversies will arise, there will be no more need of a disputation between two philosophers than between two accountants. It will in fact suffice to take pen in hand, to sit at the abacus, and—having summoned, if one wishes, a friend—to say to one another: let us calculate.11
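
To make the Pascal–Fermat question concrete, here is a minimal sketch of the “problem of points”: dividing the stakes of an interrupted game in proportion to each player’s chance of winning had play continued. The game below, decided by fair 50/50 rounds, is a simplified illustration rather than a reconstruction of the 1654 correspondence.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def prob_a_wins(a_needs: int, b_needs: int) -> float:
    """Probability that player A wins when A still needs `a_needs` points,
    B still needs `b_needs` points, and each round is a fair 50/50 toss."""
    if a_needs == 0:
        return 1.0
    if b_needs == 0:
        return 0.0
    return 0.5 * (prob_a_wins(a_needs - 1, b_needs) + prob_a_wins(a_needs, b_needs - 1))

# If the game is interrupted with A needing 2 more points and B needing 3,
# A's fair share of the stakes works out to 11/16 of the pot.
print(prob_a_wins(2, 3))  # 0.6875
```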

In the eighteenth century, probabilists such as the Marquis de Condorcet and Pierre-Simon Laplace extended the dreams of Pascal and Leibniz by developing a probabilistic approach to the moral sciences, including jurisprudence. As historian Lorraine Daston documents, these “classical” probabilists borrowed legal language to describe their mathematical formalisms, and tried to reform judicial practice in domains as diverse as criminal law and contract law.12 A remarkable case of the legal adoption of mathematical probability was the transformation of “contractual fairness” in English law beginning in the second half of the eighteenth century, which led to the development in 1810 of a rule under which “contracts for the sale of reversions could be rescinded solely on the ground of any deviation from the fair price.” Indeed, as historian Ciara Kennefick observes, the mathematics of early probability and the law of contractual fairness in equity influenced each other.13

The attempts to mathematize the moral sciences and to develop probabilities of “testimony” and “judgment” eventually faced various criticisms in the early nineteenth century. Some critics focused on questions of accuracy; others had deeper objections. As Daston explains, the stronger criticisms “reflected a profound shift in assumptions concerning the nature of the moral phenomena to which the probabilists sought to apply their calculus rather than any distrust of mathematics per se.” On the ground of “good sense,” a new wave of social scientists rejected both the associationist psychology of the moral sciences and the reductionist morality of the classical probabilists. The probabilities of testimony and judgment disappeared from standard texts and treatises on probability. The assessment of courtroom evidence, as well as the design of tribunals, became qualitative matters once again. By the mid-nineteenth century, the probabilistic approach to the moral sciences had fallen out of fashion.14 Nevertheless, probabilistic and statistical calculations continued to ground many kinds of normative claims about society. At the end of the Napoleonic era, the sudden publication of large amounts of printed numbers, especially of crimes, suicides, and other phenomena of deviancy, revealed what philosopher Ian Hacking calls “law-like statistical regularities in large populations.” These regularities in averages and dispersions gradually undermined previously dominant beliefs in determinism, and the idea of human nature was displaced by “a model of normal people with laws of dispersion.”15

Actuarialism and racial capitalism

Insurance was a central domain for the institutionalization of aspirations to algorithmic fairness. A crucial episode was the emergence of the modern concept of “risk.” In the nineteenth-century United States, risk became part of everyday language along with the rise of institutions of corporate risk management. Previously, the term had referred simply to the commodity exchanged in a marine insurance contract. The transformation of the concept of risk happened largely in antebellum legal disputes over marine insurance liability for slave revolts in the Atlantic Ocean. In an illustrative case, the Louisiana Supreme Court considered the question of insurance liability after black slaves on the Creole ship, en route from Norfolk to New Orleans in 1841, mounted a successful insurrection and sailed to freedom in the Bahamas. The Court ruled that the successful revolt voided the insurance contract. The Court’s argument rested on an incipient link between freedom, self-ownership, and risk. As historian Jonathan Levy puts it, a slave’s “fate belonged to his or her master and the ‘risk’ commodified that destiny as the master’s private property.” But when the Creole slaves revolted successfully, they gained their freedom and thereby repossessed their own personal “risks.”16 This idea of the personal assumption of risk later enabled the practice of individualized risk classification, for example in life insurance.

Risk classification soon surfaced controversies over racial discrimination: in 1881, life insurance corporations started to charge differential rates on the basis of race. Unlike cooperative insurers, whose policyholders paid the same rates regardless of age or health or race, the corporate insurance firms Prudential and Metropolitan imposed penalties on African American policyholders. When civil rights activists challenged this policy, the corporations claimed differences in average mortality rates across races as justification. According to historian Dan Bouk, in 1884, Massachusetts state representative Julius C. Chappelle—an African American man born in antebellum South Carolina—challenged the fairness of the policy and proposed a bill to forbid it. The bill’s opponents invoked statistics of deaths, but Chappelle and his allies reframed the issue in terms of the future prospects of African Americans, emphasizing their potential for achieving equality. This vision for the future prevailed over the opposition’s fatalistic statistics, and the bill passed. After the victory in Massachusetts, similar bills passed in Connecticut, Ohio, New York, Michigan, and New Jersey.17 In the United States, racial discrimination has been not only an effect of institutional policies based on risk classification, but often their very motivation.

In the nineteenth century, statistical claims were typically based on population averages, since the major tools of modern mathematical statistics—correlation and regression—emerged only just before the twentieth century. These tools, developed by eugenicists Francis Galton and Karl Pearson, facilitated the analysis of differences between individuals.18 Throughout the twentieth century, mathematical statistics transformed the human sciences, as well as the operations of capitalist firms and states in diverse domains besides insurance. The rest of my notes focus on systems of risk classification, which are often called “actuarial” because of their origins in insurance. (Beyond actuarial domains, early-twentieth-century aspirations to fairness-by-algorithm were varied, ranging from the emergence of cost-benefit analysis in the U.S. Army Corps of Engineers, documented by historian Theodore Porter,19 to debates on congressional redistricting and partisan gerrymandering, studied by historian Alma Steingart.20)

After World War II, mathematical models of optimization, influenced by the theory of “expected utility” developed by mathematician John von Neumann and economist Oskar Morgenstern, expanded the uses of statistical methods in the human sciences and of actuarial systems in capitalist institutions.21 Intellectually, this process was part of what Daston and her colleagues describe as the emergence of a “Cold War rationality” characterized by rigid rules, distinct from previous modes of Enlightenment reason that had been grounded in human judgment and mindful deliberation.22 Politically, the expansion of actuarialism is sometimes linked to postwar neoliberalism; sociologists Marion Fourcade and Kieran Healy write that “in the neoliberal era market institutions increasingly use actuarial techniques to split and sort individuals into classification situations that shape life-chances.”23 The employment of digital computers is only part of this history, as statistical/actuarial computations were performed primarily by human workers, often women, until the late twentieth century—in line with larger patterns of gendered labor mapped by historians of computing like Jennifer Light, Nathan Ensmenger, and Mar Hicks.24

Although experiments with individualized risk classification in U.S. police departments and credit bureaus started in the first half of the twentieth century, these actuarial methods became pervasive only in the second half. According to historian Josh Lauer, the widespread adoption of statistical credit scoring in consumer credit bureaus began in the 1960s, when calculations of creditworthiness were marketed as a replacement for evaluations that still relied largely on reports of “character” based on personal interviews.25 Social scientist Martha Poon demonstrates that in the 1970s, the seller of credit “scorecards” Fair, Isaac & Company deployed a discourse of statistical objectivity to avoid a proposed extension of anti-discrimination legislation that would ban the use of such scorecards, and to establish statistical scoring as the appropriate method of demonstrating compliance with the definition of fairness in the law.26

In the penal system, early trials of actuarial risk assessment began in the 1920s and 1930s, when Chicago School sociologists proposed the use of regression analysis for parole decisions in Illinois. However, as critical theorist Bernard Harcourt shows, these actuarial methods started to diffuse nationwide only in the 1980s, as part of a broader set of policies that operationalized pretrial and sentencing decisions, implementing a penal strategy of “selective incapacitation.”27 Although the relationship between actuarialism and mass incarceration is complex, it is worth noting that the progressive adoption of actuarial methods coincides with the dramatic increase of the U.S. prison population since the 1980s and with the penological shift towards targeted interventions of crime control and risk management, away from midcentury policies of welfare provision.28

In the 1970s, at the height of controversies surrounding redlining, U.S. civil rights and feminist activists argued that risk classification in the pricing of insurance was unfair and discriminatory. To protect itself, the insurance industry disseminated the concept of “actuarial fairness”: the idea that each person should pay for her own risk. The industry promoted this anti-redistributive concept of actuarial fairness in campaigns and advertisements, trying to convince Americans that risk classification in private insurance was inherently “fair”—unmarked by the kind of discrimination that had been outlawed with the Civil Rights Act. As historian Caley Horan discusses in forthcoming work, the industry posed fairness as a complex technical matter beyond the grasp of activists, and risk classification as an apolitical process based on objective calculations. By the early 1980s, the industry’s strategy of promoting actuarial fairness had effectively defeated the efforts of civil rights and feminist activists to pass federal unisex insurance legislation.29

The moral crisis at present

We are in the midst of another moral crisis of actuarial systems. This crisis is broader in scope, since it is framed in more general terms following commercial rebrandings: “algorithms,” “big data,” “artificial intelligence,” “automated decision-making,” and so on. It is also greater in magnitude, since actuarial/algorithmic systems have become ubiquitous in the age of digital computing, along with the rise of a highly instrumental approach to statistics and machine learning that historian Matthew Jones terms “data positivism.”30 Once again, civil rights and feminist activists are advancing arguments to expose discrimination and injustice. Again, there are proposals for legal regulation. And again, corporations are hard at work to evade and contain regulatory efforts, and to prescribe technical definitions of fairness for strategic purposes. Public discourse is saturated with reformist projects and designs that “aim to fix racial bias but end up doing the opposite”—as sociologist Ruha Benjamin observes.31

Nevertheless, there are key differences in the technical proposals at play. In the 1970s, proponents of “actuarial fairness” simply equated it with predictive accuracy; they posed fairness as equivalent to the optimization of utility in risk classification. Today, proponents of “algorithmic fairness” tend to define fairness and utility as distinct, often competing, considerations. Fairness is generally considered a complementary consideration or constraint to the optimization of utility, and proponents often speak of “trade-offs” between fairness and utility. This distinction responds to a widespread recognition that the conventional optimization of utility in actuarial systems—typically the maximization of profit or the minimization of risk—can be inherently unfair or discriminatory. The emerging debate on algorithmic fairness may be read as a response to this latest moral crisis of computationally managed racial capitalism.32
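
A minimal sketch may clarify the contrast. In the toy model below, a lender approves applicants whose predicted risk falls below a group-specific threshold. The “utility-only” rule picks whatever thresholds maximize expected profit; the “fairness-constrained” rule maximizes the same profit subject to keeping approval rates across two groups nearly equal, one of many competing mathematical criteria in the literature. The simulated data, the utility function, the parity tolerance, and the grid search are all illustrative assumptions, not anyone’s canonical definition.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical data (illustrative only): group 1 receives systematically
# higher predicted risk scores than group 0.
n = 5000
groups = rng.integers(0, 2, size=n)
scores = np.clip(rng.normal(0.35 + 0.2 * groups, 0.15), 0.0, 1.0)
defaults = rng.uniform(0.0, 1.0, size=n) < scores  # simulated repayment outcomes

def decide(t0: float, t1: float) -> np.ndarray:
    """Approve applicants whose score falls below their group's threshold."""
    return np.where(groups == 0, scores < t0, scores < t1)

def utility(approved: np.ndarray) -> int:
    """Toy utility: +1 per approved applicant who repays, -2 per approved default."""
    return int((approved & ~defaults).sum()) - 2 * int((approved & defaults).sum())

def parity_gap(approved: np.ndarray) -> float:
    """Difference in approval rates between the two groups (one notion of 'fairness')."""
    return abs(approved[groups == 0].mean() - approved[groups == 1].mean())

grid = np.linspace(0.0, 1.0, 41)
candidates = [(t0, t1, decide(t0, t1)) for t0, t1 in product(grid, grid)]

# Utility alone: pick whichever thresholds maximize profit.
best_util = max(candidates, key=lambda c: utility(c[2]))

# Fairness as a constraint: maximize profit subject to near-equal approval rates.
feasible = [c for c in candidates if parity_gap(c[2]) <= 0.02]
best_fair = max(feasible, key=lambda c: utility(c[2]))

print("utility-only thresholds:", best_util[:2], "profit:", utility(best_util[2]))
print("parity-constrained thresholds:", best_fair[:2], "profit:", utility(best_fair[2]))
```

In this toy setting the constrained rule gives up some profit relative to the utility-only rule, which is the sense of “trade-off” invoked in the current debate.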

To return to the semantic analysis from the beginning of my notes, debates over the meaning of “fairness” reveal a tension between the stated and unstated rules of interaction that constitute it. When corporate lobbyists and researchers try to prescribe a definition of fairness, they keep some issues unstated while pretending that what is plainly stated is exhaustive of the problems under discussion. Hence proponents of “actuarial fairness” in the 1970s, sponsored by insurance firms, framed the problems of discrimination and injustice as reducible to the stated issue of inaccurate prediction, while leaving unstated the political struggles over the model of private insurance and the use of risk classification to begin with. Today’s champions of “algorithmic fairness,” sometimes sponsored by Silicon Valley firms, tend to frame discrimination and injustice as reducible to the stated distinction between the optimization of utility and other mathematical criteria, while leaving unstated the ongoing political struggles over legally enforceable restrictions on actuarial systems and on new technologies such as facial recognition and automated targeting in drone warfare.

Algorithmic fairness should be understood not as a novel invention, but rather as an aspiration that reappears persistently in history. Many iterations have appeared throughout the modern period, each involving efforts to prescribe certain algorithms as inherently fair solutions to political conflicts. Each time, these efforts seek to reform judicial practice and to incorporate such prescriptions into the law. Yet, each time, affected people organize collective resistance against the prescribed definitions of fairness. The conflicts and definitions are increasingly complex, as each iteration has inherited ever more assumptions from the last. At present, a critical interrogation of those entrenched assumptions is urgently necessary. And the most consequential assumptions are those that the profiteers of racial capitalism prefer to keep unstated.


  1. For example, see: Cynthia Dwork et al., “Fairness through Awareness,” in Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS ’12 (Cambridge, MA: Association for Computing Machinery, 2012), 214–226; Jon Kleinberg et al., “Algorithmic Fairness,” AEA Papers and Proceedings 108 (May 2018): 22–27. 
  2. For example, see: Anna Lauren Hoffmann, “Where Fairness Fails: Data, Algorithms, and the Limits of Antidiscrimination Discourse,” Information, Communication & Society 22, no. 7 (June 7, 2019): 900–915; Bogdan Kulynych et al., “POTs: Protective Optimization Technologies,” in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20 (Barcelona: Association for Computing Machinery, 2020), 177–188. 
  3. Thomas H. Cormen et al., Introduction to Algorithms, 2nd ed. (Cambridge, MA: MIT Press, 2001), 5. 
  4. Anna Wierzbicka, English: Meaning and Culture (Oxford: Oxford University Press, 2006), 152–54. 
  5. M. F. Burnyeat, “Plato on Why Mathematics Is Good for the Soul,” Proceedings of the British Academy 103 (2000): 1–81. 
  6. William Deringer, “Just Fines: Mathematical Tables, Ecclesiastical Landlords, and the Algorithmic Ethic circa 1628” (unpublished manuscript, 2020). 
  7. William Deringer, Calculated Values: Finance, Politics, and the Quantitative Age (Cambridge, MA: Harvard University Press, 2018), 79–114. 
  8. Matthew L. Jones, The Good Life in the Scientific Revolution: Descartes, Pascal, Leibniz, and the Cultivation of Virtue (Chicago: University of Chicago Press, 2006). 
  9. Ian Hacking, The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference (London: Cambridge University Press, 1975). 
  10. Alain Desrosières, The Politics of Large Numbers: A History of Statistical Reasoning, trans. Camille Naish (Cambridge, MA: Harvard University Press, 1998), 45–66. 
  11. Quoted in Maria Rosa Antognazza, Leibniz: An Intellectual Biography (Cambridge: Cambridge University Press, 2009), 244. 
  12. Lorraine Daston, Classical Probability in the Enlightenment (Princeton: Princeton University Press, 1988). 
  13. Ciara Kennefick, “The Contribution of Contemporary Mathematics to Contractual Fairness in Equity, 1751–1867,” The Journal of Legal History 39, no. 3 (September 2, 2018): 307–39. 
  14. Daston, Classical Probability, 296–369. 
  15. Ian Hacking, The Taming of Chance (Cambridge: Cambridge University Press, 1990). 
  16. Jonathan Levy, Freaks of Fortune: The Emerging World of Capitalism and Risk in America (Cambridge, MA: Harvard University Press, 2012). 
  17. Daniel B. Bouk, How Our Days Became Numbered: Risk and the Rise of the Statistical Individual (Chicago: University of Chicago Press, 2015), 31–53. 
  18. Theodore M. Porter, The Rise of Statistical Thinking, 1820–1900 (Princeton: Princeton University Press, 1986), 270–314. 
  19. Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton: Princeton University Press, 1995), 148–189. 
  20. Alma Steingart, “Democracy by Numbers,” Los Angeles Review of Books, August 10, 2018. 
  21. Paul Erickson, The World the Game Theorists Made (Chicago: University of Chicago Press, 2015). 
  22. Paul Erickson et al., How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality (Chicago: University of Chicago Press, 2013). 
  23. Marion Fourcade and Kieran Healy, “Classification Situations: Life-Chances in the Neoliberal Era,” Accounting, Organizations and Society 38, no. 8 (November 1, 2013): 559–72. 
  24. Jennifer S. Light, “When Computers Were Women,” Technology and Culture 40, no. 3 (1999): 455–83; Nathan Ensmenger, The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise (Cambridge, MA: MIT Press, 2010); Mar Hicks, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing (Cambridge, MA: MIT Press, 2018). 
  25. Josh Lauer, Creditworthy: A History of Consumer Surveillance and Financial Identity in America (New York: Columbia University Press, 2017). 
  26. Martha A. Poon, “What Lenders See: A History of the Fair Isaac Scorecard” (PhD diss., UC San Diego, 2012), 167–214. 
  27. Bernard E. Harcourt, Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age (Chicago: University of Chicago Press, 2007). 
  28. David Garland, The Culture of Control: Crime and Social Order in Contemporary Society (Chicago: University of Chicago Press, 2001). 
  29. Caley Horan, Insurance Era: The Postwar Roots of Privatized Risk, forthcoming. 
  30. Matthew L. Jones, “How We Became Instrumentalists (Again): Data Positivism since World War II,” Historical Studies in the Natural Sciences 48, no. 5 (November 2018): 673–84. 
  31. Ruha Benjamin, Race after Technology: Abolitionist Tools for the New Jim Code (Medford: Polity, 2019). 
  32. On the term “racial capitalism,” see Cedric J. Robinson, Black Marxism: The Making of the Black Radical Tradition (Chapel Hill: University of North Carolina Press, 2000). 
Further Reading
Data as Property?

On the problems of propertarian and dignitarian approaches to data governance.

Historicizing the Self-Evident

An interview with Lorraine Daston

Direct Effects

How should we measure racial discrimination?