September 26, 2019

Interviews

Optimizing the Crisis

Software that structures increasingly detailed aspects of contemporary life is built for optimization. Services like Uber, Airbnb, and Facebook require mapping the world in a way that is computationally legible, and translating the messy world into one that makes sense to a computer is imperfect. Even under ideal conditions, optimization systems—constrained, more often than not, by the imperatives of profit-generating corporations—are designed to ruthlessly maximize one metric at the expense of others. When these systems are optimizing over large populations, some people lose out in the calculation.

Official channels for redress offer little help: alleviating out-group concerns is by necessity counter to the interests of the optimization system and its target customers. Like someone who lives in a flight path but has never bought a plane ticket complaining about the noise to an airline company, those who bear the collateral damage of optimization have little leverage over the system provider, unless the law can be wielded against it. Beyond the time-intensive and uncertain path of traditional advocacy, what recourse is available for those who find themselves in the path of optimization?

In their 2018 paper POTs: Protective Optimization Technologies (updated version soon forthcoming at this same link), authors Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso, and Seda Gürses offer some answers. Eschewing the dominant frameworks used to analyze and critique digital optimization systems, the authors offer an analysis that illuminates fundamental problems with both optimization systems and the proliferating literature that attempts to solve them.

POTs, as an analytical framework and a technology, suggest that the inevitable assumptions, flaws, and rote nature of optimization systems can be exploited to produce “solutions that enable optimization subjects to defend from unwanted consequences.” Despite their overbearing nature, optimization systems typically require some degree of user input; POTs use this as a wedge for individuals and groups marginalized by the optimization system to influence its operation. In so doing, POTs find a way to restore what optimization seeks to hide, revealing that what gets laundered as a technical problem is actually a political one.

Below, we speak with Seda Gürses and Bekah Overdorf, two members of the POTs team, who discuss the definition of an optimization system, the departures the POTs approach makes from the digital ethics literature, and the design and implementation of POTs in the wild.

An interview with Seda Gürses and Bekah Overdorf

Francis Tseng: How did each of you come to the POTs project?

Seda Gürses: The POTs project came out of work I was doing with Martha Poon and Joris Van Hoboken looking at the political economy of software infrastructures. Joris and I had been trying to understand how the software industry moved from what we call shrink-wrapped software in the 90s to what could be called the service-oriented architectures of the present, and from the “waterfall” to “agile” models of software development. Through our research, it became clear that these newer cloud-based services are distinct in their underlying logic from earlier information and communication technologies. Earlier systems aspired to enable knowledge-making and gathering, were designed to collect, process, store, and communicate information, and raised concerns primarily around surveillance. Newer systems use feedback from how users or environments react to them to optimize for various goals like software features, the organization of engineering teams, the use of resources, and so on. What is distinct about these new systems is not only that they have this feedback mechanism, but that they adhere to a logic of optimization—the goal is not simply to know things, but to be able to show that you can optimize behavior and environments for a certain kind of profit interest, broken down to various metrics.

From there, the question became: if the issue at hand isn’t knowledge per se—meaning that the associated risk is not just surveillance—what are the kinds of new risks and harms posed by optimization systems? We began by building on literature from AI safety, and we thought, well, these risks and harms are already here—what if we start listing them and thinking about technologies that could respond to them and protect people? The key thing is that protection here, unlike with privacy, is not just for individuals, but for groups of people, collectives of people and their environments. Where privacy gets you to focus on personal data and the profiling of individuals, optimization gets you thinking about how groups of people and entire geographies are optimized for profit.

Bekah Overdorf: My background is generally in security and privacy: applying machine learning to attack private systems, and studying how machine learning can be used first to attack a system, and then to defend that system against attacks. I came into POTs from this angle, through talking about how attacks on machine learning systems could be conceived of as positive interventions in a given system. We have three more members on our team: Ero Balsa, who works on obfuscation; Bogdan Kulynych, who studies adversarial machine learning; and Carmela Troncoso, who works on privacy and cryptography.

That’s kind of how POTs came to be what it is now, and what the team looks like.

ft: To situate the discussion, could you describe in your terms what an optimization system is, as it relates to POTs?

sg: We define optimization systems as systems that sense and manipulate the world—that co-create the world—for the purpose of extracting value.

The high-level example I give during talks is that, a long time ago, when you went into a shop and bought a shrink-wrapped box of software, for example Microsoft Word, you’d install it on your machine, and any usage metrics and files you created just stayed on your machine. When software was released, the developers had to freeze and maintain a static version of the binary, and the way the software was being used wasn’t constantly communicated back to Microsoft’s servers. With something like Google Docs—or really any other service you want to imagine—what you have is every interaction being metricized continuously, and those interactions being used to decide how to modify that service.

bo: It goes beyond the update cycle changing. The feedback mechanism means that what you and others are doing in the world is continuously affecting the way that the system operates, so the system is constantly optimized along a bunch of changing variables, including inputs from people who are not using the system at all.

Jack Gross: Could you speak further to the distinction you made between privacy being focused on the individual’s information, while the concerns that are raised with optimization technologies, and which you target with POTs, are oriented towards groups and environments? How does this relate to the shift towards a service-oriented model?

sg: Although there are many theorists who have talked about social privacy—like Priscilla Regan and Helen Nissenbaum, whose idea of contextual integrity comes to mind—the actually dominant framework for data protection laws and conceptions of privacy is the right to be left alone—the individual imagined to require some space apart from society, and individual control over one’s data flows. This is true for privacy technologies as well which, although they may have had the notion and effect of protecting collectivities, are built around individuals’ communications. We see this in encryption, which, although involving multiple people in contact with one another, doesn’t scale up particularly well to groups. It’s very difficult to protect group communications, and I see this being inherent to technology, law, and philosophy—all three of which operate from a logic of individual privacy protection.

There’s an essay called “The Smartness Mandate” by Orit Halpern that discusses the shift from systems addressing individuals to addressing populations—even as “personalization” becomes the watchword of the day. These systems optimize over populations, which means that their errors are distributed across populations. Individuals are individually impacted, but the systems are concerned with the population-wide effects they can achieve.

bo: The insight here is that, if you’re optimizing for some particular goal, you’re going to have people who lose out in the system. The goal of optimization systems is to optimize for the largest, or most valuable, group of people—in that sense, many individuals are left out. And, to push back on your question a little, I think that POTs can be created to help individuals who are negatively affected by these strategies, in the same way that PETs (Privacy-Enhancing Technologies) can.

jg: Can you describe the POTs approach, and walk through how a POT might work in practice?

bo: Say you have a credit scoring system that’s inherently biased against some group of people. We have two different types of attacks—two POTs if you will—that we can use on the system. The first is a machine learning tool that does what is called data poisoning. The target system is running models that are regularly updated and trained with new data as it comes in. In the case of the biased credit scoring system, we would strategically take out and repay loans in such a way that the retrained model is no longer biased against the target group.

This is similar to the techniques that spammers use to get their spam through email filters. They shove enough spam—perfectly and artificially crafted spam—into a model such that it now thinks that spam is not spam. It’s that same approach that we’re talking about here, where we can manipulate variables in order to de-bias the target machine learning algorithm from the outside, without ever having direct control over it. This is maybe an unrealistic attack because it requires a bunch of people to take out potentially bad loans, depending on the situation, but it’s a very nice illustration of how we can use adversarial machine learning techniques to buck systems we don’t agree with, or that have inherent biases and problems.
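To make the mechanism concrete, here is a minimal sketch of the data-poisoning idea in Python with scikit-learn, run on a toy synthetic dataset. The feature layout, the size of the coordinated group, and all of the numbers are invented for illustration; this is not the POTs team’s actual implementation, only a demonstration of how injected training records can shift a retrained model.

```python
# Hypothetical sketch: coordinated, repaid loans injected into the data a
# biased credit scorer will retrain on. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_biased_data(n=2000):
    # Feature 0: an income-like score; feature 1: group membership (0 or 1).
    group = rng.integers(0, 2, n)
    income = rng.normal(0, 1, n)
    # Historical approvals are biased: group 1 is approved less often
    # than its income alone would justify.
    approved = (income - 0.8 * group + rng.normal(0, 0.3, n)) > 0
    return np.column_stack([income, group]), approved.astype(int)

def approval_rate(model, group_value, n=5000):
    X_test = np.column_stack([rng.normal(0, 1, n), np.full(n, group_value)])
    return model.predict(X_test).mean()

X, y = make_biased_data()
biased = LogisticRegression().fit(X, y)
print("before:", approval_rate(biased, 0), approval_rate(biased, 1))

# The POT: borrowers from group 1 take out and repay loans in a coordinated
# way, adding positive (repaid) records for that group to the training data.
n_poison = 600
X_poison = np.column_stack([rng.normal(0, 1, n_poison), np.ones(n_poison)])
y_poison = np.ones(n_poison, dtype=int)

retrained = LogisticRegression().fit(np.vstack([X, X_poison]),
                                     np.concatenate([y, y_poison]))
print("after: ", approval_rate(retrained, 0), approval_rate(retrained, 1))
```

Running this should show the approval-rate gap between the two groups narrowing once the model is retrained on the injected records.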

The second type of attack you can do here is called an adversarial example, another adversarial machine learning tool: you craft the sample with the smallest edit distance from your own that, when submitted to the system, lets you pass, even with the variable you can’t change left unchanged. For example, if the system is not giving loans out to women at the same rate as it is to men, you can change the other variables of the input such that, even without changing the gender field, it will still let your loan through. This kind of tool gives you the minimum set of changes needed to produce that effect. Here we see the distinction between individualistic POTs, like this one, and the more collective POTs, like the former example.
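For the special case of a linear scoring model, this minimal-change idea can even be written in closed form. The sketch below is a hypothetical illustration: the weights, feature names, and the rejected applicant are invented, and for real, nonlinear models the smallest perturbation has to be searched for rather than computed directly.

```python
# Hypothetical sketch: smallest change to the features an applicant can
# control that flips a linear credit model to "approve", while the
# protected field stays fixed.
import numpy as np

# A made-up trained linear model: approve when w @ x + b >= 0.
# Features: income, savings, debt, protected group field.
w = np.array([1.2, 0.4, -0.9, -0.7])
b = -0.5
mutable = np.array([True, True, True, False])   # the group field cannot change

def minimal_fix(x, margin=1e-3):
    """Smallest L2 change to the mutable features that yields approval."""
    score = w @ x + b
    if score >= 0:
        return np.zeros_like(x)                 # already approved
    w_m = w * mutable                           # move only along mutable axes
    # Closed-form projection onto the decision boundary, plus a tiny margin.
    return -(score - margin) / (w_m @ w_m) * w_m

x = np.array([0.1, 0.2, 0.5, 1.0])              # a rejected applicant
delta = minimal_fix(x)
print("suggested changes:", delta)
print("new score:", w @ (x + delta) + b)        # just above the threshold
```

The closed form works because, for a linear model, the nearest approving point along the mutable directions is a simple projection onto the decision boundary.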

sg: Maybe I can add that when we implement a POT, we may sometimes find out that the intervention is not working or is situated at the wrong place. Talking with Martha Poon, who studied the 2008 financial crisis, we came to understand that current lending practices do not exclude high-risk individuals. On the contrary, lenders use dynamic pricing models, a form of optimization, that favor giving “high-risk” populations loans at disproportionately higher interest rates. As a result, the financially marginalized pay more for access to money and are included in a system that during the last financial crisis caused mass dispossession. Is a POT that enables inclusion—even if it were to enable access to loans at lower interest rates—in a system that causes dispossession, what we want? I don’t think we can answer this question very easily. It is also interesting that, in comparison, “what is it that a POT should do?” was somewhat easier to address in the case of Waze, the other example, where we have plenty of anecdotes of people contesting Waze that we can translate into a POT. Maybe we can talk a little bit about that.

bo: Sure. Waze is a routing app, like Google Maps, that has crowdsourced elements like reporting on traffic accidents, traffic lights, speed traps, and so on. And, anecdotally, it’s known to be more aggressive in its routing. Before optimization systems, routing algorithms like MapQuest could give us directions, but once we were on the road we were on our own; they didn’t know what the traffic was. Now we have a feedback system that’s meant to be constantly adapting to variables in the real environment. From this we get what economists call externalities—and the negative externalities of a system like Waze are many. There are many reported instances of little towns just off of highways getting inundated with traffic, because Waze would route users away from the slow-moving highway. So it’s the people in these communities that lose out in this optimization process: suddenly their roads are overburdened, they have trouble getting to school or the grocery store on time, maybe they have to finance more road maintenance, and so on. Very early in our POTs work we found these incredible examples of people fighting against Waze by reporting traffic on their blocks where there was none, so that the app wouldn’t route people there as a shortcut.

There was also a really good spoofing attack where researchers spoofed the locations of a bunch of phones with the Waze app open so that the app thought that the phones were in the middle of a highway, kind of crawling along. This made Waze think that there was a lot of traffic on this highway and it routed everyone off of it. There was actually much less traffic on the highway since all of the Waze users were routed elsewhere.
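Purely as an illustration of the kind of input involved, a simulation like the one below could generate that sort of signal: a fleet of virtual phones reporting positions that crawl along a road at walking pace. Everything here is invented (coordinates, update rate, fleet size), nothing talks to any real service, and it is not the researchers’ actual spoofing setup.

```python
# Hypothetical sketch: GPS fixes for virtual phones creeping along a road
# at 5 km/h, the kind of trace a crowdsourced router would read as a jam.
from dataclasses import dataclass

@dataclass
class Fix:
    phone_id: int
    lat: float
    lon: float
    t: float  # seconds since start

def crawling_traces(n_phones=20, duration_s=600, interval_s=10,
                    lat0=40.7300, lon0=-74.1800, speed_kmh=5.0):
    # Roughly 85 km per degree of longitude at this latitude; move east slowly.
    deg_per_s = (speed_kmh / 3600.0) / 85.0
    fixes = []
    for p in range(n_phones):
        offset = p * 0.0005                      # stagger phones along the road
        for t in range(0, duration_s, interval_s):
            fixes.append(Fix(p, lat0, lon0 + offset + deg_per_s * t, float(t)))
    return fixes

traces = crawling_traces()
print(len(traces), "location fixes from", 20, "virtual phones")
```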

We used these interventions as inspiration to develop a POT to change real-life variables inside of the town—things like speed limits on roads, artificial traffic essentially—to penalize the route through the town enough such that the routing algorithm sends traffic elsewhere. This illustrates the key element of POTs: these systems are using technology to optimize, and POTs are a technological response to that. Routing apps rely on graph algorithms, which look at the world as though it’s a graph, with nodes and weighted edges, and use a variety of algorithms to find the fastest route over those roads. Our response is to look at the world in the same way they do—look at the world as a graph—and see if we can then figure out how to change the town, or to penalize the route through the town such that it’s no longer worth exploiting.
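A toy version of that graph view might look like the sketch below, which assumes the networkx library and a made-up four-node road network. It asks how much the edge through the town has to be penalized before the shortest path stops using it.

```python
# Hypothetical sketch: penalize the road through a town until a shortest-path
# router no longer sends through-traffic that way. Travel times are invented.
import networkx as nx

G = nx.Graph()
G.add_edge("A", "highway", time=5)
G.add_edge("highway", "B", time=30)          # slow-moving highway segment
G.add_edge("A", "town", time=4)
G.add_edge("town", "B", time=20)             # shortcut through the town

def route(g):
    return nx.shortest_path(g, "A", "B", weight="time")

print("before:", route(G))                   # cuts through the town

# The POT: make the town edge look slower (lower speed limits, reported
# slowdowns, and so on) until the router prefers the highway again.
penalty = 0
while "town" in route(G):
    penalty += 1
    G["town"]["B"]["time"] += 1

print("after: ", route(G), "| extra minutes needed:", penalty)
```

The printed penalty is how much extra travel time the town would need to simulate on that stretch before this particular router stops exploiting it.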

sg: This is one strategy, wherein you work against the fact that optimization systems simply do not care about non-users and the broader environment. POTs takes a more holistic approach: we identify the costs of a given optimization system, and find ways to diminish them or, if that’s not possible, to make the lives of people who benefit from that optimization system difficult enough that they begin to complain to the service provider, such that everyone benefits.

Another strategy is to more directly sabotage a given optimization system so that it just gives random useless results, forcing users to drop the service. These are just a couple options.

ft: In the paper you speculate about using POTs to prevent the introduction of optimization systems in the first place. Do you have any thoughts on what that could look like?

sg: There needs to be a bigger discussion about whether optimization should be the logic applied to everything around us. When we speak to computer scientists especially, and also people who have done traffic analysis, they say “we’ve always optimized, and what’s the problem, isn’t everything optimization anyways?” There’s a quick move from “we always optimize systems” to “the world is just a collection of optimizations,” by which they mean something like individuals optimize, societies optimize, systems optimize, birds optimize—everything optimizes. This is an extreme reduction of how life works, and an attempt to naturalize a particular way of organizing the world—namely, certain kinds of utilitarianism oriented by the profit motive.

I’m continuously struck, particularly in computer science, by how pervasive the idea is that these systems are optimizing over our individual preferences. We hear “Google gives you advertisements that fit your preferences,” or “we offer Waze users the best path.” Part of the effect of this is to suggest that these firms are in the process of doing some neutral or beneficent resource allocation, contributing to social welfare, when what they’re doing is profit calculation, with significant negative externalities.

bo: There is a second, harder-to-quantify political effect that we’re interested in with the POTs work, which has to do with the power of these companies. If you’re a small town in New Jersey with a sudden influx of traffic, whereas before you might have been able to talk to the highway administration at the state level, now you have to go to a private company and convince them that their algorithm is hurting your town. That’s a much different strain on governance than we’ve seen in previous analogous problems.

sg: The relationship with political institutions is important. There’s been growing resistance to the use of facial recognition in the US, which is an encouraging example of people simply saying “no, we don’t want these systems introduced in the first place.” This is great, but if you think about the fact that service architectures are already installed, with many facial recognition training databases and services out there, you’re basically relying on institutional mechanisms to make sure these things don’t get deployed. The problem is that optimization takes political contention and turns it into a mathematical issue: once a system is argued to be “optimal,” it is not always easy to contest or regulate it from a normative standpoint.

jg: I understand the POTs intervention as trying to require politics of technology, rather than allowing the two to be falsely separated. It seems like much of the literature in digital ethics falls into two broad camps: either attempting to turn questions of fairness and justice into optimization problems, by formalizing definitions of fairness; or else by pointing out the serious problems with existing technologies and then advocating for corporations or political institutions to place limits on their use. How do you see the POTs work in relation to these two tendencies?

bo: What we’ve seen in the fairness literature is a focus on what the service providers, and their developers, can do to systems that they’re creating or deploying in order to make them more fair. What POTs argues is that companies and service providers, and often the developers themselves, do not have the incentives to create fair models, because the models are already optimized for the variables that the service provider needs them to be optimized for—namely, an increased user base and increased profits. (There’s some research out of CMU by Kenneth Holstein that looks at these dynamics.)

POTs is a direct reaction to that, which is to say: if we can’t trust the service provider, at least in the immediate term, because they’re always going to be optimizing for their own variables above all else, what can we do from outside the system as technical people making technical solutions to try to counter these externalities?

sg: I think that one of the things that we imply in our work is that you can’t really improve optimization systems with more optimization. Much of the fairness work is exactly that—attempting to optimize for fairness criteria. This doesn’t put into question all of the categorization and classification of populations that needs to take place before the fairness definition is developed. It just proposes a more gentle way of manipulating those classifications—usually still for profit, or sometimes, in more speculative work, general social interest.

bo: Another practical element here is that companies are very likely to have an incentive to appear fair, but much less likely to actually be fair.

sg: Right. In the US, there’s the four-fifths rule, so often companies will go for simple models of fairness that satisfy existing criteria. No one is going to pursue a more general social justice as an outcome. There are limitations to how the legal system conceives of injustice, say, through discrimination, and a lot of the fairness community believes that the law gives them sufficient guidance. This is a shortcoming.

When we first presented this work, some people said, “It’s unethical, you’re messing with systems”—yeah, we’re showing you how the system is messing with you! Why did you never question the ethics of this optimization system piggybacking on tax-funded infrastructure, but you’re now questioning us? That happens less now, as more and more news emerges about these companies creating a mess.

If anything, as you say, it opens a conversation. These are big, systemic problems that POTs just tries to point at by saying there’s something going on here, by providing temporary relief, let’s say, or sabotage.

ft: What are your plans for the future of POTs? Do you have any plans to implement more of them?

bo: So right now we have a few that we’re actively working on creating; you could call them prototypes. We’ve especially been focusing on the Waze example and on credit scoring: how to counter continuously training systems like credit scoring and ad services.

jg: To push the question a bit further, I was thinking about the relationship between Francis’s work with the rhetorical software framework—projects like Bail Bloc—and POTs. Clearly a restrained use-case of counter-optimization technology isn’t going to save people from being punished for being poor by credit scoring agencies, and mining Monero isn’t going to make pre-trial detention obsolete. Do you see your work as operating in a similar rhetorical register, or do you envision broader practical applications based on your research?

sg: What’s really interesting is that, since we started doing the POTs talks, people come to us and say, “Oh, I guess I’ve been doing POTs already.” The fact that everyday users install Waze and report roadblocks, and understand what that does to the system, is a huge thing. There are lots of studies about how people don’t understand what the systems they use do, and how common understandings of these systems can be far from reality, but people are understanding ways to respond to them concretely in their lives.

At the Obfuscation Symposium at NYU in 2013, gender studies professor Susan Stryker said that we’re always obfuscated—we’re not clean, well-ordered beings. There will always be noise, and there will always be resistance or noncompliance. In that way I don’t think it’s simply a rhetorical device. I feel like POTs and what Francis is doing is just what always happens: optimization systems give rise to counter-optimization systems.

Now, I don’t think POTs are sufficient, I think we need to be working against this huge infrastructure of service-oriented architectures, the immense investment money funneled into optimization, and so on. It’s not sufficient to hit these smaller points, but it is necessary.

ft: My initial impression with POTs was that they present a technical solution to political problems dressed up as technical ones, and that, like other technical approaches, they would be biased towards individual or small group action. But some of the examples you work with are contingent on large-scale organizing and coordination—do you think the majority of POTs actually fall into this category?

sg: There is a class of POTs that you’re referring to, exemplified by Uber drivers turning off their apps in a synchronized manner.

ft: Yeah, that’s the one I was thinking of.

sg: We did see one article that showed there’s actually an app that facilitates this—which shows how sticky all this infrastructure is, that you need to build an app to go against another app. Not to be too romantic about it, but my impression thus far is that the most successful organizing we’ve seen that can be described as POTs happens when there are classical online and offline organizers: Uber drivers organizing face-to-face, Deliveroo riders organizing face-to-face. Or at least on platforms dedicated to this, like Turkopticon, which was initiated by Lilly Irani and Six Silberman. When we contacted some Deliveroo organizers and said “could we build POTs for you,” it was clear we’d have to go to some of these people and ask them about their experiences. Now Carmela and Bekah and Bogdan are working with gig workers on another project, and there again we depend on their organizing to get that work done.

jg: I would love to hear any very speculative comments on places that you think you could or others could take POTs, especially thinking about where and how you could collaborate with people who are most directly affected by any particular optimization system.

bo: One area we haven’t explored very much, so this is super speculative, is predictive policing. Again, predictive policing is an optimization system: you have n officers who work m hours, and the question is how to optimize their positions. What are the negative externalities of optimizing the system in such a way? There’s a lot of work going on right now from a policy perspective—how do we stop these practices—which is the bigger and more important question. But there’s the smaller question of what we can do in the meantime to fight back against these systems. Is there a way to get a predictive policing algorithm to be less effective?

sg: We also had an Airbnb example, right?

bo: It’s not solved, which maybe makes it an interesting example. On Airbnb you can manually set the price of the apartment you’re renting out, but you can also have the price be set by the system itself. So this is optimizing price versus vacancy: how low do you have to make the price to have people there all the time? We’ve been looking at places where Airbnb has tanked the housing market; I think the best example of this is Barcelona, where they’re having a problem with Airbnb that they’re working on from a legal and legislative perspective. We were looking at which inputs to this system we could manipulate such that the price recommendation algorithm is sabotaged or made useless.

sg: I think these examples show how POTs could be said to be a bit conservative. If I compare it with what Francis and Sam and Brian did with predicting financial crime, where you’re inverting the proposed objective function, you’re not just changing the inputs to the algorithm to arrive at a different outcome, but literally proposing a different system. With POTs we assume that the system is there, and we assume that what we can do is predominantly—not only, but predominantly—give it unsolicited feedback, as we like to call it, that then gives us different results. The question is how far you can go.

