Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

Author: Cathy O'Neil



Many would point out that statistical systems like the LSI–R are effective in gauging recidivism risk—or at least more accurate than a judge’s random guess. But even if we put aside, ever so briefly, the crucial issue of fairness, we find ourselves descending into a pernicious WMD feedback loop. A person who scores as “high risk” is likely to be unemployed and to come from a neighborhood where many of his friends and family have had run-ins with the law. Thanks in part to the resulting high score on the evaluation, he gets a longer sentence, locking him away for more years in a prison where he’s surrounded by fellow criminals—which raises the likelihood that he’ll return to prison. He is finally released into the same poor neighborhood, this time with a criminal record, which makes it that much harder to find a job. If he commits another crime, the recidivism model can claim another success. But in fact the model itself contributes to a toxic cycle and helps to sustain it. That’s a signature quality of a WMD.
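The feedback loop can be sketched in a few lines of code. Every number below is invented for illustration—this is not the LSI–R or any real criminological data—but it shows the mechanism: a “high risk” label lengthens the sentence, the longer sentence raises the chance of reoffending, and the model then records its own side effect as a correct prediction.

```python
# Toy model of the WMD feedback loop. All parameters are hypothetical.
def reoffend_probability(years_served, base_rate=0.30, bump_per_year=0.05):
    """Chance of reoffending after serving a sentence.

    The assumption (illustrative only): each extra year in prison
    slightly raises the odds of returning, via lost jobs and networks.
    """
    return min(1.0, base_rate + bump_per_year * years_served)

low_risk_sentence = 2    # years, hypothetical
high_risk_sentence = 6   # longer, thanks to the high score

p_low = reoffend_probability(low_risk_sentence)
p_high = reoffend_probability(high_risk_sentence)

# The "high risk" prisoner reoffends more often -- partly because of the
# longer sentence the score itself produced -- so the score looks accurate.
assert p_high > p_low
```

The point of the sketch is that the model’s apparent accuracy is partly manufactured by the model’s own consequences.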

In this chapter, we’ve looked at three kinds of models. The baseball models, for the most part, are healthy. They are transparent and continuously updated, with both the assumptions and the conclusions clear for all to see. The models feed on statistics from the game in question, not from proxies. And the people being modeled understand the process and share the model’s objective: winning the World Series. (Which isn’t to say that many players, come contract time, won’t quibble with a model’s valuations: “Sure I struck out two hundred times, but look at my home runs…”)

From my vantage point, there’s certainly nothing wrong with the second model we discussed, the hypothetical family meal model. If my kids were to question the assumptions that underlie it, whether economic or dietary, I’d be all too happy to provide them. And even though they sometimes grouse when facing something green, they’d likely admit, if pressed, that they share the goals of convenience, economy, health, and good taste—though they might give them different weights in their own models. (And they’ll be free to create them when they start buying their own food.)
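A model like this really is just a weighted score, and it fits in a few lines. The weights and dish ratings below are all made up for illustration; the only point is that the same model structure, handed different weights, ranks the same dinners differently.

```python
# Toy family-meal model: score candidate dinners against the four stated
# goals. All weights and ratings are invented for illustration.
def meal_score(meal, weights):
    return sum(weights[goal] * meal[goal] for goal in weights)

# My weights vs. a hypothetical kid's weights (taste counts for much more).
my_weights  = {"convenience": 0.25, "economy": 0.25, "health": 0.30, "taste": 0.20}
kid_weights = {"convenience": 0.20, "economy": 0.05, "health": 0.15, "taste": 0.60}

# Candidate dinners, rated 0-1 on each goal (again, invented numbers).
stir_fry = {"convenience": 0.6, "economy": 0.8, "health": 0.9, "taste": 0.4}
pizza    = {"convenience": 0.9, "economy": 0.6, "health": 0.3, "taste": 0.9}

# Same data, same model -- different weights, different winner.
assert meal_score(stir_fry, my_weights) > meal_score(pizza, my_weights)
assert meal_score(pizza, kid_weights) > meal_score(stir_fry, kid_weights)
```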

I should add that my model is highly unlikely to scale. I don’t see Walmart or the US Agriculture Department or any other titan embracing my app and imposing it on hundreds of millions of people, like some of the WMDs we’ll be discussing. No, my model is benign, especially since it’s unlikely ever to leave my head and be formalized into code.

The recidivism example at the end of the chapter, however, is a different story entirely. It gives off a familiar and noxious odor. So let’s do a quick exercise in WMD taxonomy and see where it fits.

The first question: Even if the participant is aware of being modeled, or of what the model is used for, is the model opaque, or even invisible? Well, most of the prisoners filling out mandatory questionnaires aren’t stupid. They at least have reason to suspect that information they provide will be used against them to control them while in prison and perhaps lock them up for longer. They know the game. But prison officials know it, too. And they keep quiet about the purpose of the LSI–R questionnaire. Otherwise, they know, many prisoners will attempt to game it, providing answers that make themselves look like model citizens the day they leave the joint. So the prisoners are kept in the dark as much as possible and do not learn their risk scores.

In this, they’re hardly alone. Opaque and invisible models are the rule, and clear ones very much the exception. We’re modeled as shoppers and couch potatoes, as patients and loan applicants, and very little of this do we see—even in applications we happily sign up for. Even when such models behave themselves, opacity can lead to a feeling of unfairness. If you were told by an usher, upon entering an open-air concert, that you couldn’t sit in the first ten rows of seats, you might find it unreasonable. But if it were explained to you that the first ten rows were being reserved for people in wheelchairs, then it might well make a difference. Transparency matters.

And yet many companies go out of their way to hide the results of their models or even their existence. One common justification is that the algorithm constitutes a “secret sauce” crucial to their business. It’s intellectual property, and it must be defended, if need be, with legions of lawyers and lobbyists. In the case of web giants like Google, Amazon, and Facebook, these precisely tailored algorithms alone are worth hundreds of billions of dollars. WMDs are, by design, inscrutable black boxes. That makes it extra hard to definitively answer the second question: Does the model work against the subject’s interest? In short, is it unfair? Does it damage or destroy lives?

Here, the LSI–R again easily qualifies as a WMD. The people putting it together in the 1990s no doubt saw it as a tool to bring evenhandedness and efficiency to the criminal justice system. It could also help nonthreatening criminals land lighter sentences. This would translate into more years of freedom for them and enormous savings for American taxpayers, who are footing a $70 billion annual prison bill. However, because the questionnaire judges the prisoner by details that would not be admissible in court, it is unfair. While many may benefit from it, it leads to suffering for others.

A key component of this suffering is the pernicious feedback loop. As we’ve seen, sentencing models that profile a person by his or her circumstances help to create the environment that justifies their assumptions. This destructive loop goes round and round, and in the process the model becomes more and more unfair.

The third question is whether a model has the capacity to grow exponentially. As a statistician would put it, can it scale? This might sound like the nerdy quibble of a mathematician. But scale is what turns WMDs from local nuisances into tsunami forces, ones that define and delimit our lives. As we’ll see, the developing WMDs in human resources, health, and banking, just to name a few, are quickly establishing broad norms that exert upon us something very close to the power of law. If a bank’s model of a high-risk borrower, for example, is applied to you, the world will treat you as just that, a deadbeat—even if you’re horribly misunderstood. And when that model scales, as the credit model has, it affects your whole life—whether you can get an apartment or a job or a car to get from one to the other.

When it comes to scaling, the potential for recidivism modeling continues to grow. It’s already used in the majority of states, and the LSI–R is the most common tool, used in at least twenty-four of them. Beyond LSI–R, prisons host a lively and crowded market for data scientists. The penal system is teeming with data, especially since convicts enjoy even fewer privacy rights than the rest of us. What’s more, the system is so miserable, overcrowded, inefficient, expensive, and inhumane that it’s crying out for improvements. Who wouldn’t want a cheap solution like this?

Penal reform is a rarity in today’s polarized political world, an issue on which liberals and conservatives are finding common ground. In early 2015, the conservative Koch brothers, Charles and David, teamed up with a liberal think tank, the Center for American Progress, to push for prison reform and drive down the incarcerated population. But my suspicion is this: their bipartisan effort to reform prisons, along with legions of others, is almost certain to lead to the efficiency and perceived fairness of a data-fed solution. That’s the age we live in. Even if other tools supplant LSI–R as its leading WMD, the prison system is likely to be a powerful incubator for WMDs on a grand scale.

So to sum up, these are the three elements of a WMD: Opacity, Scale, and Damage. All of them will be present, to one degree or another, in the examples we’ll be covering. Yes, there will be room for quibbles. You could argue, for example, that the recidivism scores are not totally opaque, since they spit out scores that prisoners, in some cases, can see. Yet they’re brimming with mystery, since the prisoners cannot see how their answers produce their score. The scoring algorithm is hidden. A couple of the other WMDs might not seem to satisfy the prerequisite for scale. They’re not huge, at least not yet. But they represent dangerous species that are primed to grow, perhaps exponentially. So I count them. And finally, you might note that not all of these WMDs are universally damaging. After all, they send some people to Harvard, line others up for cheap loans or good jobs, and reduce jail sentences for certain lucky felons. But the point is not whether some people benefit. It’s that so many suffer. These models, powered by algorithms, slam doors in the face of millions of people, often for the flimsiest of reasons, and offer no appeal. They’re unfair.

And here’s one more thing about algorithms: they can leap from one field to the next, and they often do. Research in epidemiology can hold insights for box office predictions; spam filters are being retooled to identify the AIDS virus. This is true of WMDs as well. So if mathematical models in prisons appear to succeed at their job—which really boils down to efficient management of people—they could spread into the rest of the economy along with the other WMDs, leaving us as collateral damage.

That’s my point. This menace is rising. And the world of finance provides a cautionary tale.

 

Imagine you have a routine. Every morning before catching the train from Joliet to Chicago’s LaSalle Street station, you feed $2 into the coffee machine. It returns two quarters and a cup of coffee. But one day it returns four quarters. Three times in the next month the same machine delivers the same result. A pattern is developing.

Now, if this were a tiny anomaly in financial markets, and not a commuter train, a quant at a hedge fund—someone like me—could zero in on it. It would involve going through years of data, even decades, and then training an algorithm to predict this one recurring error—a fifty-cent swing in price—and to place bets on it. Even the smallest patterns can bring in millions to the first investor who unearths them. And they’ll keep churning out profits until one of two things happens: either the phenomenon comes to an end or the rest of the market catches on to it, and the opportunity vanishes. By that point, a good quant will be hot on the trail of dozens of other tiny wrinkles.
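The first step of that hunt, stripped to its bones, is just flagging prices that stray unusually far from their typical value. The sketch below does that with a simple standard-deviation screen on an invented price series; real quant pipelines run far subtler statistics over years of tick data.

```python
# Toy version of the pattern hunt: flag prices that deviate unusually far
# from the series average. The price series is invented for illustration.
from statistics import mean, stdev

def find_anomalies(prices, threshold=2.0):
    """Return indices where the price strays more than `threshold`
    standard deviations from the mean of the series."""
    mu, sigma = mean(prices), stdev(prices)
    return [i for i, p in enumerate(prices)
            if abs(p - mu) > threshold * sigma]

prices = [10.00, 10.01, 9.99, 10.02, 9.50, 10.00, 10.01]  # one 50-cent dip
print(find_anomalies(prices))  # prints [4]: the dip, the candidate pattern
```

Finding one such blip is only the beginning; the money is in confirming that the blip recurs and betting on it before the rest of the market notices.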

The quest for what quants call market inefficiencies is like a treasure hunt. It can be fun. And as I got used to my new job at D. E. Shaw, I found it a welcome change from academia. While I had loved teaching at Barnard, and had loved my research on algebraic number theory, I found progress agonizingly slow. I wanted to be part of the fast-paced real world.

At that point, I considered hedge funds morally neutral—scavengers in the financial system, at worst. I was proud to go to Shaw, known as the Harvard of the hedge funds, and show the people there that my smarts could translate into money. Plus, I would be earning three times what I had earned as a professor. I could hardly suspect, as I began my new job, that it would give me a front-row seat during the financial crisis and a terrifying tutorial on how insidious and destructive math could be. At the hedge fund, I got my first up-close look at a WMD.

In the beginning, there was plenty to like. Everything at Shaw was powered by math. At a lot of firms, the traders run the show, making big deals, barking out orders, and landing multimillion-dollar bonuses. Quants are their underlings. But at Shaw the traders are little more than functionaries. They’re called executioners. And the mathematicians reign supreme. My ten-person team was the “futures group.” In a business in which everything hinges on what will happen tomorrow, what could be bigger than that?

We had about fifty quants in total. In the early days, it was entirely men, except for me. Most of them were foreign born. Many of them had come from abstract math or physics; a few, like me, had come from number theory. I didn’t get much of a chance to talk shop with them, though. Since our ideas and algorithms were the foundation of the hedge fund’s business, it was clear that we quants also represented a risk: if we walked away, we could quickly use our knowledge to fuel a fierce competitor.

To keep this from happening on a large, firm-threatening scale, Shaw mostly prohibited us from talking to colleagues in other groups—or sometimes even our own office mates—about what we were doing. In a sense, information was cloistered in a networked cell structure, not unlike that of Al Qaeda. That way, if one cell collapsed—if one of us hightailed it to Bridgewater or J.P. Morgan, or set off on our own—we’d take with us only our own knowledge. The rest of Shaw’s business would carry on unaffected. As you can imagine, this wasn’t terrific for camaraderie.
