‘Automating Inequality’ by Virginia Eubanks shows how technology is stacked against the marginalized
When politicians and business leaders talk about using technology to streamline service provision to the poor, they conjure up an efficient, values-free process. Helping homeless people might work like Airbnb, matching people with available shelter beds and subsidized housing. Assistance for the poor might bypass all the cumbersome application forms and waiting times. Risk assessments could identify children at risk of abuse before the abuse happens.
Technology can be a neutral tool. But, as Virginia Eubanks describes in “Automating Inequality,” the computerized tools applied to social service provision are designed with the institutional biases endemic in our society, starting with the idea that poverty is the fault of poor people and that a goal of our welfare systems is to make sure that nobody gets aid who doesn’t deserve it, even if that means denying aid to people who do. Eubanks calls the use of technology to evaluate and track poor people the “digital poorhouse,” consistent with efforts throughout U.S. history to distinguish between the “deserving” and “undeserving” poor.
Eubanks describes three examples of the application of high technology to social services in the United States. The first, Indiana’s experiment with automatic determination of welfare eligibility, can only be characterized as a disaster. A major goal was to reduce the number of people on welfare, and, by design, it reduced individual caseworkers’ contact with welfare clients, making it harder for them to become advocates.
Once in operation, the system automatically kicked people off assistance who made minor errors in their applications; the results were so clearly inhumane — sometimes denying medical benefits to very sick people — that the experiment was partially abandoned, but not before it had helped achieve one of the main goals: “When the governor signed the contract with IBM in 2006, 38 percent of poor families with children were receiving cash benefits from TANF. By 2014, the number had dropped to 8 percent.”
Next, Eubanks examines Los Angeles’ use of a comprehensive database to match homeless people with appropriate housing services. She argues that in a situation where housing needs far exceed supply (sound familiar?), the system became a form of cost-benefit triage. The people who got help were either very needy (because leaving them on the street would cost more in the long run) or else only needed a small financial intervention to get housing. In other words, resources went to the worst-off and the best-off, leaving vast numbers in the middle unhelped.
At the same time, Eubanks points out, people needing services were expected to answer some very personal questions about mental health and medical history. This amounted to a major invasion of privacy, linking medical records, criminal history and social service reports in a way that allowed them to be accessed inappropriately, including by the police.
The third example is a software program, the Allegheny Family Screening Tool, developed for a Pennsylvania county’s Office of Children, Youth and Families (CYF). The tool was designed to rank families in the database according to their likely risk of child neglect and abuse. As Eubanks discovered, the model disproportionately gives high-risk scores to certain families, based on their use of social services, and generates racially biased results. The model doesn’t evaluate middle-class and wealthy families, because they typically don’t access public social services.
As Eubanks points out, the model’s predictive accuracy was only 76 percent, which meant that in one out of four cases its assessment was wrong about whether children in a particular family were at risk. The actual use of the model was fairly benign, in the sense that caseworkers were expected to use their own judgment as to whether to pay attention to the model’s scores — but it also set up a surveillance tool that could easily be abused by a different county administration or in a different political climate.
“Automating Inequality” is engrossing in its descriptions of how technology is used to track, diagnose and stigmatize the poor. Eubanks argues that the use of technology in this way is neither value-free nor for the benefit of poor people. One test Eubanks suggests about databases and models like this is to consider whether they would be tolerated if the information was collected from middle-class people. She suggests a kind of analog to the “Hippocratic Oath” that software developers could take before setting up a model.
The greatest weakness of the book is that Eubanks doesn’t do very well at linking her general arguments about technology — in particular, the dangers of surveillance and invasion of privacy — to the real-life examples, where negative results are mostly the result of inadequate government funding and/or conservative political agendas. She could have done a lot more in her examples to call out where the stories reinforced the points she makes in her concluding chapters. But the book definitely makes the case that automation of social services does little, if anything, to help poor people, while making it easier to deny them help.
Check out the full July 18 - July 24 issue.
Real Change is a non-profit organization advocating for economic, social and racial justice. Since 1994 our award-winning weekly newspaper has provided an immediate employment opportunity for people who are homeless and low income. Learn more about Real Change.