The professor, Michael Dear, liked my thesis. He encouraged me. But, back in those days, I had difficulty even piecing together a couple of sentences that made sense, let alone a 2,500-word essay. Long story short, I didn't do well! But, hey, I live to tell the tale ;)
One of the efficiency routes that we increasingly encounter is also one that should worry us a lot: automation. It is one thing when we employ automation in our personal lives. But, in a collective decision-making process, automation can be a boon or a disaster depending on what we want it to do. (Set aside machine learning for now, which takes all this to another level altogether.)
If we go about using automation with an explicit goal to help people, then efficiency helps us. Like in this context:
Virginia Eubanks says policymakers can look to successful models when implementing an automated system. "In Chicago there's a great system called mRelief," she says. "mRelief basically allows you to sort of ping government programs to see if you might be eligible for them. And then the folks who work for mRelief actually help step you through — either in person or through text — the process of getting all the entitlements that you are eligible for and deserve."

But such systems are rare. More common are the kinds that are set up to hurt the poor even more!
In Indiana, the governor signed what eventually became a $1.4 billion contract with a coalition of high-tech companies, including IBM and ACS, in an attempt to automate and privatize all of the eligibility processes for the state's welfare programs. When seen as a question of simple efficiency, I think it makes a lot of sense.

Guess what happened? Mistakes can happen in any step along the way, right? "Any fault, any accident, any mistake was [considered] the fault of the applicant rather than the responsibility of the caseworker."
But one of the assumptions that was built into the system was that the relationships between caseworkers—particularly public local case workers and the families that they served—were invitations to collusion and fraud. And that part of making the system more efficient actually lay in breaking the relationship between caseworkers and the families that they develop relationships with and serve. The system was built to replace a casework-based system with a task-based system. So 1,500 public caseworkers were moved into regional call centers far away from their homes.
And rather than carrying a docket of families that they served, they responded to a list of tasks that dropped into a computerized queue. So nobody saw cases through from the beginning to the end. And every time a recipient called the call center, they talked to a new person.
One million applications were denied in the first three years of the program, a 54 percent increase from the three years before that. And these are really horrifying cases—like an African-American woman in Evansville, Indiana, who missed a recertification appointment because she was in the hospital dying of ovarian cancer. She was kicked off her Medicaid because she missed that appointment.
Technological tools that make us more, ahem, efficient, "are not disrupters so much as they’re amplifiers."
It shouldn’t surprise us when a tool grows out of our existing public assistance system to be primarily punitive. Diversion, moral diagnosis, and punishment are often key goals of our public-service programs. But if you start with a different values orientation—if you start from an orientation that says everyone should get all of the resources they’re eligible for, with a minimum of disruption, and without losing their rights—then you can get a different tool.
Which means, what we are really doing is hiding our real agenda behind the rhetoric of "efficiency." I wish I could have articulated that in graduate school!
"We actually smuggle all of these political decisions, all of these political controversies, all of these moral assumptions, into those tools. Often they actually act to keep us from engaging the deeper problems."
Yep, we are systematically making sure that we won't engage with the deeper problems. We are apparently getting highly efficient in this :(