Reflections on the transformative potential of refusal as method in the design of technology.
Last year at NeurIPS, one of the most well-respected computer science conferences in the world, the opening panel discussion on AI for Social Good didn’t go quite as people might have expected.
“I’m not usually in spaces like this, and I’m not entirely convinced that I haven’t surreptitiously walked into a terrorist den,” began Sarah Hamid, a community organizer based in Los Angeles and one of the core members of the Carceral Tech Resistance Network. As Sarah explained, “… like terrorists, technologists in spaces like this have a concept of what social good is… Everybody thinks that they’re doing good things in the world… that’s the scary thing. I don’t know how it is that we have a disconnect between kids in cages and the work that’s happening in spaces like this.”
There was a smattering of applause as others in the room shifted uncomfortably in their seats. Most of the morning had been devoted to presentations on how computer science might help tackle some of the world’s toughest problems, ranging from online content curation to supporting the UN’s Sustainable Development Goals. Throughout those talks, the line between problem and solution had been clear and uncomplicated. Now Sarah was pushing the audience to think more deeply about the ways their efforts might contribute to the very problems they claim to be solving with technology.
The moderator asked Sarah how she thinks computer scientists should handle competing definitions of social good, citing the various mathematical formalisms that have been developed to grapple with ideas like “fairness” in AI. Again, Sarah responded with an answer that few people might have anticipated.
“I think that’s a very interesting question,” she said. “I just don’t think it’s an important question.” Sarah went on to describe a multi-year campaign she participated in to dismantle Operation LASER, a predictive policing program that the LAPD used to deploy officers to “hot spots” for violent crime and gang-related activity. Sarah described the painstaking efforts her community undertook to learn about and fight against Operation LASER. “Meanwhile seven people have died in LASER zones,” she explained, “…while we’re trying to understand how to defend our community against these data-driven programs…people are dying in these LASER zones.”
Sarah used this example to illustrate how abstract debates about social good were out of touch with the very real harms that communities face in their daily lives as a result of data-driven technologies. If computer scientists really hoped to make a positive impact on the world, then they would need to start asking better questions.
Interactions like this can be deeply unsettling, especially for people who strive to use their time and talents to build technology for social good. But over the last few years, conversations like this have become foundational to my work. Sarah pushed her interlocutors to connect the dots and be accountable for the ways that they perpetuate violence through the study and design of technology. Tech-enabled violence comes in the form of police tools like Operation LASER and PredPol, but it also shows up in the way life chances are distributed through other racialized and gendered systems of meaning and control — what Dean Spade terms “administrative violence.”
On its face, administrative violence appears banal, neutral, and objective…even benevolent. Who wouldn’t want to help government officials identify children at risk of abuse, or support the placement of people into public housing? But as Virginia Eubanks, Ruha Benjamin and numerous others have documented, these technologies often function more like behavior modification programs which prioritize coercion and compliance over the provision of care.
As Dorothy Roberts (2018) argues, the problem is not that these technologies produce wrong assessments, but that they are used to support a fundamentally wrong approach to addressing community needs. Tuck and Yang (2014) call this approach to research “inquiry as invasion,” or “the proliferation of damage-centered narratives, rescue research and pain tourism” that erases systemic violence through an adept “arrangement of justifications and unhistories” that make our present social condition seem natural, inevitable and immutable.
Unfortunately, these moral hazards are not captured in the formal fairness criteria that computer scientists develop to illustrate the trade-offs of different algorithmic design choices. While these debates give the impression of rigor, Seeta Gangadharan warns that abstract problem solving effectively disappears people and history into mathematical equations, which amounts to its own kind of violence.
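For readers unfamiliar with what these formal fairness criteria actually look like, here is a minimal sketch using two standard textbook definitions, demographic parity and equal opportunity. The data are entirely invented for illustration, and the point is precisely the one Gangadharan makes: the computation reduces people to rows of ones and zeros.

```python
# Toy illustration of two standard fairness criteria computed on
# hypothetical predictions. All data here are invented.

def rate(preds):
    """Fraction of positive predictions."""
    return sum(preds) / len(preds)

def tpr(preds, labels):
    """True-positive rate: positive predictions among true positives."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Hypothetical binary predictions and true labels for two groups.
group_a_preds  = [1, 1, 0, 1, 0, 1]
group_a_labels = [1, 1, 0, 0, 0, 1]
group_b_preds  = [1, 0, 0, 0, 0, 1]
group_b_labels = [1, 1, 0, 0, 1, 1]

# Demographic parity: compare overall positive-prediction rates.
dp_gap = abs(rate(group_a_preds) - rate(group_b_preds))

# Equal opportunity: compare true-positive rates across groups.
eo_gap = abs(tpr(group_a_preds, group_a_labels)
             - tpr(group_b_preds, group_b_labels))

print(f"demographic parity gap: {dp_gap:.2f}")  # → 0.33
print(f"equal opportunity gap:  {eo_gap:.2f}")  # → 0.50
```

A model can close one gap while widening the other, which is the kind of trade-off these debates turn on; note that nothing in the arithmetic records who these people are or what happens to them after the prediction is made.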
Similarly, Sarah was warning her audience against investing too much time and energy in abstract debates on algorithmic fairness, because doing so diverts valuable resources and brainpower away from more urgent needs. She refused to engage with the default assumptions and ideas that so often frame conversations regarding “AI for Social Good.” She declined to answer the questions she was posed and instead redefined the questions that needed answering. Put simply, Sarah engaged in an act of refusal.
Refusal is an essential practice for anyone who hopes to design sociotechnical systems in the service of justice and the public good. As I’ve argued elsewhere, data scientists often lack the conceptual tools necessary to interrogate, resist, and re-imagine the power relationships which shape their work. As a result, data science reproduces what Donna Haraway refers to as a “conquering gaze from nowhere.” The concept of refusal could offer a transformative framework for re-imagining the work of data science as a liberatory practice.
To refuse is to say no — to turn down requests and opportunities to build technologies that are likely to produce harm. But refusal is more than just an exit strategy. It’s an opportunity to re-imagine the default categories, assumptions, and problem formulations which so often circumscribe the work of data science. Refusal is a beginning that starts with an end.
I first encountered this concept in my own work as someone on the receiving end of refusal. In 2017, I was part of an interdisciplinary team of researchers from MIT and Harvard who were interested in the ways that data were being used to promote bail reform. During the early stages of our project, our team reached out to a local bail fund to learn more about their work. During the meeting, we threw out a number of ideas on ways we might use data collected by the bail fund to better understand different pretrial outcomes. The organizers were rather stoic about the ideas we pitched.
Toward the end of the meeting, Atara Rich-Shea, the executive director of the Massachusetts Bail Fund, leaned in and shared her frank perspective. She said the bail fund was frequently contacted by academics who were only interested in asking their own questions, and that for the most part, those questions were harmful to the people that she served. She went on to explain the ways that academics undermine the work of movements for liberation by asking questions that either siphoned people off into categories of “deserving and undeserving,” erased the violence of incarceration, or distracted from more pressing issues.
Atara’s refusal was a generative and strategic act, one which opened up space for us to renegotiate the assumptions and key vocabularies underlying our work. This approach is resonant with the way Indigenous scholars have talked about the transformative potential of refusal in other fields such as anthropology. As Sarah Wright (2018) argues, refusal is “a way of reframing debate, refocusing the terms of engagement, and re-centering it in productive ways.”
So what exactly is re-centered when we refuse? In the case of bail reform, we came into the conversation thinking that our task was to help key decision makers (judges, prosecutors, etc.) distinguish “signal from noise” when making time-sensitive decisions about potentially dangerous individuals. Atara pushed us to reframe the problem in terms of a runaway courtroom culture that has enabled pretrial detention rates to skyrocket in spite of the rare and declining incidence of violent crime. Rather than focusing exclusively on the behavior of people awaiting trial, we should try to understand why American judges send so many people to jail, in spite of state and federal laws protecting against excessive pretrial detention.
As Alex Zahara (2016) explains, refusal is often “intended to redirect academic analysis away from harmful pain-based narratives that obscure slow violence and towards the structures, institutions, and practices that engender those narratives.” Over the course of the next several months, our team developed a new set of research questions that were based on this critical reframing. But it was not immediately apparent to us what the most effective approach might be for accessing data to support our work. Through a number of conversations with local and state officials, our team quickly realized that we would also need to hone our skills of refusal in order to actively shape the course of our research agenda away from harmful modes of knowledge production.
For example, we spent several months negotiating with a state government to gain access to data that we could use to understand how judges were responding to reforms aimed at reducing pretrial jail populations. During these conversations, government officials expressed interest in working with us to understand the impact of supervised conditions of release, such as electronic monitoring and mandatory drug testing. Although this was beyond the scope of our original research question, we felt it was necessary to explore the topic as a stepping stone to acquiring the other data we needed.
After several weeks of careful consideration, we decided not to proceed with the study on supervised conditions of release, because doing so would likely have legitimized harmful practices such as electronic monitoring. The limited selection of outcome variables provided to us would have cast practices such as pretrial detention as effective interventions, while simultaneously erasing the well-established harms of incarceration. That kind of erasure amounted to a violence we were not willing to participate in.
Honestly, we were pretty bummed to conclude that we couldn’t move forward with the supervision study. It felt like we were missing out on an opportunity to engage in an important policy conversation, and we had some anxiety about turning down this request while we were still negotiating access to the data we needed for the judge study. But we eventually came to view this refusal as a generative act: it gave us the opportunity to have conversations with key decision makers from the courts about the limits and opportunities of the data they had collected, and to imagine alternative research questions.
Let’s be real though: these conversations don’t just magically change the way people in positions of power and authority think. Over the last few years, my attempts to engage in generative modes of refusal have been met with mixed results. Emails go unanswered. Polite nods and pregnant silences fill the room. Doors have been closed. It’s easy to slip into the mindset of “if I don’t do this bad study, someone else will do it even worse than me.”
But these experiences have created space for other, more transformative relationships to take root. Our community collaborations have only deepened as we’ve begun to actively participate in the struggle for shared power rather than trying to direct initiatives or evaluate the issues from the outside. Our relationships with directly impacted communities are based on the trust and accountability that emerges from the process of navigating refusal. Refusal allows computer scientists to reposition themselves as active producers of the conditions of inquiry: it breaks down carceral, violent modes of knowledge production and imagines a new, reparative role in their place.
Yet, refusal is not something that is rewarded within the academy. There are virtually no venues for scholars to share the essential insights gleaned from the process of deciding not to study or build a technology. The academy’s unrelenting appetite for “original research” means that scholars are constantly hunting for new objects of study, and the easiest targets are always poor, racialized communities of captive “Others.”
Moreover, refusal doesn’t feel good, neither to give nor to receive (at least at first; it gets easier with time). If we’re going to get serious about refusal as an essential design practice, we’ll need to attend to the affective dimensions of engineering, to the ways that hegemonic desires for productivity, scale, and impact shape our motivations to do certain kinds of work and ask certain kinds of questions. I am constantly re-learning lessons I’ve already been taught, undoing desires that I’ve already undone. This work is iterative and never-ending, and it starts with refusal. As Bergman and Montgomery argue, “Undoing Empire means undoing oneself. This is never a purely negative undoing, because it also means becoming capable of something new.”
I’ll say it again: Refusal is a beginning that starts with an end.
In spite of the challenges outlined above, progress is underway. I’m encouraged by projects like the Feminist Data Manifest-No, which explicitly outlines ways scholars can refuse harmful data regimes, while affirming commitments to more radical and transformative data futures. I’m inspired by the work of people like Seeta Gangadharan, Ruha Benjamin, and Jonathan Zong and Nate Matias, who are all actively exploring ways that the concept of refusal fits into the process of technology design. In the classroom, Erhardt Graeff has been charting out ways we can teach students about the “responsibility to not design.” And groups like the Coalition for Critical Technology are committed to developing collective responses to refuse harmful modes of research and technology development within the academy.
As Tuck and Yang (2014) argue, “Rather than chasing aims of objectivity, we encourage researchers to take up a stance of objection, one that will interrogate power and privilege, and trace the legacies and enactments of settler colonialism in everyday life.” It’s time for the fields of engineering and technology studies to take up this call for objection. It’s time for us to refuse the default modes of engagement which are handed down to us by the gatekeepers of data. It’s time for us to embrace refusal as a first step toward asking better questions.