Race After Technology: Shining Light on the New Jim Code

How biased are our algorithms?

I spent part of my childhood living with my grandma just off Crenshaw Boulevard in Los Angeles. My school was on the same street as our house, but I still spent many a day trying to coax kids on my block to “play school” with me on my grandma’s huge concrete porch covered with that faux-grass carpet. For the few who would come, I would hand out little slips of paper and write math problems on a small chalkboard until someone would insist that we go play tag or hide-and-seek instead. Needless to say, I didn’t have that many friends! But I still have fond memories of growing up off Crenshaw surrounded by people who took a genuine interest in one another’s well-being and who, to this day, I can feel cheering me on as I continue to play school.

Some of my most vivid memories of growing up also involve the police. Looking out of the backseat window of the car as we passed the playground fence, boys lined up for police pat-downs; or hearing the nonstop rumble of police helicopters overhead, so close that the roof would shake while we all tried to ignore it. Business as usual. Later, as a young mom, anytime I went back to visit I would recall the frustration of trying to keep the kids asleep with the sound and light from the helicopter piercing the window’s thin pane. Like everyone who lives in a heavily policed neighborhood, I grew up with a keen sense of being watched. Family, friends, and neighbors—all of us caught up in a carceral web, in which other people’s safety and freedom are predicated on our containment.

Now, in the age of big data, many of us continue to be monitored and measured, but without the audible rumble of helicopters to which we can point. This doesn’t mean we no longer feel what it’s like to be a problem. We do. This book is my attempt to shine light in the other direction, to decode this subtle but no less hostile form of systemic bias, the New Jim Code.

Engineered Inequality: Are Robots Racist?

WELCOME TO THE FIRST INTERNATIONAL BEAUTY CONTEST JUDGED BY ARTIFICIAL INTELLIGENCE.

So goes the cheery announcement for Beauty AI, an initiative developed by the Australia- and Hong Kong-based organization Youth Laboratories in conjunction with a number of companies that worked together to stage the first-ever beauty contest judged by robots. The venture involved a few seemingly straightforward steps:

  1. Contestants download the Beauty AI app.
  2. Contestants make a selfie.
  3. Robot jury examines all the photos.
  4. Robot jury chooses a king and a queen.
  5. News spreads around the world.

As for the rules, participants were not allowed to wear makeup or glasses or to don a beard. Robot judges were programmed to assess contestants on the basis of wrinkles, face symmetry, skin color, gender, age group, ethnicity, and “many other parameters.” Over 6,000 submissions from approximately 100 countries poured in. What could possibly go wrong?

On August 2, 2016, the creators of Beauty AI expressed dismay at the fact that “the robots did not like people with dark skin.” Of the 44 winners across the various age groups, all but six were White, and “only one finalist had visibly dark skin.” The contest used what was considered at the time the most advanced machine-learning technology available. Called “deep learning,” the software is trained to code beauty using pre-labeled images; the images of contestants are then judged against the algorithm’s embedded preferences. Beauty, in short, is in the trained eye of the algorithm.
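To make that mechanism concrete, here is a minimal sketch in Python (using scikit-learn) of how a model trained on human-labeled examples ends up scoring new faces according to whatever preferences the labelers held. The features, labels, and classifier below are illustrative stand-ins, not Beauty AI’s actual data or deep-learning architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row stands in for a face reduced to two measured features,
# rescaled to the 0-1 range (say, skin tone and facial symmetry).
training_faces = rng.random((200, 2))

# Human raters supply the labels. These synthetic raters systematically
# favor one end of feature 0, mimicking a biased label set.
labels = (0.8 * training_faces[:, 0] + 0.2 * training_faces[:, 1] > 0.6).astype(int)

# "Training" bakes the raters' pattern into the model's weights.
model = LogisticRegression().fit(training_faces, labels)

# At contest time, new photos are scored against those embedded preferences.
contestants = rng.random((5, 2))
scores = model.predict_proba(contestants)[:, 1]
for face, score in zip(contestants, scores):
    print(f"feature 0={face[0]:.2f}  feature 1={face[1]:.2f}  'beauty' score={score:.2f}")
```

Nothing in the training step asks whether the labels are fair; the model simply learns to reproduce them as efficiently as possible.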

As one report about the contest put it, “[t]he simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.” Columbia University professor Bernard Harcourt remarked: “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mindboggling.” Beauty AI is a reminder, Harcourt notes, that humans are really doing the thinking, even when “we think it’s neutral and scientific.” And it is not just the human programmers’ preference for Whiteness that is encoded, but the combined preferences of all the humans whose data are studied by machines as they learn to judge beauty and, as it turns out, health.

In addition to the skewed racial results, the framing of Beauty AI as a kind of preventative public health initiative raises the stakes considerably. The team of biogerontologists and data scientists working with Beauty AI explained that valuable information about people’s health can be gleaned by “just processing their photos” and that, ultimately, the hope is to “find effective ways to slow down ageing and help people look healthy and beautiful.” Given the overwhelming Whiteness of the winners and the conflation of socially biased notions of beauty and health, darker people are implicitly coded as unhealthy and unfit—assumptions that are at the heart of scientific racism and eugenic ideology and policies.

Deep learning is a subfield of machine learning in which “depth” refers to the layers of abstraction that a computer program makes, learning more “complicated concepts by building them out of simpler ones.” With Beauty AI, deep learning was applied to image recognition; but it is also a method used for speech recognition, natural language processing, video game and board game programs, and even medical diagnosis. Social media filtering is the most common example of deep learning at work, as when Facebook auto-tags your photos with friends’ names or when an app decides which news and advertisements to show you to increase the chances that you’ll click. Within machine learning there is a distinction between “supervised” and “unsupervised” learning. Beauty AI was supervised, because the images used as training data were pre-labeled, whereas unsupervised deep learning uses data with very few labels. Mark Zuckerberg refers to deep learning as “the theory of the mind . . . How do we model—in machines—what human users are interested in and are going to do?” But the question for us is: is there only one theory of the mind, and whose mind is it modeled on?
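For readers who want the supervised/unsupervised distinction in code, the short Python sketch below (again using scikit-learn) contrasts a supervised learner, which is steered by human-provided labels, with an unsupervised one, which only groups unlabeled data. The data, labels, and models are illustrative stand-ins, not anything Beauty AI or Facebook actually ran.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = rng.random((300, 4))  # stand-in for features extracted from images

# Supervised: human-provided labels steer what the model learns to predict.
human_labels = (features[:, 0] > 0.5).astype(int)  # synthetic 0/1 judgments
supervised = LogisticRegression().fit(features, human_labels)

# Unsupervised: no labels at all; the algorithm simply groups similar examples.
unsupervised = KMeans(n_clusters=2, n_init=10, random_state=1).fit(features)

new_examples = rng.random((3, 4))
print("supervised predictions:  ", supervised.predict(new_examples))
print("unsupervised cluster ids:", unsupervised.predict(new_examples))
```

In both cases a human decides what counts as input, but only in the supervised case do human judgments directly define the “right answer” the machine is rewarded for reproducing.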

Book cover for "Race After Technology" by Ruha Benjamin

Ruha Benjamin, Member (2016–17) in the School of Social Science, is Associate Professor in the Department of African American Studies at Princeton University, where she studies the social dimensions of science, technology, and medicine; race and citizenship; and knowledge and power. While at IAS in 2016–17, she gave the After Hours Conversation “Are Robots Racist?,” which Benjamin describes in her book as “a ten-minute provocation [that] turned into a two-year project.” This article is an excerpt from Benjamin’s resulting book Race After Technology (Polity, 2019). https://bit.ly/2Nxppjl