Bots Are Terrible at Recognizing Black Faces. Let's Keep It That Way.

Image via Smithsonian.com

Color photography was invented for light skin. Facial recognition technologies are just carrying on color film’s history.

Daily Beast, February 8, 2019

In a country where crime prevention already associates blackness with inherent criminality, why would we fight to make our faces more legible to a system designed to police us?

I haven’t watched too many dystopian science fiction movies, but I have seen more than one where the protagonists attempt to somehow obscure their identity from the police and their surveillance systems. Why? We cheer for the protagonists’ attempts to conceal their identities and evade, thwart, and undermine a system tracking, recording, and potentially penalizing their every move. We believe that people deserve some kind of right to privacy, and so we celebrate the heroic journey of the individual fighting to free herself and others.

In those filmscapes, it’s an advantage of sorts to have a face that’s illegible to the facial recognition apparatus. In reality, that’s not how it usually works. The failure of cameras (and even soap dispensers!) to properly identify or respond to black people is commonly attributed to technological error or algorithmic bias, but it isn’t actually reflective of either. Rather, it points to a deeply entrenched white supremacist worldview of who is classified as fully “human.”

The popular argument for helping improve facial recognition, and particularly its issues with black faces, is that it would mean fewer “wrongful arrests.” The London Metropolitan Police’s trials of automated facial recognition software, for example, yielded false-positive identifications a staggering 98.1 percent of the time. Amazon’s facial recognition tool, Rekognition, which American law enforcement agencies use today and which is particularly bad at positively identifying dark-skinned women, mistakenly connected 28 sitting members of Congress (including six members of the Congressional Black Caucus) to pictures from mugshots last year. So, the argument goes, we need systems that are able to see and distinguish us, and the positive modification of these systems is indicative of a step towards a more equitable world.

This is a well-meaning consideration, but one based on what may be the misguided hope of being able to reform from within a pipelined preschool-to-prison carceral system in which black people are regularly arrested and detained for “fitting the description.” Black people are functionally interchangeable and indistinguishable, and not afforded the privilege of individuality, for a reason. The purpose of the world’s largest prison system, the United States’, is less about arresting the right people in service of protecting public safety than about the creation and maintenance of a giant labor pool for individuals and companies invested in the material structure of the system.

Automated facial recognition software perpetuates the racialized sight (or lack thereof) that plagued conventional cameras as colonialism, per Alexander Weheliye’s definition, helped divide and “discipline humanity into full humans, not-quite-humans, and nonhumans.” Light-skinned Europeans set the template for the “human,” in explicit opposition to the brown-skinned Africans demoted to not-quite humans and nonhumans through missionary conversion attempts and the transatlantic slave trade.

In her book Dark Matters: On the Surveillance of Blackness, Simone Browne describes the racial origins and evolutions of Jeremy Bentham’s panopticon, an institution of social control and surveillance where individuals are watched without ever knowing whether they are being watched or where the watcher is located. Bentham envisioned it while traveling aboard a ship with “18 young Negresses” who’d been enslaved and held “under the hatches.” The contemporary surveillance structure cannot be delinked from the ensnaring and dehumanizing of black people.

Browne also refers to Didier Bigo’s recent coinage of the “banopticon,” a portmanteau of “ban” and “panopticon,” to describe technology-mediated systems of surveillance that assess individuals based on perceptions and designations of risk. Foundational is the idea of citizenship and assignments of “risks” to a nation-state (the “illegal immigrant”) or even a neighborhood (the “dangerous minority”). The euphemistic idea of “public safety” is always racialized, always defined by threats posed to a genteel white Christian “public.” ...
Read full article at Daily Beast