Bad Robots – The Racial Discrimination Embedded in Facial Recognition Technology

Bad Robot Outcome:

The use of AI-driven facial recognition software as the sole basis for making arrests has come under scrutiny after several incidents of wrongful identification and arrest, which many perceive as evidence of systemic racial bias embedded in the algorithms.

The Story

In January of 2020, Robert Williams was arrested in front of his family by Detroit police and spent more than 30 hours in police custody. He was arrested because facial recognition software had matched his face to footage from a store's security feed during a robbery. Joy Buolamwini, a well-known researcher on algorithmic bias, once could not complete a class project because the robot she was supposed to interact with could not detect her face until she put on a white mask. And in 2019, a Brown University student was misidentified by another facial recognition system as one of the wanted suspects in the Sri Lanka Easter bombings.

The above cases are just a few of the many instances in which AI-driven facial recognition technology has failed. What these failures share is that they involve racial minorities, mostly African Americans and people of Middle Eastern descent. They point to a deeper problem within facial recognition systems: they are trained predominantly on Caucasian faces.

The racial bias embedded in these systems became plain when a study conducted at MIT found that the misidentification rate for dark-skinned women was as high as 34.7%, compared with an error rate of under 1% for light-skinned men. Such a disparity is alarming, especially given that the same technology is already deployed in the real world and actively used to match people's faces against criminal databases.

The Fallout

The use of facial recognition is not in itself bad, but its implications for minority groups are of major concern. In particular, Robert Williams's case raises the question of how much worse the outcome could have been had events unfolded differently, given documented law enforcement bias against minorities. The wrongful match points to an inherent danger lurking within these systems.

First and foremost, critics point to the racial bias introduced by the developers of such systems. These systems often build their recognition algorithms around facial features more prominent in one race than in others, and they are trained and continuously refined predominantly on white faces. From such a racialized foundation, systemic discrimination is born, and the result is incidents like the one Williams endured. Further, the lack of comprehensive laws regulating the development, deployment, and use of such systems only exacerbates the discrimination.
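This training skew is easiest to see in a disaggregated audit: measuring a system's error rate separately for each demographic group rather than in aggregate. The short Python sketch below is purely illustrative; the groups, scores, and threshold are all invented, and the "matcher" is a toy function whose scores are simply noisier for the underrepresented group, standing in for a model trained mostly on one group's faces.

```python
import random

random.seed(0)

THRESHOLD = 0.5  # hypothetical decision threshold: score >= 0.5 means "match"

def simulated_match_score(group, is_true_match):
    """Return a fake similarity score for a face comparison.

    Scores for group "B" are noisier, standing in for a model trained
    mostly on group "A" faces. All numbers are invented for illustration.
    """
    base = 0.7 if is_true_match else 0.3
    noise = 0.1 if group == "A" else 0.4
    return base + random.uniform(-noise, noise)

# Build a balanced, simulated evaluation set: 1,000 genuine pairs and
# 1,000 impostor pairs per group.
records = [(group, truth)
           for group in ("A", "B")
           for truth in (True, False)
           for _ in range(1000)]

errors = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for group, is_true_match in records:
    predicted_match = simulated_match_score(group, is_true_match) >= THRESHOLD
    totals[group] += 1
    if predicted_match != is_true_match:
        errors[group] += 1

# Report the misidentification rate per group, not in aggregate.
for group in ("A", "B"):
    print(f"group {group}: misidentification rate {errors[group] / totals[group]:.1%}")
```

Run as-is, the simulated matcher almost never misidentifies group A but errs on roughly a quarter of group B comparisons. An aggregate accuracy figure would mask exactly this disparity, which is why audits like the MIT study break results down by skin type and gender.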

Our View

The use of facial recognition to identify criminals is not in itself a bad development. However, equity must be built in at the development stage to eradicate racial bias in the algorithms. Fielding systems that are still effectively in testing and far from fully accurate is deeply unethical and can lead to catastrophic consequences. As long as such systems remain in use, we are, every single day, one false match away from a wrongful arrest, shooting, or conviction, a gamble we cannot afford as a society.

Written by:

Sarah Klain