Bad Robots: Secretive Facial Recognition Software Company Challenged in Court by Civil Liberties Watchdog

Bad Robot Outcome:
The American Civil Liberties Union (ACLU) has filed a lawsuit in the state of Illinois to force the removal of biometric data collected, without consent, from state residents in what it claims is a violation of state law.

The Story

You may not have heard of a “tiny” company called Clearview AI, but your face is likely somewhere in its database of nearly three billion images scraped from millions of websites.

Clearview was founded by Hoan Ton-That, an Australian who moved to the United States in 2007. After two unsuccessful ventures (including an app that let iPhone users add Trump’s iconic hair to pictures of their friends), he became deeply interested in artificial intelligence and, in particular, facial recognition technology.

In 2016, Ton-That met Richard Schwartz (formerly an aide to NYC mayor Rudy Giuliani, as well as an editor at The New York Daily News). The two decided to go into business together, pairing Schwartz’s impressive network with Ton-That’s technical capabilities.

Clearview started by recruiting a small engineering team. One engineer built a program that scraped the internet for images of people’s faces, often in direct violation of the terms of service of the websites from which the images were collected. Another member of the team worked on improving a facial recognition algorithm sourced from various academic papers.
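As a rough illustration of what the scraping step involves (the real crawler’s design has not been made public, so the libraries and logic below are assumptions made for the sketch), a crawler of this kind typically fetches a page and harvests the image URLs it references:

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def harvest_image_urls(page_url: str) -> list[str]:
    """Fetch one page and collect the absolute URLs of the images it embeds.

    A face-scraping pipeline like the one described would then download
    each image and run a face detector, keeping only photos with faces.
    """
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [urljoin(page_url, img["src"])
            for img in soup.find_all("img") if img.get("src")]
```

Run at scale across millions of sites, a loop this simple is all it takes to amass billions of photos, regardless of what the sites’ terms of service say.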

The result, as described by Ton-That, was a “state-of-the-art” neural net that converted each image into a mathematical representation, or vector, based on its facial geometry (e.g., how far apart a person’s eyes are). Photos with similar vectors were then clustered into “neighborhoods” within Clearview’s vast directory.
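To make the mechanics concrete, here is a minimal sketch of how this style of vector-based face search generally works. Clearview’s actual model is proprietary, so the stand-in embedder, the vector sizes, and the brute-force search below are illustrative assumptions, not details of the company’s system:

```python
import numpy as np

# Stand-in embedder: a real system would use a trained face-recognition
# network here. This random projection merely has the right "shape":
# it maps a flattened 64x64 grayscale face image to a 128-D vector.
rng = np.random.default_rng(0)
_PROJECTION = rng.standard_normal((128, 64 * 64))

def embed_face(image):
    """Map one 64x64 face image to a 128-dimensional unit vector."""
    v = _PROJECTION @ np.asarray(image, dtype=float).reshape(-1)
    return v / np.linalg.norm(v)  # unit-normalize for cosine similarity

def build_index(scraped_faces):
    """Embed every scraped face, remembering where each came from."""
    urls, vectors = zip(*[(url, embed_face(img)) for url, img in scraped_faces])
    return list(urls), np.vstack(vectors)

def search(probe_image, urls, vectors, top_k=5):
    """Return the stored faces whose vectors lie nearest the probe's."""
    scores = vectors @ embed_face(probe_image)  # cosine similarity per face
    best = np.argsort(-scores)[:top_k]
    return [(urls[i], float(scores[i])) for i in best]

# Toy usage: three random "faces" standing in for scraped photos.
faces = [(f"https://example.com/{i}.jpg", rng.random((64, 64))) for i in range(3)]
urls, index = build_index(faces)
print(search(faces[1][1], urls, index, top_k=1))  # recovers its own source URL
```

Clustering similar vectors into “neighborhoods” serves the same end as the brute-force scan above, just faster: a probe is compared only against nearby clusters rather than against every one of the billions of stored faces.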

By late 2017, this technology had been refined into an impressive facial recognition tool called “Smartcheckr.” After careful deliberation over who would want to purchase this type of software, Clearview set its sights on law enforcement agencies as key adopters.

The Indiana State Police was Clearview’s first paying customer. In that case, a bystander had recorded a shooting in a public park, and the authorities ran a still image taken from the video through Clearview’s vast database. Despite the suspect not appearing in any government database, Smartcheckr identified them within 20 minutes using an image scraped from social media.

The company has since expanded its operations beyond law enforcement. A leaked list of its clients identified private entities, such as Equinox and Walmart, and individuals such as Ashton Kutcher. The company claims a 75% accuracy rate, but experts note that this is hard to verify, since there has been no independent testing to determine the tool’s rate of false positives (innocent people wrongly flagged as matches).
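To see why the lack of false-positive testing matters at this scale, consider a back-of-the-envelope calculation. The 0.01% error rate below is a purely hypothetical figure chosen for illustration, not a measured property of Clearview’s tool:

```python
# Hypothetical numbers for illustration only: no independently measured
# false-positive rate for Clearview's tool exists.
database_size = 3_000_000_000   # approximate number of faces in the database
false_positive_rate = 1e-4      # assume 0.01% of non-matching faces get flagged

# A single probe photo is effectively compared against every stored face,
# so even a tiny per-comparison error rate compounds enormously.
expected_false_matches = database_size * false_positive_rate
print(f"~{expected_false_matches:,.0f} expected false matches per search")
# -> ~300,000 expected false matches per search
```

Under that assumption, a single search could surface hundreds of thousands of innocent look-alikes, which is precisely why independent testing of the false-positive rate matters.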

The Fall-Out

Despite being praised by its customers in law enforcement, Clearview has plenty of critics. Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, warns that “the weaponization possibilities of this technology are endless. Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using them to dig up secrets about people to blackmail them or throw them in jail.”

Al Gidari, a privacy professor at Stanford Law School, described Clearview’s practices as “creepy” while also acknowledging that “there will be many more of these companies. There is no monopoly on math. Absent a very strong federal privacy law, we’re all screwed.”

As noted by Gidari, there is currently no United States legislation at the federal level that prohibits this sort of technology. However, in May 2020 the ACLU brought suit against Clearview in the state of Illinois claiming that the company’s practices violated the state’s Biometric Information Privacy Act (BIPA).

BIPA forbids the nonconsensual capture of unique biometric identifiers (e.g., “faceprints”) of Illinois residents. The law was enacted in 2008 after Pay By Touch, a company that provided fingerprint scanners to major retailers, went bankrupt, raising the prospect that its vast biometric library would be sold off. Illinois nonetheless stands nearly alone with respect to the biometric privacy of its residents: a few other states (such as Texas and Washington) have biometric privacy statutes, but only BIPA gives individuals the right to sue over violations.

The ACLU’s legal challenge is joined by organizations that support survivors of domestic and sexual abuse, undocumented immigrants, and other vulnerable groups. Linda Xóchitl Tortolero, president & CEO of Mujeres Latinas en Acción (an organization dedicated to empowering Latinas), laments that Clearview’s technology “gives free rein to stalkers and abusive ex-partners, predatory companies, and ICE agents to track and target us.”

The case has not yet gone to trial.

Our view

This “Bad Robot” example is particularly interesting in that, unlike some of our past blog posts, it appears to involve a rather straightforward violation of existing privacy law (as opposed to conduct within a legal grey area). As such, there are two important points we want to raise as advocates of a future in which the ethics of artificial intelligence are at the forefront of both technological innovation and sensible policymaking.

First and foremost, we are in favor of privacy laws such as Illinois’ BIPA. This particular law requires that any company wanting to collect biometric information (fingerprints, face scans, etc.) from a state resident must: (1) notify that individual; and (2) obtain their written consent. We at the Ethical AI Advisory think this is a sensible approach, given the particularly sensitive nature of biometric information. To echo the sentiments of an ACLU staff attorney, the collection of people’s biometric data “gives companies, governments, and individuals the unprecedented power to spy on us wherever we go.” As such, individuals deserve the right to safeguard such data and consent (or not) to its use and disclosure.

Furthermore, in line with Professor Gidari’s comment quoted above, we encourage countries to implement such legislation federally (instead of just at the state or city level). While certain US municipalities (like San Francisco) have banned government use of facial recognition, Illinois’ BIPA stands alone with respect to the robustness of its biometric privacy protections.

This creates a patchwork of different privacy protections that a company operating on a national (let alone international) scale needs to uphold. For scrupulous companies, this raises the cost of doing business. For – shall we say – less scrupulous entities, the lack of a federal standard means that enforcement falls to individual states, or even cities, leaving gaps to exploit.

We at the Ethical AI Advisory support the adoption – at both the private and public level – of the Australian Minister for Industry, Innovation, and Science’s AI Ethics Guidelines. A key tenet of these guidelines is “privacy protection and security.” The implementation of national privacy standards for artificial intelligence, both in the United States and Australia (and beyond), will go a long way toward meaningfully realizing this foundational principle.

Written by: Andy Dalton