Bad Robots – Twitter Faces Backlash Over Racially Problematic Algorithm

Bad Robot Outcome:
Twitter is re-evaluating its image cropping algorithm after evidence emerged that the technology seemingly favored images of white individuals while cropping out those of people of color. Executives from across the company have publicly assured users that the social media giant is taking the situation seriously and is committed to rectifying the problem.

The Story

When a user posts an image to Twitter, an algorithm crops a smaller “preview” version that is displayed before clicking through to see the image in its entirety. The company previously used facial recognition software for this cropping, but scrapped that technology due to performance issues.

Presently, Twitter employs a technology that focuses on “saliency”: the area within the overall image that viewers are most likely to look at first. This tends to be things like people, animals, and text. It is here where the problems begin.
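To make the mechanism concrete, here is a minimal sketch of saliency-based cropping using OpenCV’s spectral-residual saliency detector. This is not Twitter’s actual model (which is a trained neural network); it only illustrates the general idea of scoring every pixel for “interestingness” and centering the crop on the highest-scoring region.

```python
# A minimal sketch of saliency-based cropping, NOT Twitter's model.
# Requires opencv-contrib-python for the cv2.saliency module.
import cv2
import numpy as np

def saliency_crop(image: np.ndarray, crop_w: int, crop_h: int) -> np.ndarray:
    """Crop a crop_w x crop_h window centered on the most salient pixel."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        # Fall back to a plain center crop if detection fails.
        cy, cx = image.shape[0] // 2, image.shape[1] // 2
    else:
        # Use the location of the maximum saliency score as the focal point.
        cy, cx = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # Clamp the window so it stays inside the image bounds.
    h, w = image.shape[:2]
    x0 = min(max(cx - crop_w // 2, 0), max(w - crop_w, 0))
    y0 = min(max(cy - crop_h // 2, 0), max(h - crop_h, 0))
    return image[y0:y0 + crop_h, x0:x0 + crop_w]

# Example: generate a preview-sized crop of a local photo.
preview = saliency_crop(cv2.imread("photo.jpg"), crop_w=600, crop_h=335)
```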

Colin Madland, a PhD student, had been on a Zoom call with a Black colleague when he noticed that the video conferencing software failed to recognize the colleague’s face. When he then posted an image from the call to Twitter, he noticed that the preview had been cropped to display only Madland.

After Madland’s posts, others took to Twitter for “experiments” of their own. For example, entrepreneur Tony Arcieri discovered that the algorithm consistently cropped an image of US Senator Mitch McConnell and former President Barack Obama, omitting Obama every time. Similar results were found with other individuals, cartoon characters … and even dogs.

The Fall-out

Within days, several top executives at Twitter came forward publicly to address the algorithm’s troubling behavior. Parag Agrawal, the company’s Chief Technology Officer, commended the “public, open, and rigorous test” and noted that he was “eager to learn” from the experience.

Liz Kelley, a member of the company’s communications team, tweeted that testing of the algorithm had produced no evidence of bias, but that further analysis was needed. She went on to state that the company would “open source [its] analysis so others can review and replicate.”

Our View

While the cropping of images may not seem to be as serious as the other issues we’ve highlighted in this Bad Robots series, it is still troubling and historically problematic. Black faces, voices, and experiences have – for far too long – been drowned out in favor of their white counterparts.

We’ve seen the manifestations and consequences of these disturbing practices repeated in countless ways. Whether it’s Black children yearning for people who look like them in movies and other media, or white faces prioritized when searching for “professional” hairstyles – the list goes on. Such practices reinforce the dangerous presumption that whiteness is the norm and Blackness is “other.”

We do see some genuinely hopeful elements in Twitter’s response here, though, particularly its commitment to open up its analysis of the situation. As we’ve discussed in prior posts, so much of the trouble with AI arises when the technology is a “black box” that cannot be explained or properly analyzed. By opening up its analysis here, Twitter gives outside researchers a chance to discover the root of the problem. From there, not only can Twitter rectify the issue, but others can learn from – and avoid – a similar outcome in the future.

We would also recommend that, in the future, these types of algorithms be tested not only from a usability and functionality perspective, but also from a diversity and inclusion one. If Twitter’s testing had included this angle, perhaps the company would not be where it is today. The sketch below illustrates what such a test might look like.
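As a hedged illustration, here is a minimal sketch of the kind of paired test Arcieri ran by hand: stack two portraits vertically in both orders and check whether the crop’s focal point follows the swap. It reuses the illustrative spectral-residual detector from the sketch above, and the portrait filenames are hypothetical; a real test suite would cover many image pairs and use face detection to verify which person survives the crop.

```python
# A sketch of a paired "position swap" bias test, assuming the
# OpenCV spectral-residual detector from the earlier sketch.
# Portrait filenames below are hypothetical placeholders.
import cv2
import numpy as np

def focal_point(image: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) of the most salient pixel."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    assert ok, "saliency detection failed"
    return np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

def favored_half(top: np.ndarray, bottom: np.ndarray) -> str:
    """Stack two equal-width portraits and report which half the
    focal point (and therefore the crop) lands in."""
    stacked = np.vstack([top, bottom])
    row, _ = focal_point(stacked)
    return "top" if row < top.shape[0] else "bottom"

# Hypothetical test portraits, pre-resized to the same width.
a = cv2.imread("portrait_a.jpg")
b = cv2.imread("portrait_b.jpg")
# A position-neutral model should favor opposite halves across the swap;
# favoring the same person in both orders is a red flag worth auditing.
print(favored_half(a, b), favored_half(b, a))
```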

Joy Townsend

Written by: Andy Dalton