Bad Robots – China Uses Artificial Intelligence to Target Uighur Muslim Population

Bad Robot Outcome: Despite the Chinese government’s efforts to shield this information from the international community, it is well documented that it has committed wide-scale human rights violations against the Uighur Muslim population of Xinjiang. In the past year, it has further come to light that various Artificial Intelligence technologies are being used to target and suppress this already marginalized group.

The Story

As I write this article, at least one million Uighur Muslims are being held in what the Chinese government calls “re-education centers.” Despite the not-so-sinister name placed on such “centers,” they are internment camps enclosed by barbed wire and guarded from watchtowers. There are around 11 million Uighurs in Xinjiang – an autonomous region in Northwest China that has been under Chinese control since its annexation in 1949. Xinjiang, designated a “special economic zone,” is the country’s largest producer of natural gas.
China claims that Uighur Muslims are a threat to national security, citing two attacks in 2013 and 2014 attributed to Uighur militants. Since then, the Chinese government has cracked down on the ethnic group with measures such as demolishing mosques and passing laws that prohibit men from growing long beards and women from wearing veils.
It has since surfaced that Artificial Intelligence is being used to further target the Muslim population of Xinjiang. In 2019, various documents and interviews came to light confirming that the Chinese government had been using facial recognition technology, embedded within surveillance cameras, to track and control Uighur Muslims. This was the first known instance of a government intentionally using AI for explicit racial profiling. This disturbing trend is being perpetuated by Chinese law enforcement agencies and fueled by startups within the country that are catering to the demand. Since 2018, almost two dozen police departments across the vast country have sought to use such technologies. CloudWalk, a Chinese software company, markets its own technology as being able to recognize “sensitive groups of people.”
CloudWalk is just one such company in the space. Others include Yitu, Megvii, SenseTime, and HikVision – a company that sold minority recognition software alongside the cameras themselves. Disturbingly, these startups have raised significant funding from foreign (including US-based) investors such as Sequoia, Fidelity International, and Qualcomm Ventures. The facial recognition technology used by Chinese law enforcement agencies is underpinned by machine learning. In this particular use case, engineers feed thousands of images – each labeled either “Uighur” or “non-Uighur” – to AI systems, which then learn functions that distinguish members of the ethnic group from others. As noted above, the practice has become widespread throughout the country. One source claims that a national database stores all images of Uighurs who leave Xinjiang. Police in the city of Sanmenxia used facial recognition to identify residents over 500,000 times in just one month. Most recently, Chinese tech giant Huawei was exposed for testing facial recognition software that would send “Uighur alarms” upon the identification of a member of the ethnic group. Huawei, the second-largest maker of smartphones in the world, tested a system that was able to take real-time snapshots of pedestrians caught on surveillance cameras and then replay the footage immediately before and after a “Uighur alarm” was tripped.
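The supervised-learning mechanism described above – labeled examples fed to a system that learns a function separating one group from another – can be sketched in deliberately generic terms. The toy classifier below is a minimal, hypothetical illustration of that mechanism (a nearest-centroid model on synthetic feature vectors with abstract labels “A” and “B”); it is not a reconstruction of any company’s actual system, and real facial-recognition pipelines use deep neural networks over image data rather than hand-built features.

```python
# Generic sketch of supervised classification: a model is "trained" on
# labeled feature vectors, then assigns new inputs to the learned classes.
# Labels and features here are entirely synthetic.

def train_nearest_centroid(examples):
    """examples: list of (feature_vector, label). Returns a centroid per label."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, value in enumerate(vec):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    # Average the accumulated vectors to get each class centroid.
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, vec):
    """Assign vec to the label whose centroid is nearest (squared Euclidean)."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(vec, c))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Toy training set: two labeled clusters in a 2-D feature space.
training = [([0.1, 0.2], "A"), ([0.0, 0.3], "A"),
            ([0.9, 0.8], "B"), ([1.0, 0.7], "B")]
model = train_nearest_centroid(training)
print(predict(model, [0.05, 0.25]))  # → A
print(predict(model, [0.95, 0.75]))  # → B
```

The ethical point the article makes is visible even in this toy: the classifier itself is indifferent to what the labels mean. Whether the categories are benign or repressive is decided entirely by whoever defines the labels and deploys the system.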

The Fall-Out

The implications of China’s usage of this type of technology are vast and terrifying. It has already interned over one million Uighur Muslims and continues to repress the group with this overreaching surveillance. To be clear, China is not alone in having gone too far down this slippery slope. In fact, in the United States, there are many examples of “racism built into [its] algorithmic decision making,” as noted by Jennifer Lynch – surveillance litigation director at the Electronic Frontier Foundation. However, as noted above, China is the first government to explicitly introduce race-based facial recognition technology into its surveillance capabilities, drawing much concern and criticism from the international community – particularly considering that many of the Chinese startups listed above have plans to expand internationally. As stated by Jonathan Frankle, an AI researcher at the Massachusetts Institute of Technology, “I don’t think it’s overblown to treat this as an existential threat to democracy.” Claire Garvie, an associate at the Center on Privacy and Technology at Georgetown Law, further warned of the perils of race-based facial recognition: “If you make a technology that can classify people based on ethnicity, someone will use it to repress that ethnicity.” This is exactly what we’re seeing in China.

There has been commercial fallout as well. Despite its size, Huawei sells no smartphones in the United States. In July 2020, the UK banned the company from its 5G infrastructure. Additionally, the US Commerce Department blacklisted eight Chinese companies (including Megvii) for their contribution to human rights violations against Uighur Muslims.

Our View

We at the Ethical AI Advisory agree with the viewpoints espoused by Claire Garvie and Jonathan Frankle above. To put it quite simply, facial recognition technology that targets certain ethnicities is ethically problematic to the point of being downright dangerous.

It flies directly in the face of the third (of eight) AI Ethics Guidelines – fairness. Any technological system that singles out an ethnic group is inherently prone to unfairness. One might argue that such technology has legitimate purposes, but the risks of misuse (as noted by Claire Garvie) far outweigh any potential positive applications.

We stand in solidarity with the international community that has opposed the use of facial recognition technology to target and oppress Uighur Muslims.

Joy Townsend

Written by:

Andy Dalton