Bad Robots: UK Home Office Backs Away From Immigration Algorithm After Legal Challenge

Bad Robot Outcome:
After being sued by two groups, the United Kingdom’s Home Office has agreed to halt its use of, and substantially redesign, an algorithm it had been using to analyse visa applications and support decisions on them.

The legal challenge successfully argued that the technology used by the department was both discriminatory under the Equality Act 2010 and irrational under common law.

The Story

The Home Office is a ministerial department of the Government of the United Kingdom responsible for immigration, security, and law and order. Since 2015, the department had been using a piece of technology known as a “streaming tool” to categorize visa applications according to how much scrutiny each application should receive.

The streaming tool grouped visa applications into one of three categories: green, yellow, and red. An application in the “red” category was scrutinized much more closely than its green or yellow counterparts and required additional approval from a more senior officer.

Applications assigned a red rating were far less likely to be approved than those designated as either green or yellow. In fact, green applications had a success rate over 99%, while those flagged as red had less than a 50% chance of approval.

Although the exact criteria used by the algorithm to assess applications are still unknown (more on that below), it has been revealed that one of the factors used in the ranking system was the nationality of the visa applicant. The Equality Act 2010 generally prohibits discrimination on the basis of nationality, yet certain nationalities were treated as “suspect” under a Nationality Risk Assessment authorised under the Act, based on the number of recorded “adverse events” for that nationality, such as overstaying, working without proper authority, and past application refusals.

That last point (past application refusals) is particularly troubling. Because the algorithm penalized nationalities with prior refusals by steering their applications toward the red category, it created a vicious cycle in which particular nationalities were perpetually penalized.
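To make that cycle concrete, here is a minimal, purely hypothetical sketch of the mechanism. None of the names, weights, or thresholds below come from the Home Office, whose actual criteria remain undisclosed; they are invented solely to show how counting refusals as “adverse events” can ratchet a nationality from green to red and keep it there.

```python
# Hypothetical illustration only: the streaming tool's real criteria, weights,
# and thresholds have never been disclosed, so every name and number here is invented.

RED_THRESHOLD = 10     # invented cut-off for the "red" (highest-scrutiny) category
YELLOW_THRESHOLD = 5   # invented cut-off for the "yellow" category


def risk_rating(adverse_events: int) -> str:
    """Map a nationality's tally of recorded 'adverse events' to a traffic-light rating."""
    if adverse_events >= RED_THRESHOLD:
        return "red"
    if adverse_events >= YELLOW_THRESHOLD:
        return "yellow"
    return "green"


def simulate_feedback_loop(years: int, starting_adverse_events: int,
                           applications_per_year: int) -> None:
    """Show how counting refusals as adverse events feeds back into next year's rating."""
    adverse_events = starting_adverse_events
    for year in range(1, years + 1):
        rating = risk_rating(adverse_events)
        # Illustrative refusal rates, loosely echoing the figures above:
        # green applications succeeded over 99% of the time, red under 50%.
        refusal_rate = {"green": 0.01, "yellow": 0.15, "red": 0.55}[rating]
        refusals = round(applications_per_year * refusal_rate)
        # The self-reinforcing step: this year's refusals are added to the very
        # tally that produced the rating in the first place.
        adverse_events += refusals
        print(f"Year {year}: rating={rating}, refusals={refusals}, "
              f"running adverse events={adverse_events}")


simulate_feedback_loop(years=5, starting_adverse_events=4, applications_per_year=100)
```

Even with these invented numbers, a nationality that starts with a clean-looking record is pushed into the red category within a couple of cycles and never comes back down, which is precisely the perpetual penalty described above.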

In June 2020, Foxglove (a group that seeks to combat the misuse of technology by governments and private entities), in conjunction with the Joint Council for the Welfare of Immigrants (JCWI, a charity that works on behalf of immigrants), filed a judicial review claim against the UK Home Office over its use of the streaming tool. They argued that the algorithm was both discriminatory and irrational.

The Fall-Out

In early August 2020, the UK Home Office responded to the legal challenge brought by Foxglove and the JCWI. After reviewing the claims, the department agreed to discontinue its use of the streaming tool.

On the issue of discrimination, Foxglove and the JCWI argued that the ratings assigned by the algorithm were of material importance to the eventual approval decisions, even though they were only supposed to inform the level of scrutiny applied. They claimed that a red classification created confirmation bias, leaving the officers assigned to review an application less likely to approve it. They were also able to point to a 2017 report from the Independent Chief Inspector of Borders and Immigration, which stated that the algorithm had become a “de facto decision-making tool.”

The claimants also argued that the algorithm was irrational because it counted visa application refusals as “adverse events” and then fed those same adverse events back into its assessment of new applications. In effect, the streaming tool was classifying certain applications as high risk simply because it had done so in the past.

Our View

First and foremost, we at the Ethical AI Advisory commend the decision of the UK Home Office to discontinue its usage of an algorithm that was fraught with discrimination and had been producing unfair and unwarranted outcomes for individuals. However, the department is hardly without blame.

Even in light of the successful judicial review, the details of the particular algorithm at issue here are still opaque at best. Aside from admitting that it used a secret list of suspect nationalities, the UK Home Office has still not disclosed any of the other factors that were used to categorize visa applications. Transparency is a fundamental aspect of ethical artificial intelligence. When algorithms are used for public functions, such as the assessment of visa and immigration status, individuals should be informed of how such technology is being used.

Furthermore, this particular example of AI gone wrong reminds us of one of our previous Bad Robots blog posts. There, an algorithm used to assess the performance of individual students factored the historical performance of each student’s school into its decisions. As a result, students were being assessed using a factor that had nothing to do with their individual performance. Here, a very similar thing was happening: individual visa applications were being assessed using characteristics of the applicant’s home country, without reference to the individual applicant. This is both arbitrary and unfair.


Written by:

Andy Dalton