What can go wrong ethically with AI?

Artificial intelligence systems ‘learn’ based on the data they are given.

This, along with many other factors, can lead to biased outcomes. Without careful attention, there is a high risk that AI systems will reflect the status quo or generate blind spots.

In practice, this can mean that some groups of people are discriminated against on the basis of factors such as race, age, gender, ethnicity and ability.

There have been a number of reported instances of racial bias arising from AI systems used in the US. For example, a recent study uncovered significant racial bias in an algorithm widely used by US hospitals to allocate health care to patients. Hospitals and insurers use the algorithm, and others like it, to help manage care for about 200 million people in the United States each year. The study concluded that the algorithm was systematically discriminating against black people: it was less likely to refer black people than equally sick white people to programmes that aim to improve care for patients with complex medical needs.
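The kind of comparison the study describes can be illustrated with a small, purely hypothetical sketch: generate synthetic patients, apply a risk score that under-weights one group's sickness, and compare referral rates among equally sick patients. Every number below (the group penalty, the thresholds, the sample size) is invented for illustration and is not drawn from the study or its data.

```python
# Hypothetical audit sketch (not the study's code or data): does an
# algorithm refer equally sick patients from two groups at the same rate?
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

group = rng.choice(["black", "white"], size=n)
sickness = rng.normal(0, 1, n)                 # stand-in severity measure

# Invented risk score that under-weights sickness for one group,
# mimicking the kind of skew the study describes.
risk_score = sickness - 0.4 * (group == "black")
referred = risk_score > 1.0                    # referral to the care programme

# Audit: restrict to comparably sick patients, then compare referral rates.
very_sick = sickness > 1.0
for g in ("black", "white"):
    mask = very_sick & (group == g)
    print(f"{g}: referral rate among equally sick patients = "
          f"{referred[mask].mean():.2%}")
```

Running the sketch shows a markedly lower referral rate for the disadvantaged group, even though the comparison is restricted to patients with the same measured sickness.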

Similarly, in some US states, algorithms and artificial intelligence are used to help decide prison sentences. One such program is COMPAS, designed by Northpointe. Evidence has emerged of several cases in which the accuracy of its predictions skews on the basis of race.

Black offenders applying for parole are more likely to be deemed ‘high risk’ than white offenders.

Although race is not one of the metrics COMPAS is coded for, the end result is racially skewed: black people tend to receive longer punishments than white people for the same offences.

Gender bias is another common AI prejudice. Amazon’s machine-learning specialists built a program to review job applicants’ resumes with the aim of automating the search for top candidates. The recruitment tool used artificial intelligence to give applicants scores ranging from one to five stars. Essentially, the employer feeds the machine 100 resumes, it spits out the top five, and Amazon hires those candidates.

Sounds too good to be true?

In 2015, Amazon’s machine-learning specialists uncovered a big problem: their new recruiting tool was not gender-neutral. It did not like women. That is because the tool had been trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. As a result, the recruitment tool had taught itself that male candidates were preferable.
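How a model can ‘teach itself’ this kind of preference can be shown with a small, purely hypothetical sketch using scikit-learn; it is not Amazon’s system, data or code. A classifier is trained on invented historical hiring decisions that disadvantage women. Gender itself is never an input, but a proxy feature correlated with gender (for example, a word like “women’s” appearing on a resume) ends up with a negative weight.

```python
# Hypothetical sketch: a model trained on historically skewed hiring
# decisions penalises a proxy feature, even though gender is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic historical applicants: a skill score, a hidden gender variable,
# and a proxy feature that correlates with gender (invented probabilities).
skill = rng.normal(0, 1, n)
is_woman = rng.random(n) < 0.3            # mostly male historical applicant pool
proxy = (rng.random(n) < np.where(is_woman, 0.8, 0.05)).astype(float)

# Historical hiring labels reflect skill *plus* a bias against women,
# so the outcomes the model learns from are already skewed.
hired = (skill + rng.normal(0, 0.5, n) - 1.0 * is_woman) > 0.5

# The model only ever sees skill and the proxy feature, never gender.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print("learned coefficients [skill, proxy]:", model.coef_[0])
# The proxy coefficient comes out negative: the model scores resumes
# containing the proxy term lower, reproducing the historical bias.
```

The point of the sketch is that removing the sensitive attribute is not enough: as long as the training data encodes past bias, a model can rediscover it through correlated features.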

How do you avoid or remedy bias in AI systems?

Building diversity into the design process is key.

Unconscious biases thrive in homogeneous thinking spaces.

By involving teams that are diverse in gender, race, ability, class and culture in the design process, designers can reduce the likelihood of biases being embedded in AI systems.

Further reading:

https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Ellen Broad, Made By Humans: The AI Condition (Melbourne: Melbourne University Press, 2018)

https://ethics.org.au/ethical-by-design/

West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.pdf