Diversity from the bottom up – beyond technical debiasing

Recently, on a tech forum, a contributor made the following simple but insightful statement:

AI systems are the products of algorithms, data and developers. There is only one source of bias in that list: the developer. Algorithms are tools; data is information without intended action; developers use both to create intended action. The most obvious way to remove bias from technology is to improve diversity in the pool of technology creators.

Improving diversity begins at the university student level. AI was once an interdisciplinary field; in recent years, however, it has narrowed into a technical discipline, drawing almost exclusively from computer science and engineering. Yet as AI systems are increasingly applied to social domains such as education, healthcare, criminal justice, recruitment and housing, it is critical that university AI programs expand their disciplinary orientation beyond computer science and engineering.
Expertise is required from the social domains in which AI is rapidly being embedded. This calls for a transformation of the field of AI, one that treats social science and the humanities as key contributors. Doing so will better ensure that AI development is relevant and beneficial to the social contexts in which it is deployed.
Without the expertise of those trained to research the social world, AI runs the risk of deploying biased products that are hazardous to humanity. Consider the example of Amazon's sexist recruitment tool. The tool had been trained to vet applicants by observing patterns in resumes submitted to the company over a ten-year period. Most came from men, a reflection of male dominance across the tech industry. As a result, the recruitment tool taught itself that male candidates were preferable. Had the team of machine-learning specialists who built the tool received interdisciplinary training, studying social science as well as computer science, there is a high chance that the gender bias would have been picked up far earlier in the design process, before the tool was ever deployed.
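To make the mechanism concrete, here is a minimal sketch in Python, with entirely fabricated data; it does not reflect Amazon's actual system or features. It shows how a model trained on historically skewed hiring decisions learns to penalise a gendered proxy term on a resume:

```python
# Minimal sketch (hypothetical data, not Amazon's system): a classifier
# trained on historically skewed hiring outcomes learns to penalise a
# gendered proxy feature, even though the proxy says nothing about skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)            # genuine qualification signal
# Proxy feature correlated with gender, e.g. a term like "women's chess
# club captain" appearing on the resume.
proxy = rng.binomial(1, 0.5, size=n)

# Historical labels: past (human) hiring decisions rewarded skill but
# also penalised candidates carrying the gendered proxy -- this is the
# bias baked into the training data.
hired = (skill - 1.0 * proxy + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print("learned weights [skill, proxy]:", model.coef_[0])
# The proxy weight comes out strongly negative: the model has "taught
# itself" to downgrade resumes carrying the gendered term, reproducing
# the historical bias rather than measuring merit.
```

No line of this code is malicious; the bias rides in on the historical labels, which is exactly the kind of pattern a socially trained eye is more likely to question before deployment.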
To avoid bias of this kind, AI development requires a wider and deeper social analysis of how AI is used in context. That, in turn, requires expanding AI's disciplinary orientation at the university level.

Written by: Dr Joy Townsend