Fairness – universally understood but hotly contested

If you are a parent in Australia and put bowls of ice cream in front of two siblings, the first thing they do is examine the quantity of ice cream in the other’s bowl. If they are not happy with the distribution, you’ll quickly hear…
“Hey, she got more than me. That is not fair!”

The Issue

Fairness is a concept that is well understood by kids at an early age. In fact, it is not only understood, it is innate. That is, there is strong evidence[1] that humans have evolved a sense of fairness, much as we evolved the language instinct. We are each born with a design framework, or set of parameters, within which different languages[2] or different moral norms can be learned depending on culture, environment and context. So an innate sense of fairness does not imply a universal set of fairness values or norms. Rather, when examining fairness from an evolutionary perspective, variations in ‘what is fair’ in different times and places are expected. But here is the rub. If it is true that fairness differs depending on the context, then how is an AI developer to determine which parameters of fairness to utilise, or what trade-offs to make?

While different people, communities, organisations and even countries might have slightly different perspectives on fairness, there are already well-established legal frameworks that set the minimum bar for fairness in Australia.

Concepts such as human rights, justice, bias, discrimination, procedural fairness, inclusion, impartiality and transparency are addressed in a variety of legal frameworks. We’ll explore some of these concepts and legal frameworks in later blogs. For now, I would suggest that in developing ‘fair AI’ three minimum thresholds should be met:
1. Does it comply with our international human rights obligations?
2. Does it comply with our existing Commonwealth and state/territory laws (particularly anti-discrimination laws)?
3. Does the AI pass the ‘pub test’ for fairness?
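As a rough illustration, these thresholds could be treated as gates in a release checklist. A minimal sketch, assuming each boolean is a judgement recorded by human legal and community reviewers (the class and function names here are hypothetical, not an established framework):

```python
from dataclasses import dataclass

@dataclass
class FairnessReview:
    """Recorded outcomes of a human-led fairness review of an AI system.

    Each field is a reviewer's judgement, not something the code
    can determine on its own.
    """
    meets_human_rights_obligations: bool  # threshold 1
    complies_with_domestic_law: bool      # threshold 2 (incl. anti-discrimination)
    passes_pub_test: bool                 # threshold 3 (community acceptability)

def clears_minimum_thresholds(review: FairnessReview) -> bool:
    # All three thresholds must be met; failing any one blocks release.
    return (review.meets_human_rights_obligations
            and review.complies_with_domestic_law
            and review.passes_pub_test)

# A system that is lawful but fails the pub test still fails overall.
review = FairnessReview(True, True, False)
print(clears_minimum_thresholds(review))  # False
```

The point of the sketch is simply that the thresholds are conjunctive: meeting two of three is not enough.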

In Australia the ‘pub test’ is a metaphorical gauge of how the general public views a particular issue. If something doesn’t pass the ‘pub test’, the patrons in the local pub would find the issue unpalatable. There is nothing particularly scientific about it, and journalists often use the phrase when critiquing political policy. But if it is true that we all possess an innate moral sense of fairness, then the ‘pub test’ has some validity.
(If you don’t think your boss will sign off on you going to actual pubs to test your AI product, then just rephrase it as ‘user-centred design’.)

Our View

Fairness is contextual, but that doesn’t mean there isn’t a set of universally acceptable standards of fairness which we all understand and try to adhere to. The moral sense of fairness emerges out of the natural conflict between self-interest and community interest. Feelings of guilt, disgust, shame, embarrassment and empathy can be strong motivators to behave fairly, but AIs are yet to be blessed with such attributes. We therefore need to ‘bake in’ to our AIs the community understanding of fairness and then appropriately test them within the community where they will be used. This would also mean that if we are importing an AI from outside our community, we would need to ensure it meets our standards of fairness.
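One concrete way to ‘test fairness within the community’ is to measure whether a system’s decisions differ across groups. A minimal sketch using demographic parity difference, a standard statistical fairness metric; the loan-approval data and any acceptability threshold you might apply to the result are illustrative assumptions, and this is only one of many possible fairness metrics:

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups.

    outcomes: list of 0/1 decisions made by the AI system
    groups:   list of group labels (exactly two distinct values),
              same length as outcomes
    """
    rates = {}
    for g in set(groups):
        # Positive-outcome rate for each group
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Hypothetical loan approvals: group A approved 3 of 4, group B 1 of 4
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A gap this large (0.5) is the kind of disparity most communities would notice; but crucially, where to draw the acceptable line is exactly the contextual, community-level judgement the paragraph above describes, not something the metric decides for you.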

[1] See Hauser, M. D. (2006). Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. Harper Collins, New York.
[2] See Chomsky, 1957, 1967; Pinker, 1994.