There’s fairness and there’s procedural fairness

“I think it’s important that people remember that they have legal rights to question decisions that get made about them.” [1] Deana Amato, plaintiff in the Robodebt legal challenge. She challenged a debt raised against her for the 2011-12 financial year, when she was receiving Austudy.

The Issue

Procedural fairness is concerned with the procedures used by a decision maker, rather than the actual outcome reached. In Australian law, there are two primary rules to procedural fairness:

1. The ‘hearing rule’ – people who will be affected by a proposed decision must be given an opportunity to express their views to the decision maker.

2. The ‘bias rule’ – the decision maker must be impartial and must have no personal stake in the matter to be decided.

One can see that if the decision maker is an AI, or a human decision maker has used an automated decision system to inform their decision, there may be a legal onus on them to ensure those affected by the decision can meaningfully interact with and respond to the decision maker. Procedural fairness would require the decision maker to explain to the affected party the data relied on, the assumptions made, and the means by which the decision was reached. This goes to the heart of the principles contained in many ethical frameworks for AI – explainability and transparency.
The increasing use of automated decision systems and AI raises the question of whether decision makers (AI or human) can communicate the decision-making process to affected parties sufficiently and understandably. The problem is exacerbated when the AI system uses proprietary technology bound by trade-secret restrictions. If the hearing rule of procedural fairness is to be met in AI systems, then a human in the loop with sufficient understanding of the system will be necessary. The bias rule within procedural fairness encompasses the commonly understood concept of conflict of interest, but it also requires that the decision maker be impartial and free of actual or apparent bias. [2]
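As one illustration (not drawn from any system described in this article), the transparency obligations above could be operationalised as a structured decision record: the system logs the data it relied on, its stated assumptions, and a plain-language description of its method, and the outcome cannot be finalised without a named human reviewer. All field names and values here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """Hypothetical audit record capturing what a procedurally fair review would need."""
    subject_id: str
    input_data: dict           # the data the system relied on
    assumptions: list          # stated assumptions behind the recommendation
    method: str                # plain-language description of how the decision was made
    system_recommendation: str
    human_reviewer: Optional[str] = None
    final_decision: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def finalise(self, reviewer: str, decision: str) -> None:
        # The hearing rule implies a human must be able to review,
        # vary, and take responsibility for the outcome.
        self.human_reviewer = reviewer
        self.final_decision = decision

# Illustrative usage with invented values.
record = DecisionRecord(
    subject_id="A-1042",
    input_data={"reported_income": 31200, "averaged_income": 28950},
    assumptions=["fortnightly income averaged from annual PAYG total"],
    method="income-averaging comparison against reported earnings",
    system_recommendation="raise debt",
)
record.finalise(reviewer="case-officer-7", decision="debt not raised; evidence requested")
```

A record of this shape gives the affected party something concrete to be heard against, and gives a reviewing body a trail showing who actually made the decision.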

Procedural fairness has been codified in some pieces of Australian legislation and restricted in others.[3] While procedural fairness rules apply primarily to administrative and government decisions, procedural fairness is unquestionably a cornerstone of our common understanding of justice.

Chief Justice French said in 2010: “I do not think it too bold to say that the notion of procedural fairness would be widely regarded within the Australian community as indispensable to justice. If the notion of a ‘fair go’ means anything in this context, it must mean that before a decision is made affecting a person’s interests, they should have a right to be heard by an impartial decision-maker.”

Our View

Delivering fair outcomes should always be the goal of any AI system. However, we recognise that trade-offs are inevitable when balancing competing priorities. Procedural fairness, which can be unpacked to include explainability, transparency and keeping knowledgeable humans in the decision-making process, will lead to better and fairer outcomes. Even where it does not, it will ensure that unfair outcomes are identified and rectified more quickly.