What ‘harm’ will AI do?
The Issue
In the last blog, we touched upon the concept of ‘high risk AI’. The term raises the question: high risk of what? The obvious answer is risk of ‘harm’. But what is harm? Who will assess it, and how? Over what period should harm be measured?
Discussions of the harms and benefits of new technologies are nothing new; they are likely as old as human tool-making itself. But what is consistent over time is that new technologies are never neutral or balanced in their impact. Different people and groups benefit or are harmed, and this divide is often exacerbated by existing power structures and systemic societal inequalities.
Before exploring concepts of harm, it is worth stating that “to do no harm” is a different starting position from the positive version of this principle, which is to ensure benefits. It would seem difficult to justify the development and use of AI purely on the assumption that it is harmless.
When assessing the risk of AI harm, different actors will view this concept through different lenses.
Commercial harm
From a commercial or organisational perspective, harm is likely to relate to factors such as revenue, costs, reputation and brand damage.
Legal harm
From a legal perspective, harm would relate to material and bodily injury, issues of negligence, product liability, contract and consumer law, discrimination and more.
Ethical harm
From an ethical perspective, harm could relate to issues of autonomy, dignity, consent, privacy and justice. It is also worth noting that, in assessing the risk of harm from an ethical perspective, different philosophical schools of thought are influential. A utilitarian assessment may view harming one person as acceptable if benefits accrue to many; the focus is on actions and their outcomes. Rule-based (deontological) ethical frameworks, by contrast, focus on the intentions and obligations of the actor and do not countenance treating humans as a mere means to an end. In this view, there are rules of action and moral obligations that are unconditional: they simply must be followed.
Social harm
The social harm from AI is more difficult to assess or measure in a traditional risk framework because social impact may not be felt immediately and can span years, decades or generations. Leaving aside the existential risk scenario for a moment, imaginable social harms include social and political polarisation, information asymmetries, amplified inequalities, incitement of violence, misinformation and a general mistrust of people and democratic institutions. When assessing the potential social harm that AI technologies may cause, it is therefore important to include a temporal dimension.
Our View
The European Commission seems keen on drafting new laws around ‘high risk AI’. However, the current formulation suggests that the potential impact of AI will be assessed based on the industry in which it is used, taking into account particular circumstances, and its definition of harm seems narrowly confined to injury to life and health, damage to property and harm resulting from economic loss. Aren’t these harms already covered by existing laws?
A risk assessment is a snapshot in time, and it is difficult to conceive of the longer-term impacts or unforeseen harms of new technologies. But identifying the right political, economic and social signals to monitor may provide clues to longer-term AI impacts. We’ll explore these signals and measures in later blogs.