How do you do ethical AI?
AI systems do not possess an inherent ethical compass with which to understand the consequences of their actions.
Nor are they intrinsically aware of the social context within which they are deployed. AI systems are only the product of the objectives, data and constraints their designers and operators build into them.
Therefore there is enormous potential for AI systems to restrict human rights, undermine humanity and perpetuate existing discrimination and inequality.
To build ethical AI, ethical and human rights considerations must be represented in the objectives, data and constraints that direct an AI system's decision-making processes.
The Gradient Institute, an independent research organisation, outlines the following four challenges associated with creating ethical AI:
1. The first challenge in creating ethical AI is to define ethical objectives and constraints as precise, measurable quantities.
2. The next challenge is to create a system that will realise them. Doing so requires careful analysis of data bias, causal relationships and predictive uncertainty.
3. The third challenge is to leverage human reasoning and judgement to provide effective oversight over AI-driven decisions.
4. The fourth challenge relates to accountability: ensuring regulation keeps pace with advances in AI development. Regulation must be proactive, yet flexible enough to respond to rapid advances in AI technology.
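The first challenge above, expressing an ethical constraint as a precise, measurable quantity, can be sketched in code. The example below is a minimal, hypothetical illustration, not the Gradient Institute's method: a lending model's objective is penalised when approval rates differ between two groups (a simple demographic-parity measure). The scores, threshold and penalty weight are all illustrative assumptions.

```python
# Sketch: turning a fairness goal into a measurable quantity that can
# sit alongside a business objective. All values here are illustrative.

def approval_rate(scores, threshold=0.5):
    """Fraction of applicants whose model score clears the threshold."""
    approved = [s >= threshold for s in scores]
    return sum(approved) / len(approved)

def parity_gap(scores_group_a, scores_group_b, threshold=0.5):
    """A measurable fairness quantity: the absolute difference in
    approval rates between two groups (demographic parity gap)."""
    return abs(approval_rate(scores_group_a, threshold)
               - approval_rate(scores_group_b, threshold))

def constrained_objective(profit, scores_a, scores_b, penalty_weight=10.0):
    """Business objective minus a penalty for violating the
    fairness constraint. The weight encodes how much the operator
    values fairness relative to profit."""
    return profit - penalty_weight * parity_gap(scores_a, scores_b)

# Group A is approved far more often than group B at this threshold,
# so the fairness penalty reduces the overall objective.
group_a = [0.9, 0.8, 0.7, 0.6]   # approval rate 1.0
group_b = [0.4, 0.3, 0.6, 0.2]   # approval rate 0.25
print(parity_gap(group_a, group_b))                    # 0.75
print(constrained_objective(100.0, group_a, group_b))  # 92.5
```

Once the constraint is a number like this, it can be measured, monitored and traded off explicitly, which is what makes the second challenge (building a system that realises these objectives despite data bias and uncertainty) tractable at all.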
There is a world of opportunity for ethical AI systems to do good, but realising it depends on a multi-disciplinary approach and on designers and developers who are up for the challenge.
With those in place, there is real potential to build ethical AI that minimises harm and enhances individuals' human rights.