A Method for Ethical AI in Defence – new report and toolkit
The Australian Government’s Department of Defence recently published a technical report and supporting materials for project managers and teams involved in the development and use of AI in defence. This is a welcome addition to the Australian Government’s AI Ethics Framework, which specifically carved out defence and military uses of AI as requiring separate examination.
“Defence’s challenge is that failure to adopt the emerging technologies in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harms.”
The report summarises the results of a workshop held in 2019 and is only a small piece of the Department of Defence’s ongoing work on AI and autonomous systems in the military. The report makes it clear that the findings of the workshop and the material it contains do not represent the views of the Australian Government.
That said, the principles, or facets, of ethical AI in Defence align closely with the AI ethics frameworks and principles that have emerged over the last few years.
The five facets of ethical AI in Defence include:
1. Responsibility – Who is responsible for AI?
2. Governance – How is AI controlled?
3. Trust – How can AI be trusted?
4. Law – How can AI be used lawfully?
5. Traceability – How are the actions of AI recorded?
The paper also provides a checklist and a risk framework that those engaged with AI can practically apply in the pursuit of ethical AI in defence. The main components of the ethical AI in defence checklist are:
- Describe the military context in which the AI will be employed;
- Explain the types of decisions supported by the AI;
- Explain how the AI integrates with human operators to ensure effective and ethical decision making in the anticipated context of use, and the countermeasures that protect against potential misuse;
- Explain the framework(s) to be used;
- Employ subject matter experts to guide AI development;
- Employ appropriate verification and validation techniques to reduce risk.
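As an illustration only, the checklist lends itself to being captured as structured data that a project team could track alongside other project artefacts. The sketch below is not part of the report or the LEAPP; the `ChecklistItem` record and its field names are hypothetical assumptions showing one possible way to record responses.

```python
# Hypothetical sketch only: one possible way a project team might record its
# responses to the ethical-AI-in-Defence checklist. The ChecklistItem type and
# its fields are illustrative assumptions, not part of the report or LEAPP.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    prompt: str                                   # checklist question from the report
    response: str = ""                            # the project team's written answer
    evidence: list = field(default_factory=list)  # e.g. links to supporting documents

checklist = [
    ChecklistItem("Describe the military context in which the AI will be employed"),
    ChecklistItem("Explain the types of decisions supported by the AI"),
    ChecklistItem("Explain how the AI integrates with human operators"),
    ChecklistItem("Explain the framework(s) to be used"),
    ChecklistItem("Employ subject matter experts to guide AI development"),
    ChecklistItem("Employ appropriate verification and validation techniques"),
]

# Simple completeness check before review: every item should have a response.
incomplete = [item.prompt for item in checklist if not item.response]
print(f"{len(incomplete)} of {len(checklist)} checklist items still need a response")
```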
Additionally, the report describes a model procurement plan – the Legal and Ethical Assurance Program Plan (LEAPP) – which sets out a contractor’s plan for assuring that any AI software procured meets the Commonwealth’s requirements.
One area of distinction between civilian and military frameworks for ethical AI relates to fairness or justice. In a civilian context, fairness often relates to reducing bias and discrimination in AI applications. In the military context, the idea of justice, or just war, has a long history. The so-called just war theory postulates that war, while terrible, is not always the worst option: important responsibilities, undesirable outcomes, or preventable atrocities may justify war.
The purpose of the just war doctrine is to ensure war is morally justifiable through a series of criteria. The criteria can be split into three groups:
- The right to go to war (jus ad bellum);
- The right conduct in war (jus in bello);
- The morality of post-war settlement and reconstruction (jus post bellum).
The criteria include such things as proportionality, last resort, probability of success, and fair treatment of prisoners of war.
Our View
It is clear that AI and autonomous systems have an important role to play in military contexts, and the history of war suggests that those who can best harness the latest technologies will lead the world in military capability. Of course, many are calling for a ban on lethal autonomous weapons (aka “killer robots”), and in 2019 the U.N. Secretary-General António Guterres wrote on Twitter, “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”
According to Human Rights Watch, some thirty countries support a ban on lethal autonomous weapons. Countries such as Australia and the United States currently do not support a ban, arguing it would be premature to prohibit such weapons systems given their potential military and humanitarian benefits. It is often argued that AI systems could make fewer mistakes than humans do in battle, leading to fewer casualties and fewer skirmishes caused by target misidentification. A similar argument is made for the development and use of fully autonomous vehicles: if fully autonomous cars really are safer and will reduce road fatalities, then there is a moral imperative to ensure their widespread adoption and use.