Ethical AI: Principles versus practice

In recent years, numerous companies, governments, NGOs and academic institutions have developed and publicised their AI ethics principles.

Whilst there may be a developing (Western) global consensus on ethical AI, there remains a significant implementation and accountability gap between aspirational principles and practice. At present, there is little legal or professional accountability for how, or whether, ethics principles are operationalized in AI product development and deployment.

Consequently, there have been several examples of corporations acting in direct contradiction of their own ethics principles.

The AI Now Institute’s 2019 report details the following two examples:

“Microsoft’s funding of an Israeli facial-recognition surveillance company called AnyVision that targets Palestinians in the West Bank: AnyVision facilitates surveillance, allowing Israeli authorities to identify Palestinian individuals and track their movements in public space. Given the documented human-rights abuses happening on the West Bank, together with the civil-liberties implications associated with facial recognition in policing contexts, at a minimum, this use case directly contradicts Microsoft’s declared principles of “lawful surveillance” and “non-discrimination,” along with the company’s promise not to “deploy facial recognition technology in scenarios that we believe will put freedoms at risk.” More perplexing still is that AnyVision confirmed to reporters that their technology had been vetted against Microsoft’s ethical commitments. After public outcry, Microsoft acknowledged that there could be a problem, and hired former Attorney General Eric Holder to investigate the alignment between AnyVision’s actions and Microsoft’s ethical principles.
In another of many such examples of corporations openly defying their own ethics principles, and despite declaring as one of its AI principles to “avoid creating or reinforcing unfair bias,” Google set up the Advanced Technology External Advisory Council (ATEAC), an ethics board that included Kay Coles James, the president of the Heritage Foundation and someone known for her transphobic and anti-immigrant views. Workers and the public objected. A petition signed by over 2,500 Google workers argued: “In selecting James, Google is making clear that its version of ‘ethics’ values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants. . . . Not only are James’ views counter to Google’s stated values, but they are directly counter to the project of ensuring that the development and application of AI prioritizes justice over profit.” Following the backlash, Google dissolved ATEAC after a little over a week.”

These examples, amongst others, demonstrate that while there continues to be a lack of legal accountability for corporations that violate their own ethics principles, the most effective vehicle for change is public pressure from workers, journalists, and policymakers. For example, while Facebook publicises its internal ethics process, the controversies the company has recently faced demonstrate that public pressure and organised workers appear far better at ensuring ethical AI than the company’s principles. Whilst the articulation of ethical principles is valuable, principles alone cannot guarantee ethical AI.

There is much work to be done, globally, to operationalize ethics principles in AI product design, development and deployment.