What are AI ethical frameworks?

Good technological design requires an ethical framework within which the technology can be designed, developed and deployed.

Every ethical framework includes three elements: purpose, values and principles. The Ethics Centre explains the value of an ethical framework with regard to technology: “An ethical framework allows us to pursue excellence.

By basing our thoughts, decisions and actions in a clear statement of why we’re here, what we stand for and where we draw a line in the sand, we go far beyond a ‘do no harm’ approach to ethics. Instead, we’re able to imagine the best version of something – in this case, the best kind of technology… It doesn’t just outline the minimum standard, it also explains the ideal we should be striving for”.

As part of the Australian Government’s commitment to build Australia’s AI capabilities, the Department of Industry, Science, Energy and Resources is currently developing an AI Ethics Framework to guide businesses and governments looking to design, develop and implement AI in Australia.

In its current state, the framework consists of the following eight voluntary principles:

Human, social and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society and the environment.

Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.

Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.

Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.

Transparency and explainability: There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.

Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.

Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

These principles are aspirational and intended to complement, not substitute, existing AI-related regulations. Applying them when designing, developing, integrating or using AI systems will help to:


  • achieve better outcomes
  • reduce the risk of negative impact
  • practise the highest standards of ethical business and good governance