In September 2024 the Commonwealth Department of Industry, Science and Resources published a proposals paper on safe and responsible AI in Australia.
In this document the Australian Government acknowledges the immense potential of Artificial Intelligence (AI) to improve social and economic well-being, while recognising that its rapid adoption and associated risks call for a modern regulatory framework. The paper argues that current regulatory systems are inadequate to address AI's distinct challenges, and notes international efforts to establish risk-based guardrails throughout the AI lifecycle.
In its interim response to consultations on safe and responsible AI, the Australian Government committed to creating a regulatory environment that fosters community trust and AI adoption. This involves implementing measures outlined in the Voluntary AI Safety Standard and exploring mandatory guardrails for high-risk AI settings. These guardrails focus on testing, transparency, and accountability, aligning with international developments.
Key proposals include:
- Defining high-risk AI: Principles to identify and regulate AI in high-risk contexts, including general-purpose AI models.
- Mandatory guardrails: Preventative measures such as rigorous testing, transparency about AI systems, and accountability for risk management.
- Regulatory options: Three approaches to mandate these guardrails:
  - Domain-specific: Adapting existing regulatory frameworks.
  - Framework legislation: Introducing overarching legal structures with targeted amendments.
  - Whole-of-economy: Establishing a comprehensive AI Act.
These proposals need to be seen in the context of other policy initiatives including:
- the Policy for the responsible use of AI in Government, version 1.1 of which was published by Australia's Digital Transformation Agency, also in September 2024; and
- the Voluntary AI Safety Standard issued by the National Artificial Intelligence Centre in September 2024.