
Preparing for voluntary standards and mandatory legislation: a deep dive into Australia's evolving AI regulatory landscape

By Valeska Bloch, Aveline Orban


Over the past few weeks, the Australian Government has published a series of standards, proposals and policies which foreshadow the principles likely to be adopted in mandatory AI legislation once introduced.

On 5 September 2024, the Australian Government published:

  • a Voluntary AI Safety Standard (the Voluntary Standard), which includes 10 voluntary (and detailed!) guardrails for how Australian organisations should safely and responsibly use and innovate with AI;
  • a Proposals paper for introducing mandatory guardrails for AI in high-risk settings (the Proposals Paper)—submissions are due on 4 October 2024; and
  • the Australian Responsible AI Index 2024 (in conjunction with Fifth Quadrant), which looks at how Australian organisations are adopting and implementing responsible AI practices. Interestingly, the index identifies that, 'while 78% of organisations believe their AI systems align with Australia’s AI Ethics Principles, only 29% have implemented the necessary practices to achieve that ambition'.

In keeping with the Government's promise to position itself as an exemplar under its broader safe and responsible AI agenda, it has also now published the Policy for the responsible use of AI in government.

The Government will also shortly pilot a draft federal AI Assurance Framework to support a more consistent approach by agencies to assessing and mitigating the risks of AI use.

These developments follow the Government's recent announcement of an investment of $4.2 million in the 2024–25 Budget to clarify and strengthen existing laws impacted by AI, with reviews in priority areas of healthcare, consumer and copyright law.

Although not a Government publication, the Director's Guide to AI Governance, produced by the AICD and the Human Technology Institute at the University of Technology Sydney, is another useful resource.

This Insight looks at the themes emerging from the Voluntary Standard and the Proposals Paper, and the direction the Government is taking to establish a framework for the safe and responsible use of AI in Australia.

Voluntary AI Safety Standard

Takeaways

  • A response to feedback: the Voluntary Standard was published in response to feedback that corporate Australia requires clearer guidance and greater consistency when it comes to developing and deploying AI.
  • 10 voluntary guardrails: it contains practical guidance on how Australian organisations should safely and responsibly use, and innovate with, AI. The standard comprises 10 voluntary guardrails that apply to organisations across the AI supply chain, including both AI developers and AI deployers. However, because AI deployers are far more prevalent in Australia, the standard focusses on deployers. More detailed guidance for AI developers will be included in the next iteration of the standard.
  • Why does it matter?: the Voluntary Standard is significant because:
    • it (expressly) foreshadows the principles likely to be adopted in mandatory legislation once introduced, at the very least in respect of 'high-risk' use cases; and
    • we expect regulators (including the OAIC, APRA, ASIC, the ACCC and the eSafety Commissioner) will look to the standard in enforcing existing principles- and risk-based regulatory regimes in connection with AI harms.
  • Global interoperability: the Government has emphasised that the standard's recommended processes and practices are consistent with current international standards and best practice. It is also aligned with the leading international standard on AI management systems, AS ISO/IEC 42001:2023, and the US standard on AI risk management, NIST AI RMF 1.0. Each guardrail in the standard cross-references the relevant international and local standards and practices with which it aligns.
  • A human-centred approach: the standard prioritises safety and the mitigation of harms and risks to people (including individuals, groups, communities and societal structures) and their rights. The guardrails contain both organisational-level obligations to create required processes, as well as system-level obligations for each AI use case or AI system.
  • The work never ends: there is a particular emphasis on the need for ongoing monitoring and risk assessment, including for potential behaviour changes or unintended consequences, and to ensure risk mitigations are effective. The standard also says it is critical to identify and engage with stakeholders over the life of the AI system for the same reason. This is likely to place greater responsibility on deployers and end users than traditional technology systems do.
  • AI procurement guidance: the standard contains AI procurement guidance, including expectations as to the information AI developers should provide to AI deployers across the AI lifecycle and what should be addressed in their contracts. The standard emphasises the importance of prioritising relationships with suppliers that have sound risk and data management processes and that can support the deployer's adherence to the guardrails. For further guidance on AI procurement, see our Guide to AI Procurement.
  • Transparency: the expectation of transparency across decision-making algorithms and data usage means deployers may need to openly disclose how an AI system functions; develop clear documentation on decision-making criteria; and ensure organisational accountability for any data collected, including from where it has been sourced, how long it is retained and whether it is shared with third parties.
  • Data governance: AI development and deployment presents a variety of data issues, from privacy and cybersecurity, to confidentiality, data usage rights and the retention and destruction (and potentially disgorgement) of data. Increasing regulatory scrutiny of AI deployment is, therefore, likely to prompt renewed focus on the adequacy of data governance frameworks. Upcoming Privacy Act reforms are also likely to require more robust data governance processes and controls, particularly where personal information is used in connection with automated decision making.
  • Community expectations: the standard also emphasises the need to understand community expectations around AI. For instance, if an organisation deploys an AI system that uses data from or about First Nations communities, the organisation should respect the Indigenous Data Sovereignty Principles that draw on article 32(2) of the United Nations Declaration on the Rights of Indigenous Peoples.

The guardrails and questions to ask the business

1. Establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulatory compliance

Where in our leadership team does accountability for AI strategy, AI regulatory compliance and the safe and responsible deployment (and use) of AI systems sit? Is this documented? Are those individuals appropriately skilled, empowered and trained?

2. Establish and implement a risk management process to identify and mitigate risks

Do we have a documented risk tolerance for the use of AI systems?

Have we documented a suitable process for conducting AI impact and risk assessments in relation to both internal and third-party developed AI systems? Is that process tailored to address the specific characteristics and amplified risks of AI systems?

3. Protect AI systems and implement data governance measures to manage data quality and provenance

Does our data governance framework appropriately address data issues in an AI context, including data quality and provenance, cybersecurity and data usage rights?

Do we have a process for documenting the data usage rights of each AI system, including intellectual property, Indigenous Data Sovereignty, privacy, confidentiality and contractual rights?

4. Test AI models and systems to evaluate model performance and monitor the system once deployed

Do we have an AI system test plan that defines specific, objective and verifiable acceptance criteria linked to potential harms?

Do we have a process to test, monitor and continuously evaluate and improve AI systems throughout their lifecycle, so that those systems continue to meet the acceptance criteria and remain fit for purpose as conditions evolve?

5. Enable human control or intervention in an AI system to achieve meaningful human oversight

How do we ensure human control or intervention mechanisms are appropriately integrated into AI deployment?

Have we agreed a plan with our supplier for governance and oversight of the AI system and its components, with clear responsibilities between the parties?

6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content

Are we transparent about how we use AI, including when we are using AI to generate content or make decisions?

7. Establish processes for people impacted by AI systems to challenge use or outcomes

Do we have a process to allow users, organisations, people and society impacted by AI systems to challenge AI uses, or contest decisions or outcomes that impact them?

8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks

For AI deployers: how do we ensure our suppliers (and their suppliers) are sufficiently transparent about how the AI system was built and the inherent risks in deployment, so that we can appropriately conduct our own risk and impact management processes? What information do we need to provide our suppliers to assist them to manage the relevant risks?

For AI developers: how do we provide customers with the information they need to deploy AI systems responsibly and safely, while protecting our commercially sensitive information?

9. Keep and maintain records to allow third parties to assess compliance with guardrails

Do we maintain records to allow third parties (including regulators) to assess the suitability of, and compliance with, our AI governance framework?

10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness

What processes do we have in place to identify and engage with stakeholders (people and groups) potentially impacted by AI systems, over the life of those AI systems?

What processes do we have in place to address bias, diversity, inclusion, accessibility, fairness and other ethical concerns in our deployment of AI systems?

Consultation on mandatory guardrails for AI in high-risk settings

The Proposals Paper focusses on proposed mandatory guardrails for the use of AI in 'high-risk settings'. As part of its consultation, the Government is seeking views on the proposed mandatory guardrails, how it is proposing to define high-risk AI, and three regulatory options for mandating the guardrails.

  • A pre-market (ex ante) approach: the proposal acknowledges the 'significant uncertainty' as to how and what types of harms may arise as AI technology evolves (particularly agentic AI, which possesses a level of autonomy and can act on its own). Given this, the Government is proposing to impose preventative obligations on those entities across the AI supply chain and throughout the AI lifecycle 'who can most effectively prevent harms before people interact with, or are subject to, an AI system' (eg by requiring that they undertake impact assessments). These ex ante measures would supplement ex post (remedial) regulatory measures to effectively target and mitigate known risks.
  • Defining high-risk AI: the paper proposes to identify high-risk AI in two ways.
    • Principles-based approach: first, it would adopt a principles-based approach to determine whether an AI system presents a high risk, having regard to its known or foreseeable applications and risks. If adopted, this principles-based approach (as opposed to the list-based approach adopted in the EU and proposed in Canada) would require that organisations assess the severity and extent of any adverse impacts by the AI system on individuals and their rights. A principles-based approach is also sensitive to the fact that the risk of harm presented by an AI system may be context-specific.
    • General Purpose AI (GPAI) and agentic AI: second, and in order to address unforeseen applications and risks, the paper proposes that the mandatory guardrails should apply to all GPAI models, being AI models that 'are capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems'. However, if the mandatory guardrails do not ultimately apply to all GPAI models, the paper also seeks views as to: (i) the suitable indicators for defining GPAI models as 'high risk'; and (ii) whether a subset of guardrails should apply to GPAI models. In canvassing the treatment of GPAI, the paper also discusses some of the dangers of the next big wave in AI, 'agentic AI', where AI systems and chatbots can 'autonomously interact with the world with little to no human oversight or intervention'.
  • Alignment with Voluntary Standard: the headline mandatory guardrails in the Proposals Paper almost entirely replicate the guardrails in the Voluntary Standard, with the exception of guardrail 10. The expectation is that steps taken to adhere to the voluntary guardrails now will help organisations prepare for the introduction of mandatory guardrails for high-risk AI. However, the Proposals Paper includes further detail and commentary on specific issues, including the application of the guardrails to GPAI. For example, under the proposed mandatory guardrails:
    • guardrail 2 (risk management): developers should take responsibility for addressing risks against all foreseeable use cases by their clients;
    • guardrail 4 (testing): developers of GPAI models must conduct adversarial testing for any emergent or potentially dangerous capabilities;
    • guardrail 8 (transparency): deployers must report adverse incidents and significant model failures to developers, who can then issue improvements to the model; and
    • guardrail 9 (records): any organisation training large, state-of-the-art GPAI models with potentially dangerous emergent capabilities must disclose these 'training runs' to the Government.
  • Guardrail 10 (conformity assessments): whereas guardrail 10 in the Voluntary Standard requires engagement with stakeholders, proposed mandatory guardrail 10 in the Proposals Paper would require that organisations undertake conformity assessments that demonstrate they have adhered to the guardrails (and any legal requirements) for high-risk AI systems. If the mandatory guardrails are adopted, conformity assessments will need to be undertaken: (i) before putting a high-risk AI system on the market; (ii) periodically to ensure continued compliance; and (iii) if the deployer retrains the system or the system undergoes any changes that significantly affect compliance with the guardrails. Once an assessment is completed, the organisation will attain certification of compliance, which it can communicate to the public.
  • Regulatory options: the paper suggests that AI shares similarities with certain past waves of technology development, such as advances in gene, nuclear and transport technologies, all of which have, in certain cases, 'necessitated technology-specific infrastructure and rules to ensure safety and community expectations are also respected'. The Government has identified three regulatory options for mandating the guardrails, being:
    • a domain specific approach: ie amending existing regulatory frameworks to introduce additional guardrails on AI;
    • a framework approach: ie introducing framework legislation with associated amendments to existing legislation; or
    • a whole of economy approach: ie introducing a new cross-economy Australian AI Act. The paper stresses that any new Act should be harmonised with existing legal and regulatory frameworks, similar to the approach taken in Canada's Artificial Intelligence and Data Act.
  • Alignment with global regulatory approaches: the Government argues that establishing guardrails that apply to AI developers and AI deployers would bring it into closer alignment with the EU, as well as with the proposed approaches of Canada and the United Kingdom (all of which, along with Australia, are signatories to the Bletchley Declaration; for more information, see our Insight: Why everyone is talking about AI safety and security). Focussing in particular on the most advanced AI systems would also build on commitments developers have made through the Hiroshima AI Process Code of Conduct and the Frontier AI Safety Commitments. The Government has also emphasised the importance of preserving Australia's local needs and context, including Caring for Country, Indigenous Cultural and Intellectual Property (ICIP) and Indigenous Data Sovereignty, and ensuring that AI systems are culturally appropriate.

Next steps

Organisations that develop or deploy AI should consider adhering to the Voluntary AI Safety Standard and, at the very least, perform a gap analysis between the standard and their own AI governance framework.

They should also consider making a submission in response to the Proposals Paper. Submissions are due on 4 October 2024.

Organisations should also keep an eye out for the upcoming Privacy Act reforms which, though expected to be significantly pared back, are likely to address the use of personal information in the context of automated decision making.