Greater regulation of AI – a global trend
The Australian Human Rights Commission (AHRC) has published a discussion paper on proposals to legislate for a human rights approach to artificial intelligence (AI) systems. If adopted by the Australian Government, these proposals would create additional compliance obligations for organisations creating and deploying AI systems, and would assign liability when AI causes harm. In this article we analyse the likely impact of these proposals on your business.
Key takeaways
As AI-enabled systems become more prevalent in business, the AHRC's proposals provide an insight into the potential risks that should be considered when building your technology and data strategy.
The AHRC's list of proposals is extensive and includes initiatives targeted towards various industries, contexts and technologies, such as facial recognition. Whilst these proposals are not yet in their final form, they are the product of significant consultation by the AHRC, and so are unlikely to vary greatly.
The proposals we see as the most far-reaching across numerous contexts are:
- Mandated transparency and explainability: The AHRC proposes that organisations notify individuals where AI is materially used in a decision that has a legal, or similarly significant, effect on the individual's rights, and that organisations explain AI-informed decisions, including the reasons for the decision. If enacted, these requirements may add extra steps to the development of AI systems and to the processes around their deployment.
- Increased certainty around liability for AI, and more causes of action: One of the AHRC proposals would add greater certainty to liability for harms caused by AI, recommending a rebuttable presumption that the legal person who deploys an AI-informed decision-making system is liable for harm caused by the system. The AHRC also proposes that the Australian Government introduce a statutory tort for serious invasion of privacy.
- (Potentially mandatory) human rights impact assessments: The AHRC has proposed a human rights impact assessment (HRIA) tool for AI-informed decision making, and seeks submissions on whether HRIAs should be mandatory.
Beyond these, the AHRC's proposals contain a range of further suggestions, including guidance on how to comply with human rights obligations when using AI, and both binding and non-binding regulation.
Who in your organisation needs to know about this?
Legal, risk and compliance. The proposals, if adopted by the Government, would impose a new raft of compliance obligations on organisations creating or deploying AI systems. The legal, risk and compliance teams will be responsible for implementing processes to address additional compliance obligations and for ensuring that shifting liability allocations are accounted for in future contracts. This is particularly relevant in organisations where AI use is likely to have an impact on individuals, such as banks and insurers that use algorithms to assess eligibility for products, and medical technology companies that use algorithms to diagnose individuals. These early use cases for AI systems are likely to provide a testing ground for what constitutes acceptable use of AI.
Board and senior management. The proposals would have ramifications for liability for, and the costs of, AI systems. Understanding the regulatory landscape will be critical for organisations that wish to develop a robust data strategy. For those looking to gain an edge, our AI Toolkit provides practical guidance for AI projects.
Understanding the context of the inquiry – ubiquitous AI and a global shift towards ethical AI
Whilst thinking about AI may conjure images of sentient beings such as the replicants in Blade Runner, AI is simply the use of technology to carry out a task that would normally require human intelligence, and includes the use of algorithms for simple tasks. As our world becomes more data-saturated, the building blocks for AI, and AI itself, are becoming more prevalent. AI's ability to digest and analyse data in large quantities lends itself to streamlining many processes, including screening product applications in the insurance and financial services industries and detecting fraud. The AHRC recognises the increasing prevalence of AI and its capacity to create significant benefits for individuals, but also its potential to cause harm.
The AHRC inquiry comes at a time of significant thought leadership in the field of ethical AI use. The OECD Principles on Artificial Intelligence, the European Commission High-Level Expert Group on Artificial Intelligence's 'Ethics Guidelines for Trustworthy Artificial Intelligence' and the Montreal Declaration for the Responsible Development of AI are just a few of the many initiatives demonstrating the global shift towards ethical AI. However, amongst this throng of initiatives, the AHRC's proposals stand out for their reference to existing human rights frameworks as the relevant standard, and for moving beyond a voluntary framework for ethical AI.
Set out below are the key proposals likely to have business impacts.
Mandated transparency and explainability of AI-informed decision-making systems
Mandated notification of AI-informed decision making
Proposal 5 is for the Australian Government to enact a requirement to inform an individual where AI is materially used in a decision that has a legal, or similarly significant, effect on the individual's rights. Although the proposal refers to decisions with legal effects, the AHRC suggests this requirement should apply to both government and non-government entities. It is designed to increase transparency in AI use. At this stage it is unclear what form this notification would take, although it may be something akin to a collection notice under privacy law.
Legislating for explainability of AI-informed decision making
The AHRC proposes that individuals be given a legislative right to:
- a non-technical explanation of AI-informed decisions, which would be comprehensible by a lay person, and
- a technical explanation of AI-informed decisions capable of verification by an expert (Proposal 7).
This right would only operate in situations where the individual would be entitled to an explanation if the decision were not made using AI. As explained below, the precise scope intended by the AHRC is unclear. However, on its face, this right would apply to companies in situations such as refusals of insurance products,[1] or in certain circumstances where a consumer enters a credit contract or consumer lease, or where a credit limit is increased.[2] Where an AI-informed decision-making system does not produce reasonable explanations for its decisions, the AHRC proposes that the system should not be deployed in any context where decisions could infringe the human rights of individuals (Proposal 8).
If implemented, these proposals would add an extra layer of difficulty to the development of decision-making AI. The AHRC recognises the current debate about the extent to which some forms of AI-informed decision making are capable of explanation. In response, it suggests centres of expertise should prioritise research on how to design AI-informed decision-making systems to provide a reasonable explanation (Proposal 9). Once explainability is built into the AI algorithm, organisations will need to ensure their grievance mechanisms allow individuals to request an explanation.
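To make the two tiers of explanation concrete, the sketch below shows one way a simple scoring model could produce both a technical record capable of expert verification and a plain-language explanation for the affected individual. It is a minimal illustration only: the credit-scoring scenario, feature names and weights are all hypothetical, and real deployments (particularly those using more complex models) would need more sophisticated, model-appropriate explanation techniques.

```python
# A minimal sketch, not a production explainability system. The model, feature
# names and weights below are hypothetical stand-ins for whatever scoring
# model an organisation actually deploys.
import math

# Hypothetical logistic scoring model for a credit decision.
WEIGHTS = {"income_to_debt_ratio": 1.8, "years_at_address": 0.4,
           "missed_payments_last_year": -2.1}
INTERCEPT = -0.5
THRESHOLD = 0.5  # hypothetical cut-off: approve if estimated probability >= 0.5

# Plain-language wording for each model input (the "non-technical explanation"
# contemplated by Proposal 7).
LAY_REASONS = {
    "income_to_debt_ratio": "your income relative to your existing debts",
    "years_at_address": "how long you have lived at your current address",
    "missed_payments_last_year": "missed repayments in the last 12 months",
}

def decide_and_explain(applicant: dict) -> dict:
    """Return the decision plus both tiers of explanation."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = INTERCEPT + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))
    approved = probability >= THRESHOLD

    # Technical explanation: everything an expert needs to verify the decision.
    technical = {"inputs": applicant, "weights": WEIGHTS, "intercept": INTERCEPT,
                 "contributions": contributions,
                 "probability": round(probability, 3), "approved": approved}

    # Non-technical explanation: the factors behind this decision, in plain
    # language, ordered by the size of their influence.
    ranked = sorted(contributions, key=lambda n: abs(contributions[n]), reverse=True)
    lay = [f"{LAY_REASONS[name]} "
           f"{'helped' if contributions[name] >= 0 else 'counted against'} "
           f"your application" for name in ranked]
    return {"approved": approved, "lay_explanation": lay,
            "technical_explanation": technical}

print(decide_and_explain({"income_to_debt_ratio": 0.6,
                          "years_at_address": 2,
                          "missed_payments_last_year": 1}))
```

For a transparent linear model such as this, both tiers of explanation fall out of the model's own structure; the open question the AHRC identifies is how to achieve something comparable for opaque models.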
It is worth noting that whilst the language of the proposals does not limit their application to government decision making, the AHRC's more detailed discussion focuses on the government context. As such, the extent to which these legislative requirements would apply to the private sector is unclear.
Increased certainty around liability for AI and more causes of action
A presumption that the person deploying AI is legally liable for use of the system
Proposal 10 calls for legislation creating a rebuttable presumption that the legal person who deploys an AI-informed decision-making system is liable for harm caused by the system. The aim is to attribute liability to the organisation responsible for the decision. This would bring clarity to the currently patchy AI regulatory landscape created by tortious liability and consumer protection laws (for more detail on the existing regulation of AI, please see Chapter 3 of our AI Toolkit). Any changes to liability at law will need to be accounted for in liability clauses in contracts for AI systems so that litigation risk can be appropriately managed.
Litigation involving AI has the potential to raise intellectual property concerns where disclosure of the processes behind an AI algorithm forms part of discovery. In recognition of this concern, the AHRC is seeking responses to the question: 'does Australian law need to be reformed to make it easier to assess the lawfulness of an AI-informed decision-making system by providing access to technical information used by the AI system such as algorithms?' (Question C).
Statutory cause of action for serious invasion of privacy
The AHRC proposes that the Australian Government introduce a statutory cause of action for serious invasion of privacy (Proposal 4). Whilst this relates to only one human right, its impact could be significant. The Government has already suggested it will launch a review into this within the year, in response to the same recommendation in the Australian Competition and Consumer Commission's Digital Platforms Inquiry Final Report (for more on the privacy ramifications of the ACCC's Digital Platforms Inquiry Final Report, see our article here). If enacted, this would be a major change to Australia's privacy environment, where no individual cause of action currently exists under privacy law, and could accelerate development of the emerging privacy class action space.
(Potentially mandatory) human rights impact assessments for AI-informed decision making
The AHRC has also proposed the Australian Government develop a human rights impact assessment tool for AI-informed decision making and associated guidance for its use (Proposal 14). Work in this area is already underway in other jurisdictions and may provide an early indication of what would be developed in Australia. The Danish Institute for Human Rights is currently leading a project to develop tools and practical guidance for assessing human rights impacts connected to digital business activities.
As part of the discussion paper, the AHRC is seeking submissions on when and how HRIAs should be deployed, whether they should be mandatory, the consequences if an assessment indicates a high risk of human rights impact, and how HRIAs should be applied to systems developed overseas (Question E).
More regulation and guidance to come
In the discussion paper the AHRC recognises the difficulty organisations can face in complying with regulation where there is a lack of certainty as to how it applies. To address this, the AHRC is consulting on the following proposals:
- The Government should develop a national strategy on new and emerging technologies (Proposal 1).
- The Government should commission an appropriate independent body to examine ethical frameworks for new and emerging technologies, assess the efficacy of existing ethical frameworks and identify opportunities to improve their operation, such as the development of ethical guidance for data scientists and software engineers (Proposal 2).
- Any standards applicable to AI-informed decision making in Australia should incorporate guidance on human rights compliance (Proposal 12).
- The Government should establish a taskforce to develop the concept of 'human rights by design' in the context of AI-informed decision making, together with either a voluntary or legally enforceable certification scheme (Proposal 13).
- The Attorney-General should develop a legally binding Digital Communication Technology Standard under section 31 of the Disability Discrimination Act 1992 (Cth) for goods, services and facilities that are used primarily for communications and are available to the public (Proposal 29).
Actions you can take now
Although the AHRC proposals are not in their final form, they add to the global trend towards greater regulation of AI. To ensure you keep pace with this conversation and are prepared for any changes, you may consider:
- writing a submission to the discussion paper before the deadline on 10 March 2020;
- incorporating a human rights approach in your technology policies;
- establishing a human rights impact assessment process for major technology projects;
- carefully scrutinising liability clauses in contracts for AI systems;
- scrutinising data sources to ensure you aren't using biased data sets;
- ensuring appropriate and effective grievance mechanisms; and
- monitoring developments in this sector.
Footnotes
1. Insurance Contracts Act 1984 (Cth) s 75. Insurers subject to the General Insurance Code of Practice 2014 (and the incoming Insurance Code of Practice 2020) are also required to give reasons if they refuse to provide insurance.
2. National Consumer Credit Protection Act 2009 (Cth) ss 120, 132 and 155.