Understanding AI washing, job displacement risks, ESG reporting opportunities and more
The widespread (and growing) deployment of generative AI across all sectors will have a deep and lasting impact on almost every aspect of our lives—from helping to predict supply and demand for renewable energy, and reshaping workplaces and workforces, to enhancing education and healthcare. But the opportunities afforded by genAI won't always offset the risks to both the environment and society.
Given that Environmental, Social and Governance (ESG) considerations are an increasing priority for organisations globally, and given the current pace of AI deployment, organisations should review their ESG governance frameworks to ensure that the challenges and risks presented by genAI are adequately addressed. They should also consider the opportunities genAI presents to enhance their ESG programs.
This guide outlines key ESG risks and impacts companies deploying AI should consider, the questions they should be asking, and what to do in response.
Key takeaways
- Allocate responsibility for the governance of AI-related decisions and processes at organisational as well as individual levels, as outlined in our AI Governance Toolkit.
- Embed ESG considerations into your AI decisions and processes and vice versa. For example:
- Ensure that AI impact assessments incorporate appropriate consideration of ESG matters and that AI deployment is taken into account in human rights assessments and human rights policies.
- Consider any risks to the rights of workers who can be exploited in the development and training of AI.
- Be accurate when describing how AI is used in your business, to avoid 'AI washing' and any subsequent regulatory enforcement or private litigation. Ask your suppliers about their own development, training and use of AI, and include AI providers in supply chain mapping and risk assessments.
Who in your organisation needs to know about this?
Boards, in-house / general counsel, technology team, sustainability team
Environmental impact
The training and use of AI models are highly power-intensive and can materially impact an organisation's overall carbon footprint. Experts estimate that:
- by 2027, the AI sector could consume as much energy and water annually as the Netherlands; and
- AI will cause the number of data centres globally, which store vast training datasets, to double in the next ten years, with their combined electricity consumption roughly equating to that of Japan.1
Of course, these power and water impacts should be balanced against any decarbonisation gains (eg where the AI solution is being deployed to assist with emissions reduction or more efficient operations).
However, in undertaking this calculation, it is important to consider whether such deployment simply shifts emissions along your business's supply chain, without addressing underlying environmental impacts. The emissions generated by AI usage could also count towards an entity's Scope 3 emissions under the Greenhouse Gas Protocol.
Note: Scope 3 emissions are all indirect emissions occurring in the value chain of the reporting company, including both upstream and downstream emissions.
For your organisation
| Ask | Actions |
|---|---|
AI washing
Organisations are increasingly being held to account by both regulators and civil society for greenwashing (the overstatement of sustainability claims), and bluewashing (the overstatement of claims regarding social responsibility, including human rights). Stakeholders expect truthfulness and accountability from businesses, and this is extending to claims about other fields of interest, including AI.
Regulators globally have already begun targeting companies for 'AI washing' – the overstatement of a business's AI capabilities. In 2023, the US Federal Trade Commission (the FTC) released guidance for businesses on how to keep AI claims in check.2
In March 2024, the US Securities and Exchange Commission (the SEC) charged two investment advisers for making false and misleading statements in relation to public claims they had made about the use of AI.3 The SEC alleged that the advisers had misrepresented the extent to which AI was incorporated into their investment processes, among other alleged misrepresentations.
The SEC's enforcement action followed its publication earlier that year of an Investor Alert, Artificial Intelligence (AI) and Investment Fraud. In the Alert, the SEC noted that companies may make claims about how AI will affect their business operations and increase profits, and that bad actors may use AI-related buzzwords to lure customers. The SEC advised that, in light of growing interest in emerging technologies like AI, investors should carefully review companies' disclosures before making investment decisions, including by comparing companies with their peers to assess the risk of inaccurate statements.
While Australian regulators have not yet taken action in relation to AI-related representations, ASIC's focus on greenwashing and bluewashing, and public comments on the topic (including recent statements from the ASIC Chair)4, demonstrate that emerging ESG topics such as AI are on regulators' radars and may come under scrutiny going forward.
For your organisation
| Ask | Actions |
|---|---|
Job displacement, exploitation and wellbeing
Generative AI deployment has the potential to transform workforces and workplaces, but it can also present a range of risks for workers, including:
- Job displacement – the automation capabilities of generative AI could lead to job displacement in certain sectors, raising social sustainability issues related to employment and income inequality.
- Exploitation – workers (throughout the technology supply chain) can be exploited to undertake the labour-intensive work of training AI models.
- Wellbeing – the training of AI models can expose workers to potentially harmful content.
For your organisation
| Ask | Actions |
|---|---|
Bias and discrimination
If not properly managed, generative AI may inadvertently perpetuate or exacerbate existing biases present in the training data. This could result in discriminatory outputs that harm individuals or groups. In practice, this also means that "AI harms tend to disproportionately affect vulnerable and marginalised communities".5
In the US, several cases have been brought alleging discrimination in the way that AI-powered tools operate – including cases against companies that use AI when hiring and screening job applicants, assessing insurance claims and assessing rental applications.6 Further, in a joint statement, several US regulators (including the Justice Department and Federal Trade Commission) have recognised that AI has the potential to cause unlawful bias and discrimination through unrepresentative or skewed datasets, and operating processes based on inaccurate assumptions, among other factors.7
Australian businesses could similarly be exposed to discrimination claims if the risk of bias and discrimination in outputs is not mitigated appropriately.
For your organisation
| Ask | Actions |
|---|---|
ESG reporting
As ESG reporting obligations continue to expand, companies might be considering how AI could be used to comply with mandatory reporting obligations and assist with the preparation of voluntary disclosures. For example, AI could assist with:
- collating data from diverse sources (eg from the company's internal documentation and previous reporting, as well as data provided or made publicly available by third parties in the company's value chain); and
- analysing that data (eg identifying patterns and trends, or summarising relevant information).
With Australia's proposed mandatory climate-related financial disclosure regime on the horizon, companies will be preparing to utilise mandatorily disclosed third-party data to inform decision-making, and might be contemplating how AI can be used to assist with this exercise.
For your organisation
| Ask | Actions |
|---|---|
| | When determining whether (and the extent to which) to use AI in these processes, have regard to eg: |
This guide has outlined key ESG risks and impacts companies deploying AI should consider, the questions they should be asking, and what to do in response.
Footnotes
- See Federal Trade Commission Business Blog – Keep Your AI Claims in Check.
- A Director's Introduction to AI, AICD and UTS Human Technology Institute, June 2024.
- Huskey v State Farm Fire & Casualty Company; Open Communities and Richardson v Harbor Group Management; Mobley v Workday.
- Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.