INSIGHT

Guide to addressing ESG impacts in generative AI deployment

By Valeska Bloch, Graeme Grovum, Emily Turnbull, Dora Banyasz, Grace Ward, Billy Hade
AI | Cyber | Environmental, Social & Governance | General Counsel | Technology, Media & Telecommunications

Understanding AI washing, job displacement risks, ESG reporting opportunities and more

The widespread (and growing) deployment of generative AI across all sectors will have a deep and lasting impact on almost every aspect of our lives—from helping to predict supply and demand for renewable energy, and reshaping workplaces and workforces, to enhancing education and healthcare. But the opportunities afforded by genAI won't always offset the risks to both the environment and society.

Given Environmental, Social and Governance (ESG) considerations are an increasing priority for organisations globally, and given the current pace of AI deployment, organisations should review their ESG governance frameworks to ensure that the challenges and risks presented by genAI are adequately addressed. They should also consider the opportunities presented by genAI to enhance their ESG programs.

This guide outlines key ESG risks and impacts companies deploying AI should consider, the questions they should be asking, and what to do in response.

Key takeaways

  • Allocate responsibility for the governance of AI-related decisions and processes at organisational as well as individual levels, as outlined in our AI Governance Toolkit. 
  • Embed ESG considerations into your AI decisions and processes and vice versa. For example:
    • Ensure that AI impact assessments incorporate appropriate consideration of ESG matters and that AI deployment is taken into account in human rights assessments and human rights policies.
    • Consider any risks to the rights of workers who can be exploited in the development and training of AI.
  • Be accurate when describing how AI is used in your business, to avoid 'AI washing' and any consequent regulatory enforcement or private litigation.
  • Ask your suppliers about their own development, training and use of AI, and include AI providers in supply chain mapping and risk assessments.

Who in your organisation needs to know about this?

Boards, in-house / general counsel, technology team, sustainability team


Environmental impact

The training and use of AI models is highly power intensive, and can materially impact an organisation's overall carbon footprint. Experts estimate that:

  • by 2027, the AI sector could consume as much energy and water annually as the Netherlands; and
  • AI will cause the number of data centres, which store vast training datasets, to double globally in the next ten years, with their combined electricity consumption roughly equating to that of Japan.1

Of course, these power and water impacts should be balanced against any decarbonisation gains (eg where the AI solution is being deployed to assist with emissions reduction or more efficient operations).

However, in undertaking this calculation, it is important to consider whether such deployment simply shifts emissions along your business's supply chain, without addressing underlying environmental impacts. The emissions generated by AI usage could also count towards an entity's Scope 3 emissions under the Greenhouse Gas Protocol.

Note: Scope 3 emissions are all indirect emissions occurring in the value chain of the reporting company, including both upstream and downstream emissions.

For your organisation 

Ask:
  • How might this deployment impact our carbon footprint?
  • Does it simply shift emissions to a different part of the supply chain?

Actions:
  1. Understand the energy trade-offs and weigh the required output quality against energy use throughout your supply chain. Smaller, more specialised models may be sufficient and consume less power. Generating different outputs (text vs image vs video) has varying energy costs.
  2. Only use energy-intensive AI where appropriate.
  3. Seek out AI vendors that prioritise energy-efficient AI models and sustainable practices, but remain alert to AI greenwashing (ie the potential overstatement of their sustainability claims). If developing or training AI in-house, consider optimising algorithms for energy efficiency or using more energy-efficient hardware.
  4. Reporting companies should consider developing policies to transparently communicate Scope 3 emissions resulting from AI use. Although the methods used to calculate emissions from AI usage are still being developed, we expect to see a push for accountability on these matters; a simple estimation approach is sketched below.
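
The calculation itself can be straightforward once the inputs are known. Below is a minimal sketch, in Python, of one way to approximate the emissions attributable to AI usage. Every constant (per-query energy, data-centre power usage effectiveness, grid emissions factor) is an illustrative assumption rather than published vendor data, and real figures vary widely by model, modality and region.

```python
# Minimal sketch: approximate CO2e attributable to AI usage.
# All constants below are illustrative assumptions, not published figures.

ENERGY_PER_QUERY_KWH = 0.003    # assumed energy per text query, kWh
PUE = 1.2                       # assumed data-centre power usage effectiveness
GRID_KG_CO2E_PER_KWH = 0.5      # assumed grid emissions factor

def estimated_emissions_kg(queries: int) -> float:
    """Estimate kg CO2e for a given volume of AI queries."""
    energy_kwh = queries * ENERGY_PER_QUERY_KWH * PUE
    return energy_kwh * GRID_KG_CO2E_PER_KWH

# Example: one million queries -> 3,600 kWh -> 1,800 kg CO2e
# under these assumptions.
print(f"{estimated_emissions_kg(1_000_000):,.0f} kg CO2e")
```

Even a rough model like this surfaces the levers noted in Action 1: a smaller model or a text-only output reduces the per-query energy term directly.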

AI washing

Organisations are increasingly being held to account by both regulators and civil society for greenwashing (the overstatement of sustainability claims), and bluewashing (the overstatement of claims regarding social responsibility, including human rights). Stakeholders expect truthfulness and accountability from businesses, and this is extending to claims about other fields of interest, including AI.

Regulators globally have already begun targeting companies for 'AI washing' – the overstatement of a business's AI capabilities. In 2023, the US Federal Trade Commission (the FTC) released guidance for businesses on how to keep AI claims in check.2

In March 2024, the US Securities and Exchange Commission (the SEC) charged two investment advisers for making false and misleading statements in relation to public claims they had made about the use of AI.3 The SEC alleged that the advisers had misrepresented the extent to which AI was incorporated into their investment processes, among other alleged misrepresentations.

The SEC's enforcement action followed its publication of an Investor Alert, Artificial Intelligence (AI) and Investment Fraud, earlier in the year. In the Alert, the SEC noted that companies may make claims about how AI will affect business operations and increase profits, and bad actors may use buzzwords associated with AI in order to lure customers. The SEC advised that, in light of growing interest in emerging technologies like AI, investors ought to carefully review companies' disclosures before making investment decisions, including by comparing companies to their peers, to assess the risk of inaccurate statements.

While Australian regulators have not yet taken action in relation to AI-related representations, ASIC's focus on greenwashing and bluewashing, and public comments on the topic (including recent statements from the ASIC Chair)4, demonstrate that emerging ESG topics such as AI are on regulators' radars and may come under scrutiny going forward.

For your organisation 

Ask:
  • Do we have and maintain an inventory of the claims we make about our development, use or deployment of AI?
  • Do we have reasonable grounds on which to make those claims?
  • Are we using clear and easy-to-understand language when describing our use of AI?
  • How are we testing and verifying our claims to ensure they are accurate?

Actions:
  1. Create and maintain an inventory of the claims made about the development, use or deployment of AI.
  2. Confirm those claims (and the manner in which they are made) are not misleading or deceptive.
  3. Implement governance mechanisms to test and verify AI-related claims, for example via a structured claims register like the one sketched below.
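
As one way of operationalising Actions 1 to 3, the sketch below models a claims register as a simple structured record. The field names and status values are hypothetical choices for illustration, not a prescribed standard.

```python
# Minimal sketch of an AI-claims register. The schema and statuses are
# illustrative assumptions, not a prescribed standard.

from dataclasses import dataclass
from datetime import date

@dataclass
class AIClaim:
    claim_text: str        # the public statement, verbatim
    channel: str           # where it is made (website, prospectus, ad)
    evidence: list[str]    # documents substantiating the claim
    owner: str             # person accountable for its accuracy
    status: str = "unverified"   # unverified | verified | withdrawn
    next_review: date | None = None

register = [
    AIClaim(
        claim_text="Our platform uses machine learning to rank suppliers.",
        channel="corporate website",
        evidence=["model design doc v2", "2024 validation report"],
        owner="Head of Product",
        status="verified",
        next_review=date(2025, 6, 30),
    ),
]

# Governance check: flag any published claim without verified substantiation.
for claim in register:
    if claim.status != "verified" or not claim.evidence:
        print(f"Review required: '{claim.claim_text}' ({claim.channel})")
```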

Job displacement, exploitation and wellbeing

Generative AI deployment has the potential to transform workforces and workplaces, but it can also present a range of risks for workers, including:

  • Job displacement – the automation capabilities of generative AI could lead to job displacement in certain sectors, raising social sustainability issues related to employment and income inequality.
  • Exploitation – workers (throughout the technology supply chain) can be exploited to undertake the labour-intensive work of training AI models.
  • Wellbeing – the training of AI models can expose workers to potentially harmful content.

For your organisation 

Ask:
  • When purchasing AI systems, ask vendors who is responsible for training the AI and how the AI is trained. What safeguards are in place for protecting those workers?
  • Could the implementation of our AI system lead to job losses within our organisation or industry? How can we mitigate this impact? Is our approach compliant with workplace laws?

Actions:
  1. Modern slavery and other human rights factors should be key considerations when procuring AI systems, both as part of any vendor due diligence and in contractual arrangements.
  2. In health and safety training, address the risks of psychological harm to employees from the development of AI systems. If development of the AI model is outsourced, do due diligence on the AI provider to ensure workers are treated ethically and their wellbeing is protected.
  3. Develop strategies to support a fair transition for employees whose roles might be affected by automation, so that their skills can be redirected effectively elsewhere within the organisation.
  4. Ensure there is appropriate acknowledgement and compensation for workers who have made intellectual contributions, even where they have been assisted by AI.

Bias and discrimination

If not properly managed, generative AI may inadvertently perpetuate or exacerbate existing biases present in the training data. This could result in discriminatory outputs that harm individuals or groups. In practice, this also means that "AI harms tend to disproportionately affect vulnerable and marginalised communities".5

In the US, several cases have been brought alleging discrimination in the way that AI-powered tools operate – including cases against companies that use AI when hiring and screening job applicants, assessing insurance claims and assessing rental applications.6 Further, in a joint statement, several US regulators (including the Justice Department and Federal Trade Commission) have recognised that AI has the potential to cause unlawful bias and discrimination through unrepresentative or skewed datasets, and operating processes based on inaccurate assumptions, among other factors.7

Australian businesses could similarly be exposed to discrimination claims if the risk of bias and discrimination in outputs is not mitigated appropriately.

For your organisation 

Ask:
  • Do the datasets used for training or testing use 'protected characteristics' or characteristics that act as proxies for protected characteristics?
  • How are we ensuring that our AI system does not reinforce existing biases or discriminatory practices?

Actions:
  1. Use diverse training datasets to minimise bias, implement rigorous testing for bias in outputs (one simple test is sketched below), and establish a feedback loop for continuous improvement.
  2. Ensuring that the team of people training, deploying and overseeing the AI model (as applicable) is diverse can also help mitigate the risk that biases will be inadvertently embedded.
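
To illustrate what 'rigorous testing for bias in outputs' can look like in practice, the sketch below compares favourable-outcome rates across groups (a demographic parity check). The metric and the review threshold are illustrative analytical choices, not a legal compliance test.

```python
# Minimal sketch of an output-bias check, assuming binary model decisions
# and a recorded group label. Metric and threshold are illustrative.

from collections import defaultdict

def selection_rates(decisions: list[tuple[str, int]]) -> dict[str, float]:
    """decisions: (group_label, outcome) pairs, where outcome 1 = favourable."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for group, outcome in decisions:
        totals[group][0] += outcome   # favourable outcomes
        totals[group][1] += 1         # total decisions
    return {g: favourable / n for g, (favourable, n) in totals.items()}

def parity_gap(decisions: list[tuple[str, int]]) -> float:
    """Largest gap in favourable-outcome rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Example: screening outcomes by group; a large gap warrants investigation.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
if parity_gap(sample) > 0.2:   # illustrative review threshold
    print("Selection-rate gap exceeds threshold; review model and data.")
```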

ESG reporting

As ESG reporting obligations continue to expand, companies might be considering how AI could be used to comply with mandatory reporting obligations and assist with the preparation of voluntary disclosures. For example, AI could assist with:

  • collating data from diverse sources (eg from the company's internal documentation and previous reporting, as well as data provided or made publicly available by third parties in the company's value chain); and
  • analysing that data (eg identifying patterns and trends, or summarising relevant information), as illustrated in the sketch after this list.
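
As a concrete illustration of the collation step, the sketch below uses pandas to consolidate emissions records from multiple sources and flag gaps for human review. The file names and column layout are assumptions, and any AI-generated analysis layered on top should be subject to the safeguards discussed below.

```python
# Minimal sketch: collate emissions data for ESG reporting with pandas.
# File names and the column layout ['source', 'scope', 'co2e_tonnes']
# are illustrative assumptions.

import pandas as pd

frames = [pd.read_csv(path) for path in ["internal.csv", "suppliers.csv"]]
combined = pd.concat(frames, ignore_index=True)

# Aggregate by emissions scope for the report.
by_scope = combined.groupby("scope")["co2e_tonnes"].sum()
print(by_scope)

# Flag incomplete records for human review before anything reaches
# a mandatory disclosure.
missing = combined[combined["co2e_tonnes"].isna()]
if not missing.empty:
    print(f"{len(missing)} records lack emissions figures; review manually.")
```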

With Australia's proposed mandatory climate-related financial disclosure regime on the horizon, companies will be preparing to use third-party data disclosed under that regime to inform decision-making, and might be contemplating how AI can assist with this exercise.

For your organisation 

Ask:
  • How might generative AI help us comply with ESG reporting obligations?

Actions:

When determining whether (and the extent to which) to use AI in these processes, have regard to, for example:

  1. Governance arrangements – are there policies and procedures in place that clearly set out the parameters for acceptable (and unacceptable) AI usage to support ESG and sustainability reporting? How will compliance with these requirements be monitored?
  2. Data / content safeguards – are there safeguards (eg monitoring and oversight mechanisms) in place to ensure that outputs from AI usage are accurate and suitable for use in ESG and sustainability reporting or internal decision-making? Do those outputs avoid infringing third-party rights or triggering unintended open source licensing requirements?
  3. Risk appetite and strategy – does the proposed usage of AI align with the organisation's broader risk appetite and ESG strategy?
