The intersection of AI and ESG
As artificial intelligence (AI) continues to advance, it is crucial to understand its impacts on Environmental, Social, and Governance (ESG) factors. While AI offers significant benefits, such as time savings and increased productivity, it is important to consider the potential ESG costs. This article explores some of the risks and opportunities associated with AI and provides an overview of its impacts on ESG.
The environmental impact of AI
Large language models, which underpin many generative AI tools, have a significant environmental impact due to their high power consumption. This is primarily because:
- Training these models requires substantial computational resources, which consume a lot of electricity.
- Storing the vast amounts of data used for training and the resulting models also demands considerable energy.
- Each time a user queries an AI, the model needs to process the request, which uses power.
The emissions produced by using AI will be captured in your value chain’s greenhouse gas emissions. It’s essential to consider not only the direct emissions from AI operations but also the indirect emissions associated with the production and disposal of AI hardware. Additionally, organisations should evaluate the energy sources powering their AI systems and consider an environmental impact assessment to enhance the sustainability of their AI initiatives.
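To make emission tracking more concrete, the sketch below applies the standard activity-based approach, energy consumed multiplied by a grid emission factor, to the training, storage and inference activities listed above. Every figure, including the 0.7 kg CO2e per kWh factor, is an illustrative assumption; real estimates should use metered or provider-reported energy data and your grid's published emission factor.

```python
# Illustrative sketch only: activity-based emissions estimate for AI use.
# All figures below are placeholder assumptions, not benchmarks.

GRID_EMISSION_FACTOR = 0.7  # assumed kg CO2e per kWh; use your grid's published factor


def estimate_emissions_kg(energy_kwh, emission_factor=GRID_EMISSION_FACTOR):
    """Return estimated emissions in kg CO2e for a given amount of energy used."""
    return energy_kwh * emission_factor


# Assumed monthly energy use for each activity described above (placeholders only)
training_kwh = 12_000   # model training or fine-tuning runs
storage_kwh = 800       # storing training data and trained models
inference_kwh = 2_500   # day-to-day user queries

total_kwh = training_kwh + storage_kwh + inference_kwh
print(f"Estimated AI-related emissions: {estimate_emissions_kg(total_kwh):,.0f} kg CO2e per month")
```

In practice, emissions from cloud-hosted AI services are likely to be estimated from provider-supplied data and reported within your value chain (Scope 3) emissions.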
Key considerations for organisations assessing the environmental impact of AI:
- Emission tracking: How will the emissions produced by your AI systems be accurately measured and reported within your value chain’s greenhouse gas emissions?
- Benefit analysis: How do the potential benefits of AI, such as improving operational efficiency, compare to its environmental costs?
- Sustainable practices: What sustainable practices can you implement to mitigate the environmental impacts of your AI systems?
Measure your carbon footprint
If you’re interested in tracking your organisation’s carbon emissions, including how to establish organisational and operational boundaries and how the different scopes of emissions work, you should attend our carbon accounting masterclass.
Community and workforce
AI’s transformative power in the workplace has been widely reported, highlighting its ability to automate time-consuming tasks, generate reports, and analyse large data sets to identify trends and anomalies. However, there are concerns about job displacement and reduced workforce needs, particularly in sectors such as retail, medicine, law, media, the creative arts and data analysis.
Key considerations for organisations regarding AI’s impact on the community and workforce:
- Transparency: How can you effectively communicate your AI adoption plans to your workforce to ensure transparency and build trust?
- Responsibility: Who is responsible for the use and implementation of AI?
- Policy development: What boundaries and policies should you establish to guide the ethical and responsible use of AI within your organisation?
- Training: What training programs can you provide to ensure staff are properly trained in the responsible use of AI?
Human rights
The societal challenges of training AI models, particularly the need to protect the humans tasked with removing or repeatedly dealing with undesirable content, should not be underestimated. The mental health impacts on these workers have received some attention, but there are also troubling instances of exploitation. Workers are often subjected to labour-intensive tasks with minimal support and inadequate compensation. Addressing these issues is crucial to ensure ethical AI development and to safeguard the wellbeing of those involved.
Although Australia does not have a reputation for labour exploitation, it can still occur, especially where operations extend across different jurisdictions. To address these ethical challenges and safeguard the wellbeing of those involved in AI development, consider these key questions:
- Worker protection: How can organisations ensure that the workers tasked with training AI models are protected from mental health impacts and exploitation?
- Support and compensation: Are you providing adequate support and fair compensation to the workers involved in the labour-intensive tasks of AI training?
In its submission ‘Safe and Responsible AI: the opportunities, risks and regulation’, Chartered Accountants Australia New Zealand called for an international standard to protect the workforces who train AI models and for adoption of the Guiding Principles on Business and Human Rights issued by the United Nations.
Integrity and information
As with all emerging technologies, robust governance and integrity are essential to prevent system misuse. It’s important to remember that generative AI tools do not ‘think’; they generate responses by applying statistical patterns, learned from their training data, to the sequence of words in a prompt.
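To illustrate the point, the toy sketch below generates text purely from word-pair patterns in a short sample sentence. The sample text and the simplistic word-pair approach are illustrative assumptions only; real large language models are vastly more sophisticated, but their output is similarly pattern-driven rather than fact-checked.

```python
# Toy illustration only: text generated from learned word-pair patterns,
# with no understanding of whether the result is true.
import random

training_text = "our emissions fell last year and our emissions rose this year"
words = training_text.split()

# Record which words tend to follow which in the sample text
follows = {}
for current, nxt in zip(words, words[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate a fluent-looking sentence by chaining learned word pairs
word = "our"
output = [word]
for _ in range(6):
    word = random.choice(follows.get(word, words))
    output.append(word)

print(" ".join(output))  # plausible wording, but not checked against any facts
```

Run a few times, this produces fluent-sounding but potentially contradictory statements, which is why the governance questions below matter.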
To assist, we recommend that your organisation consider the following:
- AI use policy: Have you developed a flexible AI use policy or guidelines to leverage AI’s benefits while managing its risks?
- Usage boundaries: Have you established clear boundaries for AI use? How will it be relied upon? What types of data should not be included in an AI system? Are there specific scenarios where your AI use is deemed unacceptable? If so, make these clear.
- Accountability management: How are you managing accountability? Who is responsible for fact-checking AI outputs and their sources? Remember, generative AI produces answers from patterns it has learned rather than from verified facts, so human judgement and review remain essential.
- Sustainability claims: How is your organisation ensuring it does not overstate its sustainability efforts? Consider implementing mechanisms to avoid exaggerated, misleading or deceptive claims.
AI literacy at all levels of the organisation is key to its safe and responsible use.
Aligning AI with ESG goals
AI has already demonstrated its value in contributing to environmental sustainability through applications such as climate modelling, tracking deforestation, decarbonising industries, and promoting sustainable manufacturing and farming practices. Stakeholders responsible for sustainability within organisations should remain prudent and mindful that inherent risks come with innovation and technological advancements. This is equally true for AI tools, which must be aligned with ESG principles.
With the deadline for mandatory sustainability-related disclosure approaching, companies may consider how AI can assist in this process. If you need guidance on managing these risks and opportunities in a safe and sustainable way, contact us.