Beyond the Buzz: Practical AI Insights for Sustainability Teams

Introduction

From self-driving cars to customer service chatbots, artificial intelligence (AI) is everywhere. According to Stanford University’s 2025 AI Index Report, U.S. private investment in AI rose to over $109 billion in 2024, while 78% of organizations reported using AI, up from 55% in 2023. In parallel, governments are stepping up AI regulation and investment, with U.S. federal agencies introducing 59 AI-related regulations in 2024, more than double the number introduced in 2023.

To get beyond the buzz, Labrador hosted a panel of AI experts to share where AI can be useful to corporate sustainability professionals. They busted some common AI misconceptions and gave practical recommendations on things to consider when using AI.

Where AI can be useful

AI’s main strength is digesting and summarizing large amounts of text. When preparing sustainability disclosures, it can help accelerate text-heavy workflows in several ways:
  • Benchmarking peer disclosures to see what others are saying on certain topics.
  • Benchmarking against your own disclosures to check consistency and tone of voice across different reports and websites.
  • Completing first drafts or revisions of things like process memos, disclosure sections, and public commitments and targets.
  • Pressure testing initial ideas and conclusions, such as seeing if AI suggests alignment with the same UN Sustainable Development Goals (SDGs) as you have determined.
  • Checking disclosure gaps against regulations, voluntary frameworks and standards, questionnaire answers and more.
  • Collecting and monitoring data so sustainability professionals can focus on results interpretation, risk management and action.


Workiva AI is a great example of how useful AI can be: it lets you ask natural-language questions about various sustainability regulations, frameworks and standards, even as they are updated or amended. Beehive Climate is another good example: its AI software helps companies quickly identify possible physical and transition climate risks and develop first-draft disclosures aligned with the recommendations of the former Task Force on Climate-related Financial Disclosures (TCFD).

AI misconceptions

Despite its growing use by companies, many misconceptions swirl around AI and its impacts, from claims that it will completely replace entire jobs and professions to predictions that it will destroy (or save) company net zero goals. These misunderstandings don’t have to get in the way of AI’s thoughtful oversight and strategic use.

For example, according to Becky Darom, Lead Product Manager-AI at Workiva, one of the greatest misconceptions people have about Workiva AI is that it uses private company data without permission. One of Workiva AI’s largest differentiators is that it’s embedded where the work—and therefore context and data—lives. Users, at their discretion, can safely apply AI to their private, non-public data in a governed environment knowing that Workiva AI will not take action without their consent or use this data to train the model. Becky also stresses that Workiva designs for human-in-the-loop from the foundation up, as even the most automated AI lacks the expertise a human has in high-stakes workflows.

Risks and Opportunities of AI

AI promises more than just improvements in productivity. Many are touting its possible environmental benefits, from optimizing power grids and renewable energy management to monitoring ecosystem health and detecting water leaks. But according to the International Energy Agency, data centers consumed about 1.5% of all global electricity in 2024, and that share is expected to double by 2030.¹ Other possible non-environmental risks include algorithmic bias, misinformation and misuse, some job displacement and disgruntled communities not engaged in data center decisions.

1. International Energy Agency. (April 2025). “Energy and AI Report”.

Tips to keep in mind when starting with AI

1. Remember that humans are still needed.

AI cannot run a sustainability reporting process on its own. While it can be a useful tool, it does not replace human nuance, expertise and creativity, and its output requires human review. During his panel sessions at GreenBiz ’26, Michael Rockwell, Sustainability Project Manager at Hamilton Company, explained how he closely reviews everything his preferred AI tool produces as he works to integrate sustainability across the business. “AI is like Adobe Photoshop,” he advised attendees. “If someone knows you’re using it, you’re using it poorly.” 

2. Be careful not to be “aggressively average.”

Picking the right AI tool or agent matters, whether you need complex spreadsheet analysis, data collection and monitoring, or text generation. As Adriel Lubarsky, CEO of Beehive Climate, explains, “generic stuff comes out of generic tools.” Current AI models rely on past data and patterns, which can produce generic, low-quality content, or “AI slop.” Human intervention is essential to ensure disclosures reflect corporate tone and story and present fresh ideas. Otherwise, AI might keep you “aggressively average,” as Adriel puts it, by recycling language or holding you at the baseline of your peers without pushing you to improve.

3. Start thinking governance right away.

According to Daniela Arias, ESG Services National Market Leader at Crowe, AI “creates great value but also great externalities.”

She suggests companies integrate AI into their board oversight, risk management and sustainability disclosures early. Risks to consider fall into environmental (greenhouse gas emissions, water use, land use for data centers, biodiversity loss, etc.), social (job displacement, employee upskilling needs, bias, community impact, etc.) and governance (copyright infringement, cybersecurity threats, fraud, accountability and reputational harm, compliance risk, etc.) categories.

Daniela also recommends documenting things like:

  • How you use AI and decisions made, including AI tool choices.
  • How you ensure AI’s opportunities and risks are discussed at the same time.
  • How you will start to measure and track AI’s impacts on your climate footprint.

Responsible AI vs Ethical AI

According to Harvard University:²

Ethical AI refers to an approach to AI that is philosophical and focused on abstract principles (like fairness and privacy) while also examining the broader societal implications of widespread AI usage. For example, researchers investigating AI’s impact on the environment or its potential for workforce disruption are examining AI ethics.

Responsible AI is more narrowly focused on how AI is being used. AI responsibility deals with issues related to accountability, transparency, and regulatory compliance. For example, in a medical research setting, a responsible AI framework would ensure there was sufficient transparency into the AI algorithm to understand and eliminate any biases.

2. Harvard’s Division of Continuing Education. (June 2025 and March 2026). “Building a Responsible AI Framework: 5 Key Principles for Organizations.”

4. Look at what the big AI players are doing.

Google, Salesforce, Microsoft and others have already developed AI-related policies and reports that provide useful best practices to consider. 

Atlassian

In its Responsible Technology Principles and No-BS Guide to Responsible Tech Reviews, Atlassian explains how it believes that “responsible technology and responsible AI is a challenge that no one company can solve alone. That is why we are open about our efforts and invite feedback and collaboration.”

Microsoft

In its Responsible AI at Microsoft web pages and 2025 Responsible AI Report, Microsoft discusses how it makes responsible decisions about generative AI systems and models, including those related to AI it builds for itself and for customers the company supports in building their own AI.

Google

Google details its “methods for governing, mapping, measuring, and managing AI risks aligned to the NIST framework,” as well as updates on how it is “operationalizing responsible AI innovation across Google,” in its AI Principles and 2025 Responsible AI Progress Report.

Salesforce

Salesforce lays out its pillars of Ethical AI on its AI Ethics: Principles, Challenges, and The Future of Responsible AI web pages. The pillars include transparency and explainability (XAI); fairness and non-discrimination; responsibility and accountability; and data privacy, protection and security.

Looking forward

Perhaps the best AI advice is the simplest: Rachael Staab, Director of Global Sustainability at Workiva, suggests sustainability professionals “test it, try and get started.”

Looking forward, we anticipate that more companies will use AI to help them automate data collection and validation (including for difficult data across supply chains such as Scope 3 emissions), track metrics like greenhouse gas emissions, monitor regulations and risks, and streamline compliance, among other tasks.

While testing and trying AI tools, we recommend having conversations across the company about how best to integrate AI into governance and risk management and how to develop ethical and responsible AI policies. As the World Economic Forum explains, “to integrate AI responsibly, firms must establish robust validation processes and interrogate the models they use, while combining AI’s computational power with human judgement, transparency and stakeholder engagement.”³

3. World Economic Forum. (September 26, 2025). “How AI can transform sustainability reporting.”

Citations

Harvard’s Division of Continuing Education. (June 2025 and March 2026). “Building a Responsible AI Framework: 5 Key Principles for Organizations.”

International Energy Agency. (April 2025). “Energy and AI Report.”

The Institute for Experiential AI at Northeastern University. (December 2022). “What is the Difference Between AI Ethics, Responsible AI, and Trustworthy AI?”

Okta. (2024). “AI at Work 2024: C-suite perspectives on artificial intelligence.”

Stanford University Human-Centered Artificial Intelligence. (2025). “Artificial Intelligence Index Report 2025.”

United Nations Environment Programme. (November 2022). “How artificial intelligence is helping tackle environmental challenges.”

Wenger, Sarah. (March 11, 2026). “US AI Oversight Through Three Lenses: Investor Expectations, the S&P 100 and Company-Specific Analysis.” Harvard Law School Forum on Corporate Governance.

Workday. (2025). “AI Agents Are Here—But Don’t Call Them Boss.”

World Economic Forum. (September 26, 2025). “How AI can transform sustainability reporting.”
