Authors: Andrei I. Cursaru and Orion Forowycz
The Venturers team created an architecture to identify the relevance of each section of an ESG report. Their prototype classified sentences based on their content, making it easier to distinguish concrete facts from vague, promotional statements. The resulting relevance map helps users navigate reports quickly, enables standardized comparisons between companies, and incentivizes clarity in ESG reporting.
Dissecting ESG reports by relevance with LLMs
ESG (Environmental, Social, Governance) reports, published yearly by companies, were originally intended as a way for investors to easily assess whether investing in a given company may have indirect negative environmental or social impacts.
However, the interests of those writing these reports are not aligned with the interests of those reading them. Companies aim to make themselves look good and to make it as tedious as possible for readers to find the hard facts they are looking for.
As a result, ESG reports drown the facts in an ocean of vague and often meaningless motivational statements about the company’s beliefs, values, long-term goals, and so on. For investors, these digressions carry little to no information about the potential negative impacts of a company, since no company will ever claim that they strive to destroy the planet and mistreat all their workers.
How could we use AI to get more use out of these reports?
Large language models (LLMs) like GPT-4o can do many things, but they cannot directly tell us whether the information a company reports is true, partial, biased, or irrelevant. They cannot simply tell us if, and how much, greenwashing fills the pages of an ESG report, because the information needed to do so precisely is not available.
During the hackathon Sustainability meets LLMs, organised by AIM – AI Impact Mission and supported by TIMETOACT Group Österreich, the team The Venturers came up with a simple idea to make ESG reports more useful: process each sentence of a report with an LLM to evaluate its relevance and usefulness to investors. This yields a "relevance map" of the report along with summary statistics of relevance, and makes it possible to compare reports from different companies.
The prototype parses a PDF document page by page and sentence by sentence (including any figures) and feeds each sentence to a model (in our tests, GPT-4o) via the respective API. The model is prompted to answer a set of pre-defined questions with yes or no, such as:
- "Does this statement or figure say something about the values or beliefs of the company?"
- "Does this statement or figure state a quantified, concrete fact about the company?"
This output is then used to classify the sentence into one or more categories: beliefs & values, goals and missions, quantified hard facts about the company, qualitative facts about the company, and facts unrelated to the company.
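The classification step above can be sketched as follows. This is a minimal illustration, not the team's actual code: the question wordings beyond the two quoted above, the category names, and the `ask_model` helper (a stand-in for a yes/no call to GPT-4o via the OpenAI API) are assumptions.

```python
# Hypothetical question set; each category is attached to one yes/no prompt.
QUESTIONS = {
    "beliefs_values": "Does this statement or figure say something about "
                      "the values or beliefs of the company?",
    "goals_missions": "Does this statement or figure describe a goal or "
                      "mission of the company?",
    "quantified_fact": "Does this statement or figure state a quantified, "
                       "concrete fact about the company?",
    "qualitative_fact": "Does this statement or figure state a qualitative "
                        "fact about the company?",
    "unrelated_fact": "Does this statement or figure state a fact that is "
                      "not about the company?",
}

def classify_sentence(sentence: str, ask_model) -> list[str]:
    """Return every category whose yes/no question the model answers 'yes'.

    `ask_model(question, sentence)` is expected to return True for 'yes'
    and False for 'no' -- in practice it would wrap an LLM API call whose
    system prompt constrains the answer to exactly 'yes' or 'no'.
    """
    return [category for category, question in QUESTIONS.items()
            if ask_model(question, sentence)]
```

Keeping the model's output constrained to yes/no per question makes the responses easy to parse and lets a sentence land in several categories at once.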
By combining these outputs with the length of each statement, the team can easily infer the proportion of facts versus abstract statements for each page, report, or company. This can then be used to:
- Create a "relevance map" of the PDF, showing for example that facts about the company's environmental progress appear only on pages 35-39 of a 100+ page document. This guides investors to read only the relevant parts of the report, without relying on its often uninformative table of contents.
- Extract only statements of a desired category, such as quantified hard facts about the company, into a new document.
- Compute conciseness scores for reports based on these proportions.
- Compare companies to each other in a standardised way, in terms of how much they drown the facts in their reports with decorative text. In the long term, this could help hold companies accountable for clear and readable reports, by giving them an incentive to increase their conciseness score and focus their reports on facts.
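The aggregation behind the relevance map and conciseness score can be sketched as below. This is an illustrative reconstruction under stated assumptions: the category names match the hypothetical set above, "relevance" is approximated as the share of characters in fact-bearing sentences, and the exact weighting the team used is not specified in the write-up.

```python
from collections import defaultdict

# Assumed: these two categories count as "facts" for the relevance score.
FACT_CATEGORIES = {"quantified_fact", "qualitative_fact"}

def page_fact_shares(sentences):
    """sentences: iterable of (page, text, categories) triples.

    Returns {page: fraction of characters on that page belonging to
    fact sentences} -- the data behind a per-page relevance map.
    """
    fact_chars = defaultdict(int)
    total_chars = defaultdict(int)
    for page, text, categories in sentences:
        total_chars[page] += len(text)
        if FACT_CATEGORIES & set(categories):
            fact_chars[page] += len(text)
    return {page: fact_chars[page] / total_chars[page] for page in total_chars}

def conciseness_score(sentences):
    """Report-level share of fact text, in [0, 1]; higher is more concise."""
    total = sum(len(text) for _, text, _ in sentences)
    facts = sum(len(text) for _, text, cats in sentences
                if FACT_CATEGORIES & set(cats))
    return facts / total if total else 0.0
```

Because the score is a simple ratio of text lengths, it is directly comparable across reports of very different sizes, which is what makes cross-company comparison possible.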
Outlook:
The prototype could be improved in various ways, for example by fine-tuning the categories and adding more of them, depending on the sections of the report or the types of facts being stated. Using GPT-4o is convenient for prototyping and provides a good understanding of context, but is likely overkill for this task in the long run. Switching to a smaller but still capable model would improve the solution by reducing costs, energy consumption, and the overall carbon footprint.