Artur Mika – Expert in the Commercial Banking Department
The European Union is witnessing a continued macrotrend of regulatory simplification, including in relation to sustainability regulation, driven by efforts to increase the competitiveness of the EU economy. Notwithstanding the amendments introduced in this regard, in particular through the Omnibus I package, regulations such as the Corporate Sustainability Reporting Directive (CSRD), the Taxonomy Regulation and the EBA Guidelines on the management of environmental, social and governance (ESG) risks (EBA/GL/2025/01) still require banks to work with enormous sets of data from dispersed sources and to process such data under time pressure. For this reason, the use of AI-based solutions in banks’ sustainability reporting and, more broadly, in ESG risk management may bring substantial benefits.
Firstly, AI enables the aggregation of data from various sources and systems, including data saved in non-standardised formats (such as scans in .pdf files or Excel spreadsheets). For instance, this may apply to data on the loan portfolio’s carbon footprint.
Secondly, thanks to AI, data on ESG risks and sustainable development are not only retrieved from a number of sources but also analysed in an automated manner, i.e. without ongoing human intervention in the process. Tools that enable natural language processing are particularly useful here, as they make it possible to perform an extended analysis of non-financial data, including reports from corporate borrowers, e.g. in terms of Scope 3 GHG emissions. In this way, artificial intelligence contributes to ensuring high-quality sustainability data.
Thirdly, the use of AI in sustainability reporting reduces the risk of human error, which occurs, for instance, when reporting forms are completed manually. AI tools can also detect inconsistencies in data and formats, as well as gaps in data, and flag such issues for verification. Moreover, these tools may be used to fill those gaps automatically, in particular by applying estimations and proxies, including for Scope 3 GHG emissions.
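By way of illustration, the gap-flagging and proxy-filling logic described above could be sketched as follows. All field names and sector proxy values are illustrative assumptions, not real reporting data or an established methodology:

```python
# Sketch: flag missing Scope 3 data and fill gaps with sector-average proxies.
# Field names and proxy values are illustrative assumptions only.

SECTOR_PROXIES_TCO2E = {  # hypothetical sector-average Scope 3 emissions
    "transport": 1200.0,
    "manufacturing": 950.0,
}

def fill_scope3_gaps(records):
    """Return (filled_records, flags); flags lists records needing human review."""
    filled, flags = [], []
    for rec in records:
        rec = dict(rec)  # avoid mutating the caller's data
        if rec.get("scope3_tco2e") is None:
            proxy = SECTOR_PROXIES_TCO2E.get(rec.get("sector"))
            if proxy is not None:
                rec["scope3_tco2e"] = proxy
                rec["estimated"] = True  # mark the value as a proxy, not reported
                flags.append((rec["client_id"], "filled with sector proxy"))
            else:
                flags.append((rec["client_id"], "no data and no proxy"))
        filled.append(rec)
    return filled, flags

records = [
    {"client_id": "A1", "sector": "transport", "scope3_tco2e": 800.0},
    {"client_id": "B2", "sector": "transport", "scope3_tco2e": None},
    {"client_id": "C3", "sector": "mining", "scope3_tco2e": None},
]
filled, flags = fill_scope3_gaps(records)
```

The key point of such a design is that every estimated value stays visibly marked, so that proxies are never silently mixed with reported figures.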
Fourthly, artificial intelligence may also be useful for modelling climate risk and performing scenario-based analyses and stress tests, which makes it possible, in particular, to examine the potential effect of various climate scenarios materialising on the bank’s portfolio over the long term.
Fifthly, AI may be used to detect greenwashing among banks’ corporate clients in cases where the signs of such greenwashing might have been missed by analysts, which may result in particular from the enormous volume and dispersion of non-financial data, whose manual verification would be impossible.
Sixthly, with the support of AI, it is possible to create interactive dashboards enabling real-time analysis of ESG indicators.
Given all the benefits mentioned above, in the practical context of bank management it is necessary to reflect on the risks and challenges related to the use of AI by banks (and, more broadly, by businesses in general) for sustainability reporting and ESG risk management.
The first of them is the ‘black box’ issue: relations between a bank and its stakeholders may be affected by difficulties in understanding and explaining the results of analyses generated by AI. In the specific context of mortgage banks’ activities, such risk may materialise, for example, where a bank, in order to analyse the physical risk to real property (e.g. flood risk), uses a deep learning model operating on data about the real estate used as loan collateral. Suppose the model downgrades the rating of a number of single-family houses by assigning a high flood risk to them, so that the bank should increase the capital buffer for this risk. This situation reflects the ‘black box’ issue: the AI model may be unable to transparently explain the operation of its own complex algorithms and may only provide a percentage probability that the flood risk will materialise.
Another hypothetical example of the ‘black box’ issue is a situation where a bank, using AI to support the assessment of a potential borrower, refuses to grant a ‘green loan’ for an energy-efficient house, and the bank adviser is not able to provide the client with a justification of the assessment carried out with the support of AI.
Another potential challenge related to the use of AI in banks’ sustainability reporting is the GIGO principle (garbage in, garbage out), well known in the world of IT. According to this principle, even an expensive and very advanced AI model may generate an erroneous report if the input data are of poor quality. In other words, if data from banks’ clients contain errors or certain data are missing, such errors may be replicated by AI.
If, for example, a borrower that is a small or medium-sized enterprise provides a rough estimate of its energy consumption, or mistakenly discloses such data in kWh instead of MWh in the ESG survey received from the bank, the AI model may take it as fact and, consequently, calculate an overstated carbon footprint for the loan portfolio.
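A simple defence against this kind of unit mix-up is a plausibility check on the reported figure. The sketch below assumes an illustrative plausible range for an SME’s annual consumption; the thresholds are not regulatory values:

```python
# Sketch: a plausibility check catching a likely kWh/MWh unit mix-up.
# The plausible range below is an illustrative assumption for an SME,
# not a regulatory threshold.

PLAUSIBLE_ANNUAL_MWH = (10.0, 50_000.0)  # assumed SME range, in MWh

def check_energy_mwh(value_mwh):
    """Return a flag string if the reported figure falls outside the range."""
    low, high = PLAUSIBLE_ANNUAL_MWH
    if value_mwh > high:
        return "implausibly high - possibly reported in kWh instead of MWh"
    if value_mwh < low:
        return "implausibly low - possibly reported in GWh instead of MWh"
    return None  # value passes the range check

# 120 MWh passes; 120_000 (likely a kWh figure entered as MWh) is flagged.
ok = check_energy_mwh(120.0)
flag = check_energy_mwh(120_000.0)
```

A flagged value would then be routed back to the client or to an analyst rather than fed into the portfolio carbon-footprint calculation.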
Another potential issue is that AI models learn from historical data, and data going back 10 years may be incomplete, compiled using various methods, or unavailable altogether. In a hypothetical scenario in which an AI model forecasts climate risk, the absence of natural disasters in a given region in the past does not necessarily mean that there is no risk of their occurrence in the future, yet the AI model may assess this risk as low on the basis of historical data.
The GIGO effect may also manifest itself where banks purchase databases from external providers in order to fill gaps in their data about clients who do not report on sustainability. Such databases may contain data based on general sector-specific averages (e.g. average emissions for transportation in Poland) instead of real data, and those averages are then used by AI. As a result, the bank may be financing companies whose activity deviates substantially from sustainability objectives, while believing that they fit the average values for a given sector and company size. For banks, this may involve the risk of unintentional greenwashing; on the other hand, it may result in refusing to finance a ‘green’ business entity that was misjudged by AI.
The GIGO effect may also be related to the phenomenon of automation bias, namely the psychological propensity to place excessive confidence in suggestions generated by AI without relying on one’s own knowledge, logic or information from the environment. It is worth emphasising that ensuring the quality of input data is a responsibility of the bank and its staff, not of the AI itself.
Another risk relates to data security, in particular where AI-based tools create security gaps that compromise the security of banks’ data and expose their clients to leaks. This threat may also result from cyber-attacks (which can themselves be supported by AI). Moreover, because of limited in-house computing capacity, large databases are now often processed by cloud computing providers, which shifts the risk of protecting such information into the domain of cloud computing risks.
Apart from the mentioned challenges linked to the use of AI in banks’ sustainability reporting and ESG risk management, additional aspects should also be taken into consideration.
Firstly, the cost of implementing and maintaining AI in line with applicable legislation (in particular the EU AI Act1) is high, both for entities that place such systems on the market and for entities that use them.
Secondly, applying AI to ESG reporting undoubtedly involves a paradox: the AI models and data centres used for reporting on environmental goals are themselves highly energy-intensive.
What actions can banks take to mitigate those risks?
One such method is so-called XAI (‘Explainable Artificial Intelligence’), which covers methods and techniques that allow stakeholders to understand why a specific AI model made a specific decision or generated a specific output. XAI is therefore a countermeasure to the ‘black box’ issue.
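One elementary XAI-style technique is local sensitivity analysis: perturbing each input of an otherwise opaque model and observing which feature moves the output most. The sketch below uses a simple stand-in scoring function as an assumption in place of a real deep learning model, and the feature names are illustrative:

```python
# Sketch: a crude explainability check - perturb each input of an opaque
# scoring model and report which feature drives the output most.
# The scoring function and feature names are illustrative assumptions.

def opaque_flood_score(features):
    # Stand-in for a deep model's output: a probability-like flood risk score.
    return min(1.0, 0.6 * features["river_proximity"]
                    + 0.3 * features["ground_elevation_risk"]
                    + 0.1 * features["historical_claims"])

def local_sensitivity(model, features, delta=0.05):
    """Per-feature output change for a small one-at-a-time perturbation."""
    base = model(features)
    impact = {}
    for name in features:
        bumped = dict(features)
        bumped[name] += delta
        impact[name] = model(bumped) - base
    return impact

house = {"river_proximity": 0.8, "ground_elevation_risk": 0.4,
         "historical_claims": 0.2}
impact = local_sensitivity(opaque_flood_score, house)
top_driver = max(impact, key=impact.get)
```

Even such a rudimentary check lets a bank adviser say which input mattered most for a given score, rather than merely quoting a probability; production-grade XAI methods are considerably more sophisticated, but the principle is the same.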
Another measure that reduces AI-related risks may be ‘data cleaning’, namely verification of input data by applying rules that check whether such data stay within logical ranges, cross-checking (i.e. confronting data from clients with market data) or auditing not only the output generated by an AI model but also its input data. In this context, it is worth highlighting that the implementation of XAI will, in effect, be required under the AI Act for high-risk systems, including, among other things, systems used in financial institutions to evaluate the creditworthiness or credit score of natural persons2. In the light of the challenges discussed above related to the use of AI in banks’ sustainability reporting and ESG risk management, an (optimistic) conclusion can be drawn: a human being is still needed. Even though AI may play an important part in reducing banks’ operational burden, its output still requires human supervision, in particular if it is to be included in decision-making processes.
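The ‘data cleaning’ rules mentioned above, a logical-range check and a cross-check against market data, can be sketched as follows. The thresholds, tolerance and benchmark values are illustrative assumptions:

```python
# Sketch: simple 'data cleaning' rules - a logical-range check and a
# cross-check of client-reported emissions against market benchmark data.
# Thresholds, tolerance and benchmark values are illustrative assumptions.

MARKET_BENCHMARK_TCO2E = {"transport": 1000.0}  # assumed market reference

def clean_check(record, tolerance=0.5):
    """Return a list of issues found in one client record."""
    issues = []
    value = record.get("scope1_tco2e")
    if value is None:
        return ["missing value"]
    if not 0.0 <= value <= 1_000_000.0:  # logical range check
        issues.append("outside logical range")
    benchmark = MARKET_BENCHMARK_TCO2E.get(record.get("sector"))
    if benchmark is not None:
        # Cross-check: flag deviations beyond +/- tolerance of the benchmark.
        if abs(value - benchmark) / benchmark > tolerance:
            issues.append("deviates from market benchmark")
    return issues

ok = clean_check({"sector": "transport", "scope1_tco2e": 900.0})
flagged = clean_check({"sector": "transport", "scope1_tco2e": 5_000.0})
```

Records that return a non-empty issue list would go to human verification before the AI model consumes them, which is precisely the point at which the GIGO chain can be broken.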
In this context, in the practical management of the ESG area in financial institutions, the optimum approach seems to be the Human-in-the-loop (HITL)3 concept: a hybrid approach to AI systems in which, instead of giving AI full autonomy, a human being is actively involved in the decision-making process and supervises, corrects and verifies the output of the algorithms (thereby also contributing to model learning).
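Operationally, the HITL idea can be as simple as routing any AI assessment below a confidence threshold to a human review queue instead of applying it automatically. The threshold, record structure and field names below are assumptions for illustration:

```python
# Sketch of Human-in-the-loop routing: AI output below a confidence
# threshold goes to a human review queue instead of being auto-accepted.
# The threshold and record structure are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9  # assumed internal policy threshold

def route(ai_outputs):
    """Split AI assessments into auto-accepted items and a human review queue."""
    auto_accepted, review_queue = [], []
    for item in ai_outputs:
        if item["confidence"] >= CONFIDENCE_THRESHOLD:
            auto_accepted.append(item)
        else:
            review_queue.append(item)  # a human verifies and corrects these
    return auto_accepted, review_queue

outputs = [
    {"client_id": "A1", "esg_rating": "B", "confidence": 0.95},
    {"client_id": "B2", "esg_rating": "C", "confidence": 0.60},
]
auto_accepted, review_queue = route(outputs)
```

Corrections made in the review queue can then be fed back into model training, which is what makes the human an active part of the loop rather than a mere final check.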
______________________