by Régis Roba
- Private Sector Belgium Business Unit Director
If there is one industry that is hugely data-driven, handling hundreds of billions of transactions a day, and therefore well placed to benefit from Artificial Intelligence's strength in Big Data processing, it has to be the banking and finance industry. Ironically, that same industry is also highly risk-averse and, as a consequence, subject to a host of regulatory requirements meant to mitigate those risks. These are requirements that AI does not always meet yet, which seriously limits its value and usability in this specific industry.
Take, for instance, the interpretation capabilities that Artificial Intelligence possesses. Used unchecked, these capabilities can make it difficult to meet regulatory requirements. Financial institutions should therefore review the results of these interpretations through scenario analysis and backtesting, and log calculation results such as risk correlations, default and loss probabilities, and default and loss levels.
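As a minimal sketch of what such logging could look like, the snippet below records a model's risk estimates in an audit log and computes an expected loss that can later be backtested against realized outcomes. All names (the record fields, the model version, the numbers) are illustrative assumptions, not a prescribed format; the expected-loss formula EL = exposure × PD × LGD is the standard credit-risk relation.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

# Hypothetical record of one model calculation, kept for later backtesting.
@dataclass
class RiskCalculation:
    obligor_id: str
    model_version: str
    probability_of_default: float    # model estimate (PD)
    loss_given_default: float        # model estimate (LGD)
    defaulted: Optional[bool] = None # realized outcome, filled in later

def expected_loss(calc: RiskCalculation, exposure: float) -> float:
    """Expected loss = exposure * PD * LGD (standard credit-risk formula)."""
    return exposure * calc.probability_of_default * calc.loss_given_default

def log_calculation(calc: RiskCalculation, audit_log: list) -> None:
    """Append the full calculation as canonical JSON so results can be replayed."""
    audit_log.append(json.dumps(asdict(calc), sort_keys=True))

audit_log: list = []
calc = RiskCalculation("obligor-001", "pd-model-2.3", 0.02, 0.45)
log_calculation(calc, audit_log)
print(round(expected_loss(calc, 1_000_000), 2))  # prints 9000.0
```

Because every input and estimate is serialized at calculation time, a supervisor (or the bank itself) can re-run the formula on the logged values and compare estimated against realized default rates.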
Explainability is key
When using Artificial Intelligence, banks should always be able to trace how and why a conclusion was reached. As systems become increasingly automated and intelligent, there is a growing risk that AI solutions make decisions or recommendations that are difficult or even impossible to understand. In the case of credit risk control, for instance, credit advisers should work with the coordinators of AI solutions to develop a decision-control reporting system, so that every automated decision can still be explained.
To avoid such scenarios in the banking industry, banking supervisors impose regulatory requirements that set the framework for the proper handling of Artificial Intelligence. A tricky task will be reconciling AI applications with the standards of banking supervision. This will require entirely new regulations specific to these applications, which are already being discussed today.
Documentation is a duty
The duty to maintain process documentation is another important requirement that helps contain the risks associated with Artificial Intelligence. It ensures that banks and their decision-makers have all the information necessary for risk management fully and accurately available to them, including all the information about how their AI applications function. The entire documentation process should be an integral part of risk management: all essential formulas, parameters, variables and computational algorithms should be documented. And all this documentation should be written to meet the requirements of traceability, verifiability, completeness and correctness.
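The requirements above can be made concrete in code. The sketch below, using entirely hypothetical field names and values, captures a model's formula, parameters and input variables in a machine-readable record and fingerprints it, so that any later change to the documentation is detectable (verifiability and traceability).

```python
import hashlib
import json
from datetime import date

# Hypothetical documentation record for one model; all names and values are illustrative.
model_doc = {
    "model_name": "retail-pd-scorecard",
    "version": "2.3",
    "documented_on": str(date(2020, 1, 1)),  # fixed date for a reproducible example
    "objective": "Estimate one-year probability of default for retail obligors.",
    "formula": "EL = EAD * PD * LGD",
    "parameters": {"pd_floor": 0.0003, "lgd_downturn_addon": 0.05},
    "input_variables": ["income", "debt_ratio", "payment_history"],
}

def fingerprint(doc: dict) -> str:
    """Hash the canonical JSON form so any later edit changes the fingerprint."""
    canonical = json.dumps(doc, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(fingerprint(model_doc)[:12])  # short prefix of the document's fingerprint
```

Storing the fingerprint alongside each model release gives auditors a cheap check that the documentation they read is the documentation that was approved.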
Documenting the objectives and approach of an AI application, and logging the data it uses to derive correlations, is a start. But this process documentation alone is not sufficient to explain AI applications. The effort to make AI applications more transparent and comprehensible is known today as Explainable AI (XAI). You can find out more about it in this previous blog post.