Jonathan Middleton
Director, Financial Services
The application of AI in financial services has the potential to revolutionise the way firms operate.
The most promising areas for progress include improved data and analytical insights, increased revenue-generating capabilities, greater operational efficiency and productivity, enhanced risk management and controls, and stronger fraud and money laundering prevention.
However, firms must also consider the potential risks associated with the use of AI, to ensure that these benefits are realised without compromising the safety and soundness of the financial system or the interests of consumers.
What are the recent developments surrounding the use of AI and financial risk management?
A Bank of England (the Bank) survey indicates that the adoption of AI within financial services is likely to continue to grow, driven in part by the increased availability of data, improvements in computational power and the wider availability of AI skills.
In this context, the Bank, the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) released a Discussion Paper on Artificial Intelligence, and the UK government recently announced plans for an AI Sandbox. The regulators are also exploring how they could leverage AI to help meet their respective statutory objectives and other functions.
One of the topics the Discussion Paper specifically explores is governance and oversight, noting ‘Good governance is essential for supporting the safe and responsible adoption of AI’. The Senior Managers and Certification Regime (SM&CR) already establishes senior management accountability and responsibility, and the UK regulators have asked for views on the potential for a dedicated Senior Management Function for AI.
How do digital sandboxes empower the use of AI and financial risk management capabilities?
Regardless of whether a Senior Management Function for AI is created, firms will want to ensure they have clear risk management processes in place.
Many firms are already bringing their AI work into governance frameworks and processes, including hiring Data Ethics and AI leads and establishing Data Ethics boards. Another way firms can manage AI and ML risk (and data ethics risk) is by using Digital Sandboxes as part of their product development and risk oversight functions.
Digital Sandboxes provide firms with a secure environment in which to develop and test their products. By testing against synthetic data in a Digital Sandbox, delivered through a Digital Transformation Platform, firms can iterate on their products before releasing them to the public, developing AI and ML models that are more accurate and reliable while minimising the risk of bias.
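To make this concrete, below is a minimal sketch in Python of the kind of automated bias check a firm might run inside a Digital Sandbox before a model is promoted to production. The data generator, the stand-in scoring rule, the demographic parity metric and the 0.05 tolerance are all illustrative assumptions for this example, not features of any particular platform or regulatory requirement.

```python
# Minimal sketch of a sandbox-style fairness gate, assuming a hypothetical
# credit-approval model tested against synthetic (not real customer) data.
# All names and thresholds here are illustrative assumptions.
import random

random.seed(42)  # fixed seed so the sandbox run is reproducible

def make_synthetic_applicants(n=10_000):
    """Generate synthetic applicants; no real customer data leaves the firm."""
    applicants = []
    for _ in range(n):
        group = random.choice(["A", "B"])        # protected attribute
        income = random.gauss(50_000, 15_000)    # synthetic feature
        applicants.append({"group": group, "income": income})
    return applicants

def model_approves(applicant):
    """Stand-in scoring rule; a real firm would load its actual ML model here."""
    return applicant["income"] > 45_000

def demographic_parity_gap(applicants, approve):
    """Absolute difference in approval rates between the two groups."""
    rates = {}
    for g in ("A", "B"):
        members = [a for a in applicants if a["group"] == g]
        rates[g] = sum(approve(a) for a in members) / len(members)
    return abs(rates["A"] - rates["B"]), rates

if __name__ == "__main__":
    data = make_synthetic_applicants()
    gap, rates = demographic_parity_gap(data, model_approves)
    print(f"approval rates: {rates}, parity gap: {gap:.3f}")
    # Gate the release: block promotion out of the sandbox if the gap
    # breaches the firm's chosen tolerance (0.05 is an assumed figure).
    assert gap < 0.05, "bias check failed - model blocked from release"
```

In practice the stand-in scoring rule would be replaced by the firm’s actual model, and checks of this kind would run automatically as part of the sandbox pipeline, producing an audit trail that senior managers can review before any release.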
Making this innovation visible through a single platform de-risks it and gives senior managers oversight across the firm, creating the conditions for solutions to be tested and integrated successfully. This offers a more assured pathway for financial institutions to maximise the outcomes of their AI and financial risk management initiatives.
What should decision-makers in financial firms keep in mind on the way forward?
The release of the Discussion Paper signals a clear expectation that AI and ML will remain central to financial services firms’ digital transformation. It is also clear that while regulators are keen to support responsible AI, they are conscious of its impact on financial services, and see governance and oversight as the tools that will allow the industry to benefit from AI while maintaining sound risk management.