How can financial services firms leverage AI for competitive advantage while mitigating risk? 

Risk management in the age of AI

Jonathan Middleton

Director, Financial Services

The application of Artificial Intelligence (AI) in financial services has the potential to revolutionise the way firms operate, offering improved data and analytical insights, increased revenue-generating capability, greater operational efficiency and productivity, stronger risk management and controls, and better fraud and money-laundering prevention.
However, firms must consider the potential risks associated with the use of AI to ensure these benefits are realised without compromising the safety and soundness of the financial system or the interests of consumers. A Bank of England (BoE) survey indicates that AI adoption within financial services is likely to continue to grow, driven in part by the increased availability of data, improvements in computational power and the wider availability of AI skills. In this context, the BoE, the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) released a Discussion Paper on Artificial Intelligence, and the UK government recently announced an AI Sandbox. The regulators are also looking at how they could leverage AI to help meet their respective statutory objectives and other functions.
One of the topics the Discussion Paper specifically explores is governance and oversight, noting 'Good governance is essential for supporting the safe and responsible adoption of AI'. The Senior Managers and Certification Regime (SM&CR) already outlines senior management accountability and responsibility, and the UK regulators have asked for views on the potential for a dedicated Senior Management Function for AI. Regardless of whether such a function is created, firms will want to ensure they have clear risk management processes in place.
Many firms are already bringing their AI work into governance frameworks and processes, including hiring Data Ethics and AI leads and establishing Data Ethics boards. Another way firms can manage AI and ML risk (and data ethics risk) is by using Digital Sandboxes as part of their product development and risk oversight functions. Digital Sandboxes provide firms with a secure environment in which to develop and test their products. Using synthetic data and a Digital Sandbox delivered through a digital transformation platform, firms can test and iterate their products before releasing them to the public, developing AI and ML models that are more accurate and reliable while minimising the risk of bias. Making this innovation visible through a single platform de-risks innovation and gives senior managers oversight across the firm.
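To make the sandbox-testing idea concrete, the sketch below shows one way a bias check on synthetic data might look in practice. It is a minimal illustration, not a regulatory-grade test: the applicant schema, the `approve` model and the group labels are all hypothetical, and demographic parity is only one of several fairness metrics a firm might monitor.

```python
import random

random.seed(42)  # reproducible synthetic data for repeatable sandbox runs


def make_synthetic_applicants(n=1000):
    """Generate synthetic loan applicants (hypothetical schema)."""
    applicants = []
    for _ in range(n):
        applicants.append({
            "group": random.choice(["A", "B"]),      # protected attribute
            "income": random.gauss(50_000, 15_000),  # annual income
            "credit_score": random.gauss(650, 80),
        })
    return applicants


def approve(applicant):
    """Toy credit-approval model under test (illustrative only)."""
    return applicant["credit_score"] > 600 and applicant["income"] > 30_000


def demographic_parity_gap(applicants, model):
    """Absolute difference in approval rates between the two groups."""
    rates = {}
    for g in ("A", "B"):
        members = [a for a in applicants if a["group"] == g]
        rates[g] = sum(model(a) for a in members) / len(members)
    return abs(rates["A"] - rates["B"])


applicants = make_synthetic_applicants()
gap = demographic_parity_gap(applicants, approve)
print(f"Demographic parity gap: {gap:.3f}")
```

In a sandbox workflow, a check like this would run on every model iteration, with the gap logged to the oversight platform so senior managers can see whether a candidate model drifts towards biased outcomes before it reaches customers.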
The release of the Discussion Paper signals a clear expectation that AI and ML will remain central to financial services firms' digital transformation. It is also clear that while regulators are keen to support responsible AI, they are conscious of its impact on financial services and are focused on governance and oversight as tools to ensure the industry can benefit from AI while maintaining sound risk management. Senior managers will need to consider how to implement the right tools and platforms to manage risk and ensure their firms are using AI responsibly.
