Bank of England Official Delivers Speech on Governance of Artificial Intelligence

On June 4, the Bank of England’s Executive Director of UK Deposit Takers Supervision, James Proudman, delivered a speech on “Managing Machines: the governance of artificial intelligence” at the United Kingdom Financial Conduct Authority (FCA) Conference on Governance in Banking.

In his speech, Proudman first gave an overview “of the scale of introduction of artificial intelligence in UK financial services.” He noted that artificial intelligence (AI) and machine learning (ML)

are helping firms in anti-money laundering (AML) and fraud detection. Until recently, most firms were using a rules-based approach to AML monitoring. But this is changing and firms are introducing ML software that produces more accurate results, more efficiently, by bringing together customer data with publicly available information on customers from the internet to detect anomalous flows of funds.

About two thirds of banks and insurers are either already using AI in this process or actively experimenting with it, according to a 2018 IIF survey. These firms are discovering more cases while reducing the number of false alerts. This is crucial in an area where rates of so-called “false-positives” of 85 per cent or higher are common across the industry.

ML may also improve the quality of credit risk assessments, particularly for high-volume retail lending, for which an increasing volume and variety of data are available and can be used for training machine learning models.
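The contrast Proudman draws between rules-based AML monitoring and ML-based approaches can be illustrated with a deliberately simplified sketch. The transaction amounts, threshold, and z-score cutoff below are all hypothetical, and a single per-customer statistical baseline is a toy stand-in for the far richer models firms actually deploy; the point is only that a fixed rule can miss an outflow that a model of the customer’s own behaviour would flag.

```python
import statistics

# Hypothetical transaction history for one customer (illustrative data only).
history = [120.0, 95.0, 110.0, 130.0, 105.0, 98.0, 115.0, 125.0]
new_transactions = [118.0, 5000.0, 102.0]

# Rules-based approach: flag anything above a fixed, one-size-fits-all threshold.
RULE_THRESHOLD = 10_000.0
rule_flags = [t > RULE_THRESHOLD for t in new_transactions]

# Behaviour-based approach: flag transactions far from this customer's own
# historical baseline (a toy stand-in for trained ML anomaly models).
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(amount, z_cutoff=3.0):
    """Return True if the amount is an outlier relative to the customer's history."""
    return abs(amount - mean) / stdev > z_cutoff

score_flags = [is_anomalous(t) for t in new_transactions]

print(rule_flags)   # [False, False, False] -- the fixed rule misses the 5,000 outflow
print(score_flags)  # [False, True, False]  -- the per-customer baseline catches it
```

The fixed rule never fires, while the customer-specific baseline isolates the anomalous payment — a miniature version of why, as the speech notes, ML approaches can raise detection rates while cutting false positives.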

But Proudman also cautioned that

[w]e need to understand how the application of AI and ML within financial services is evolving, and how that affects the risks to firms’ safety and soundness. And in turn, we need to understand how those risks can best be mitigated through banks’ internal governance, and through systems and controls.

In that context, he provided some comments on interim results from the survey that the Bank of England and the FCA sent in March 2019 to more than 200 United Kingdom financial firms. First, he characterized the mood concerning AI implementation among firms regulated by the Bank of England as

strategic but cautious. Four fifths of the firms surveyed returned a response; many reported that they are currently in the process of building the infrastructure necessary for larger scale AI deployment, and 80 per cent reported using ML applications in some form.

Second, he commented that

barriers to AI deployment currently seem to be mostly internal to firms, rather than stemming from regulation. Some of the main reasons include: (i) legacy systems and unsuitable IT infrastructure; (ii) lack of access to sufficient data; and (iii) challenges integrating ML into existing business processes.

Not surprisingly, Proudman noted that “large established firms seem to be most advanced in deployment,” with “some reliance on external providers at various levels, ranging from providing infrastructure, the programming environment, up to specific solutions.”

Proudman also stated that 57 percent of the respondent firms regulated by the Bank of England

reported that they are using AI applications in risk management and compliance areas, including anti-fraud and anti-money laundering applications. In customer engagement, 39 per cent of firms are using AI applications, 25 per cent in sales and trading, 23 per cent in investment banking, and 20 per cent in non-life insurance.

By and large, firms reported that, properly used, AI and ML would lower risks – most notably, for example, in anti-money laundering, KYC and retail credit risk assessment. But some firms acknowledged that, incorrectly used, AI and ML techniques could give rise to new, complex risk types – and that could imply new challenges for boards and management.

Based on his observations, Proudman identified three challenges that AI/ML poses for boards and management in the United Kingdom financial sector:

Data quality, including the existence of “complex ethical, legal, conduct and reputational issues associated with the use of personal data.”

The role of people, with particular regard to the use of incentives and the introduction of human biases that can affect machines’ output. In that regard, Proudman warned that “it may even become harder and take longer to identify root causes of problems, and hence attribute accountability to individuals,” and stated that “[f]irms will need to consider how to allocate individual responsibilities, including under the Senior Managers Regime.”

Change, including change associated with “the extent of execution risk that boards will need to oversee and mitigate” as the rate of introduction of AI/ML in the financial services sector increases. Here, Proudman stated that “the transition to greater AI/ML-centric ways of working is a significant undertaking with major risks and costs arising from changes in processes, systems, technology, data handling/management, third-party outsourcing and skills.” He also commented that this transition “creates demand for new skill sets on boards and in senior management, and changes in control functions and risk structures,” and “may also create complex interdependencies between the parts of firms that are often thought of, and treated as, largely separate. As the use of technology changes, the impact on staff roles, skills and evaluation may be equally profound.”

From these three challenges, Proudman derived three principles for governance of AI/ML:

“[T]he observation that the introduction of AI/ML poses significant challenges around the proper use of data, suggests that boards should attach priority to the governance of data – what data should be used; how should it be modelled and tested; and whether the outcomes derived from the data are correct.”

“[T]he observation that the introduction of AI/ML does not eliminate the role of human incentives in delivering good or bad outcomes, but transforms them, implies that boards should continue to focus on the oversight of human incentives and accountabilities within AI/ML-centric systems.”

“[T]he acceleration in the rate of introduction of AI/ML will create increased execution risks during the transition that need to be overseen. Boards should reflect on the range of skill sets and controls that are required to mitigate these risks both at senior level and throughout the organisation.”

Note: Governance, risk, and compliance officers in financial firms should read and give careful consideration to the issues that Proudman raises. Media reports have often seized on instances in which AI has produced counterproductive or misleading results. Proudman’s remarks, however, identify a number of even more complex challenges that senior executives and boards will need to address – not least because United Kingdom regulators can be expected, over time, to hold firms accountable if they do not address the constantly changing array of AI/ML execution risks and the need to maintain individual accountability under the Senior Managers Regime.


Published by Jonathan J. Rusch

I'm a lawyer and consultant interested in corporate- and individual-compliance issues, and an inveterate part-time law professor; a former federal prosecutor, regulator, and anti-bribery and corruption compliance head at a global financial institution; and a (very minor) shareholder in Williams Grand Prix Engineering.