UK regulatory consultation hopes to bridge gaps in AI adoption

The industry forum looks to inform recommendations for the safe use of AI in financial firms.

The Artificial Intelligence Public-Private Forum (AIPPF), convened by the Bank of England (BoE) and the Financial Conduct Authority (FCA), is in the final stages of drafting recommendations on the safe adoption of AI for financial services, due to be published in the New Year.

An exact release date for the report has yet to be set. For now, the recommendations are not expected to become a binding UK regulatory framework for the adoption of AI, says Shameek Kundu, a member of the AIPPF.

Rather, financial firms can expect AI nuances and quirks to be embedded in existing rules, Kundu says.

The central bank and the regulator launched the AIPPF on October 12, 2020. The initiative aims to provide a forum for discussion between the public and private sectors on the safe use of AI in financial services, and includes representatives from the insurance, banking, technology, and academic sectors.

“The AIPPF has allowed us to learn a great deal about the key issues related to the use of AI in financial services, including what it means for us as a central bank,” Dave Ramsden, director general for markets and banking at the BoE and co-chair of the AIPPF, tells WatersTechnology. “We hope this type of dialogue and the AIPPF final report will advance the debate on how best to support the safe adoption of AI in financial services.”

The AIPPF meets quarterly for consultative sessions and workshops. Its discussions focus on topics such as the challenges of model risk management, data governance, and accountability for the outcomes produced by AI.

Regulators around the world are showing concern about the growing interest in AI from financial services. The UK supervisors’ initiative follows the European Union’s proposals for regulating the use and governance of AI, published in April 2021. Similarly, the Monetary Authority of Singapore (MAS) released principles for the use of AI and analytics in decision-making in financial products and services in 2018. And in March of this year, the Federal Reserve and other US prudential regulators issued a proposal asking for comment on the use of AI in financial institutions.

The Bank for International Settlements said in a report in August 2021 that most regulatory frameworks are still in the early stages of development, and take different forms, ranging from principles-based corporate governance requirements on AI, to non-binding guidelines on how to manage AI governance risks. Banks may struggle to move from experimentation in AI to scaling the technology across the enterprise, in part due to a lack of clarity on how to implement a safe AI strategy.

Also, the Covid-19 pandemic has exacerbated the need for clear guidance on how to adopt AI solutions, as banks and institutions have been forced to shift portions of their operations online and accelerate their digital transformations.

According to the minutes taken at the first AIPPF meeting in 2020, forum members said Covid-19 has accelerated the pace of automation and adoption of AI in their firms. These members argued that “firms need to keep up with appropriate controls and focus on the resilience of AI systems in the short term” to handle the unpredictable nature of the pandemic, including auditing AI algorithms. In the longer term, they will also need to think about AI and data management in the context of their wider technology infrastructure, as well as adjusting risk management processes accordingly, the minutes said.

Data drives AI, and data governance was an area of focus during the forum’s conversations—particularly the governance of alternative data. Many firms are using alt data in their models, but it is often unstructured, costly to cleanse, and more challenging to validate.

AIPPF’s Kundu is head of financial services and chief strategy officer at Truera, a California-based start-up that builds and tests AI solutions. He was chief data officer (CDO) at Standard Chartered Bank for almost 12 years until December 2020. Kundu says more and more financial firms are using external third-party data sources, such as satellite information, to train their machine learning (ML) models, and that presents new challenges.

“When you are using data that is not owned by either you or your customer but has been bought from a third party—whether it’s from credit bureaus, telecom providers, or others—that brings additional challenges around the reliability of that data, around privacy consent and the ethics of using that data,” he says.

Kundu says existing rules could be adapted to improve AI data governance. Financial firms must already comply with standards and regulations around data quality and governance across their organizations. The EU’s General Data Protection Regulation (GDPR), for example, has been transposed into UK domestic law; BCBS239, a global standard used to strengthen banks’ risk data aggregation and reporting, has been in place since 2016.

However, these kinds of wider standards have limitations when it comes to the specificities of AI, he adds, as they don’t cover, for example, mitigating bias within data that is fed into AI models, or using limited training datasets.

“Having accurate data is not enough: Representativeness is also important. If you’re going to do a model that works across all the UK, then don’t just collect data from the southeast. You must collect a much more representative sample,” Kundu says.
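Kundu’s point about representativeness lends itself to a simple sanity check. The sketch below is illustrative only and not part of the AIPPF’s work: the region labels and population shares are placeholder assumptions, and the idea is simply to compare the regional mix of a training sample against a reference distribution so that a sample drawn largely from one region is flagged before the model is trained.

```python
from collections import Counter

# Hypothetical region labels and placeholder population shares (illustrative only)
population_share = {
    "london_and_southeast": 0.31,
    "midlands": 0.17,
    "north_of_england": 0.23,
    "scotland_wales_ni": 0.16,
    "southwest_and_east": 0.13,
}

def representativeness_gap(sample_regions):
    """Compare the regional mix of a training sample with a reference distribution.

    Returns the difference between each region's share of the sample and its
    share of the population; large gaps flag a skewed training set.
    """
    counts = Counter(sample_regions)
    total = sum(counts.values())
    return {
        region: counts.get(region, 0) / total - share
        for region, share in population_share.items()
    }

# A sample drawn mostly from the southeast shows up immediately as a large positive gap
sample = ["london_and_southeast"] * 700 + ["midlands"] * 150 + ["north_of_england"] * 150
for region, gap in representativeness_gap(sample).items():
    print(f"{region:22s} gap vs population: {gap:+.2f}")
```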

Regulators could also adapt existing rules to assign accountability to senior managers for AI outcomes. The FCA’s Senior Managers and Certification Regime (SM&CR), for instance, already assigns personal accountability to senior executives at banks and insurance firms for upholding governance standards.

Kundu says internal rules and policies are typically drafted by one senior manager in a bank. This person would be responsible for setting the framework for how the AI model is used and governing the data used to train it. If something like the SM&CR were adapted for AI accountability, it would have to be carefully designed, however.

“If we say everyone is accountable, then nobody is accountable. But if we say one person—take a chief AI officer and make them accountable—that’s also problematic because everybody else can do whatever they want as there is one senior manager who’s going to take the blame,” Kundu says.

Kundu says in his role as CDO at Standard Chartered, he shared responsibility for model risk management with the bank’s head of risk management. From working with banks in his role at Truera, he says, he has seen that the US and the EU distribute accountability differently. US banks typically assign the responsibility to model risk teams, whereas elsewhere in the world, accountability is layered across the data management and model risk teams.

Morgan Stanley is one bank that has adopted a federated governance structure across its organization for managing its data. This means that each business or department within the firm has both ownership and accountability for how its data is used, taking the onus away from the bank’s information technology teams.

This is a common framework that can be adapted to govern AI models. During the AIPPF discussions, members suggested that one senior manager should outline the AI policy and standards, and that individual department heads would then be responsible for implementing them.

“So, the responsibility for ensuring that the recruitment algorithm is compliant is with the head of HR and the responsibility for ensuring the credit model is compliant is with the head of credit risk or the head of lending,” he says.

Scaling policies 

Financial firms have strict policies for managing the risk models they use to calculate risk-weighted assets or liquidity risk, or perform market analysis. These organizations are now using ML models in functions like recruitment, for transcribing meetings, or in client-facing solutions; their existing policies need to be scaled up to meet this wider scope of application, Kundu says.

“It’s not that our existing frameworks are inadequate. It’s just that they are built for a small number of high-stakes models, whereas now we have a very large number of high- and low-stakes models,” he says. “How do we scale up model risk management so that we can have coverage of many more models?”
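One common way to extend coverage without subjecting every model to the same heavyweight validation is to tier the model inventory by stakes. The sketch below is a hypothetical illustration of that idea, not a framework described by the AIPPF or by Kundu; the field names and review tiers are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    affects_individuals: bool  # e.g. credit or recruitment decisions
    business_impact: str       # "high" or "low", set by the model owner

def review_tier(model: Model) -> str:
    """Scale validation effort with stakes rather than applying one regime to every model."""
    if model.affects_individuals or model.business_impact == "high":
        return "full independent validation, annual re-review"
    return "lightweight checklist review"

inventory = [
    Model("credit_scoring", affects_individuals=True, business_impact="high"),
    Model("meeting_transcription", affects_individuals=False, business_impact="low"),
]
for m in inventory:
    print(f"{m.name}: {review_tier(m)}")
```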

The AIPPF identified risks arising from AI models in three categories: risk to the consumer, risk to the firm, and systemic risk. One of the key contributors to these risks is the complexity of the models being developed—are they neural networks with many layers and nodes, for instance?—and the challenges of explaining how they work to regulators or clients.

The forum members have also discussed how AI can influence, profile, and target consumers in ways that are unprecedented and technically impossible for a human. Sophisticated ML or natural language processing tools have the compute power to outperform any human analyst, analyzing and learning from huge volumes of data in seconds or minutes rather than days or weeks.

Kundu says the solution is making sure these models are trained to create “fair” and justifiable results.  

“If you’re building models that impact corporate customers or corporate partners, then it’s less concerning. But if you’re doing a recruitment model or a credit model, or anything that touches a human being like you or me, then the whole thing about unfair bias becomes a big issue,” he says.

Human in the loop

In 2017, Melanie Pickett joined Northern Trust, where she is head of front-office solutions, with the mandate to create a new line of business within asset servicing focused on meeting the operational and technology needs of the custodian’s clients. She is responsible for its front-office solutions aimed at endowments, foundations, family offices, and other asset owners and allocators.

Pickett has leveraged various kinds of AI to improve efficiencies for these clients in alternative investments. Among other use cases, Northern Trust is automating document procurement for alternative investment clients, tagging documents with reference data, and using NLP to extract operational data such as net asset value.

Pickett says that for critical AI solutions, such as the NLP solution, it’s important to ensure that a human is always reviewing the outcome.

“This is really critical information on which our clients are basing investment decisions. Good operational principles are having makers and checkers on everything you enter. You can think of the machine learning and the algorithms as being the first input for some of this operational data, but we will always have a human reviewing that information,” she says.
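The maker-checker pattern Pickett describes can be expressed as a minimal sketch. The names and fields below are hypothetical and are not drawn from Northern Trust’s systems: the model’s extraction is treated only as a proposal, and nothing reaches downstream systems until a human reviewer approves it.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExtractedNAV:
    """A net asset value pulled from a document by an NLP model (the 'maker')."""
    document_id: str
    value: float
    model_confidence: float
    approved_by: Optional[str] = None  # set only after a human reviewer signs off

def submit_for_review(record: ExtractedNAV, review_queue: List[ExtractedNAV]) -> None:
    # Machine output is never posted directly; it always enters a review queue
    review_queue.append(record)

def approve(record: ExtractedNAV, reviewer: str) -> ExtractedNAV:
    """The human 'checker' signs off before the value is released downstream."""
    record.approved_by = reviewer
    return record

queue: List[ExtractedNAV] = []
submit_for_review(ExtractedNAV("doc-123", 1_250_000.0, 0.94), queue)
posted = [approve(r, reviewer="ops_analyst_1") for r in queue]
print(posted)
```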

Another crucial way that her team avoids making errors with AI is by operating on an agile, cross-functional basis, she says.

“Our product team, our operations team, our technology teams are all one team. So as we’re designing prototypes, we’re iterating. Certainly, there have been bumps along the road, but we have the full team all in one place and all focused on the same mission and the same problem,” she says.
