Date of meeting: 1 October 2021
Minutes
As stated in the AIPPF Terms of Reference, the views expressed by the AIPPF members in these minutes and all other outputs do not reflect the views of their institutions, the Bank or FCA. Also, the activities, discussions, and outputs of the AIPPF should not be taken as an indication of future policy by the Bank or FCA.
Item 1: Opening remarks by co-chairs
Co-chairs Dave Ramsden and Jessica Rusu welcomed the members and observers to the fourth meeting of the Artificial Intelligence Public-Private Forum (AIPPF), which focused on governance.
Dave Ramsden
Dave started by thanking the members and observers for contributing their time, expertise and knowledge to the AIPPF. On the topic of governance, Dave noted that this had come up during the previous AIPPF meetings on data, which shows how crucial it is to the safe adoption of artificial intelligence (AI) in UK financial services.
One of the key questions is what makes AI different, from a governance perspective, from other new and emerging technologies. Dave suggested it is the incremental capacity for autonomous decision-making, which means AI can limit, or even potentially eliminate, human judgement and oversight from key decisions.
Clearly this poses challenges to existing governance frameworks in financial services and the concepts of individual and collective accountability enshrined in them. Dave explained how these challenges have implications for the collective composition and effectiveness of firms’ boards; enterprise-wide risk management and the three lines of defence model; and individual accountability, all of which are crucial elements of the Senior Managers and Certification Regime (SM&CR).
Lastly, Dave highlighted the shortage of technology expertise (including AI) on most firms’ boards, both individually and collectively, which can lead to a skills and engagement gap. He explained how this gap drives many of the aforementioned challenges and is an issue for all sectors, not just financial services.
Jessica Rusu
Jessica reiterated her thanks to the AIPPF members for their contributions. She then explained how there are many different layers to the term ‘governance’ in financial services, especially for the FCA as a conduct regulator. This includes the SM&CR, which applies to most financial services firms, but also wider issues like sustainability, environmental questions and the ethics of AI.
Jessica gave some examples of current FCA work in this space to demonstrate how the wider concept of governance is increasingly important in the regulation of financial markets. These included the current consultation on environmental, social and governance (ESG) issues in capital markets, proposals to boost disclosure of diversity on listed company boards and executive committees and, jointly with the PRA and the Bank of England, a discussion paper on how to accelerate the pace of meaningful change on diversity and inclusion in financial markets.
Artificial Intelligence, Jessica noted, may fall into the category of new issues where governance is a key regulatory principle. She also noted that whilst there is evidence to demonstrate how governance frameworks can contribute to good outcomes for consumers and markets, there are questions around how to measure and track outcomes. Building on existing best practice, including from industry, is key to making governance work, she observed. Therefore, Jessica said it was important that regulators learn from industry practice and current governance approaches to AI and jointly explore how governance can contribute to ethical, safe, robust and resilient use of AI in financial services.
Item 2: Roundtable discussion
The aim of the roundtable discussion was to identify and discuss the key issues of the following four topic areas:
- Governance structures
- Roles and responsibilities
- Transparency and communication
- Standards, auditing and regulatory framework
Governance structures
Key challenges
How can firms create governance structures that support innovation whilst delivering ethical, safe, robust and resilient AI applications?
- Automate as much of the evidence-gathering process as possible and embed it through the build chain (testing) and running services (monitoring), which will greatly speed up governance review and approval (an illustrative sketch of such an automated check follows this list).
- Clarify the triggers for manual processes, recognising that they scale poorly.
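By way of illustration only, the following is a minimal sketch of how such an automated evidence check might be embedded in a build pipeline. The artefact names, thresholds and tooling are hypothetical assumptions for the purpose of the example, not anything specified by members or the regulators.

```python
# Illustrative governance gate that could run in a CI pipeline.
# Artefact names and thresholds are hypothetical, not a prescribed standard.
import json
from pathlib import Path

REQUIRED_ARTEFACTS = ["model_card.json", "validation_report.json"]
MIN_ACCURACY = 0.80          # example threshold agreed with the second line
MAX_BIAS_DIFFERENCE = 0.05   # example fairness tolerance


def check_governance_evidence(artefact_dir: str) -> list[str]:
    """Return a list of governance failures; an empty list means the build may proceed."""
    failures = []
    base = Path(artefact_dir)

    # 1. Evidence artefacts must exist, so reviewers are not chasing documents later.
    for name in REQUIRED_ARTEFACTS:
        if not (base / name).exists():
            failures.append(f"missing artefact: {name}")

    # 2. Validation metrics must meet the thresholds recorded for this use case.
    report_path = base / "validation_report.json"
    if report_path.exists():
        report = json.loads(report_path.read_text())
        if report.get("accuracy", 0.0) < MIN_ACCURACY:
            failures.append("accuracy below agreed threshold")
        if abs(report.get("bias_difference", 0.0)) > MAX_BIAS_DIFFERENCE:
            failures.append("bias metric outside agreed tolerance")
    return failures


if __name__ == "__main__":
    problems = check_governance_evidence("artefacts")
    if problems:
        raise SystemExit("Governance checks failed: " + "; ".join(problems))
    print("Governance evidence complete; build may proceed to review.")
```

In practice a check of this kind would sit alongside, not replace, the manual triggers referred to above.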
How should current governance frameworks be adapted to accommodate AI use across the firm?
- Data governance frameworks need to consider how AI will be used alongside data and understand the impact when approving datasets.
- Cloud governance frameworks need to understand AI tools used on the cloud platforms and whether they are built in-house or provided by third parties as part of the cloud services.
What is an appropriate balance between centralised and federated governance structures when it comes to the use of AI?
- Central functions for defining and enforcing standards, as well as monitoring AI systems at the second line of defence.
- Federated implementation of the standards at the first line of defence.
How could firms, regulators and industry assess the performance of governance frameworks in the context of AI? How, if at all, does this differ from regulators’ supervision of firms’ governance in general?
- Evidence of the controls in place for the first and second lines of defence, as well as the process used to implement the controls.
- Regulators and industry could engage directly with the interfaces exposed by firms, executing requests to actively explore the efficacy of governance. This would be impact-focused rather than process-focused.
Discussion
Members agreed it is beneficial to automate the documentation generation and information/evidence collection aspects of the governance process. This allows the information to be embedded into the development lifecycle (and thereby the governance processes) rather than requiring additional activities at a later stage. That, in turn, helps with innovation and rapid development of AI systems because there is no need to engage in a long, slow waterfall-style approach.
Often controls around AI models and systems are semi-automated rather than fully automated. For example, controls around bias analysis can be automated to identify different forms of bias in the model and its outputs during the development stage (e.g. relating to the use of protected characteristics). However, it is then useful to have a human in the loop who can review the alerts and investigate further before the model is put into production (an illustrative sketch of such a check follows below).
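As an illustration of the kind of semi-automated control described above, the sketch below computes a simple demographic parity measure and routes material deviations to a human reviewer rather than blocking deployment outright. The column names, threshold and toy data are assumptions made for this example only.

```python
# Hypothetical sketch of a semi-automated bias control: the metric is computed
# automatically, but an alert is routed to a human reviewer for investigation.
import pandas as pd

ALERT_THRESHOLD = 0.05  # example tolerance for demographic parity difference


def demographic_parity_alerts(df: pd.DataFrame,
                              prediction_col: str = "approved",
                              protected_col: str = "protected_group") -> list[str]:
    """Flag groups whose approval rate deviates materially from the overall rate."""
    overall_rate = df[prediction_col].mean()
    alerts = []
    for group, rate in df.groupby(protected_col)[prediction_col].mean().items():
        if abs(rate - overall_rate) > ALERT_THRESHOLD:
            alerts.append(
                f"Group '{group}': approval rate {rate:.2%} vs overall {overall_rate:.2%};"
                " route to human reviewer before production release."
            )
    return alerts


# Example usage with toy data (purely illustrative):
toy = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
    "protected_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
for alert in demographic_parity_alerts(toy):
    print(alert)
```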
Members also agreed that governance should be tiered and aligned with the risk and materiality of the use case, even when the same overarching governance framework is applied to all AI systems. For example, high-risk and high-impact AI models will require more due diligence. Several members also agreed that the rapid pace of development in AI means that frameworks (including standards and principles) need to be fluid and should be reviewed regularly (and refreshed when necessary). That may require two levels of governance: (a) a strategy level that develops standards for the entire firm and (b) an execution level that applies the standards on a use-case-by-use-case basis.
One member said that governance frameworks should facilitate a safe environment for testing AI models. One of the factors that inhibits or slows innovation (across all industries) is an imbalance in the risk-reward incentive structure, which can heavily penalise mistakes and can mean that the cost of failure is high. This is particularly true for sensitive datasets, which can provide some of the biggest benefits if used correctly and ethically, but carry the highest costs if something goes wrong (including operational, business and compliance costs).
Members noted that, where possible, firms should leverage and adapt existing governance frameworks to deal with AI. The most relevant in financial services are data governance and model risk management frameworks, as well as operational risk management. AI ethics is one of the areas or issues that may require a new framework. Some ethical challenges are novel. Also, there is currently no single framework that helps in thinking through ethical challenges and addressing them effectively. Members also noted that existing skill sets often do not include experience of working with ethical issues. Members agreed that AI governance in general tends to require a broader set of skills, experience and backgrounds, e.g. ethics, people management, HR, rather than a narrow focus on technology, risk management and compliance.
One member noted that credit risk regulation in the United States, specifically the right to explanation, offers a useful starting point for firms’ governance of AI models. The requirements are not perfect, since they do not cover all relevant aspects of AI models (e.g. they are silent on fairness), but they are a helpful basis for thinking these issues through.
Roles and responsibilities
Key challenges
Are there clear lines of responsibility and accountability within firms when it comes to the design, development, deployment and monitoring of AI?
- It depends on the degree of AI maturity and use at the firm. It is also unlikely that a single, common approach would be suitable for all firms.
- A key area of uncertainty is the split in accountability and responsibility between AI model ‘developers’ (e.g. model build team) and ‘implementers’ (e.g. business leads). The lack of technical skills within the business areas can pose an additional challenge.
- For those with more mature use of AI, there tend to be clear lines of responsibility and accountability for the developers (e.g. model build team), compliance team (risk, audit, etc.) and business leaders who are ultimate decision makers.
Who should be ultimately responsible for AI within a firm (including under the SM&CR) and should this be a single individual or shared?
- There should always be responsibility at the Board level and down through senior managers into the rest of the firm.
- However, it is unclear whether there should be a single senior manager responsible for AI (e.g. a Chief AI Officer).
- This may depend on the level of AI maturity in the firm and the relevant skillset.
How can firms and their senior managers comply (and demonstrate compliance) with their responsibilities when they rely on third party service providers for one or more aspects of their AI use cases?
- Contractual relationships can set in place obligations for responsible practice on both sides. These should also include regular monitoring for performance and outcomes impact, both internally and with the third party provider (if necessary).
- Firms also need to ensure they have expertise and knowledge to be an intelligent client.
- As with previous areas, division of responsibility and liability between developers and implementers is complex and needs addressing.
What does the notion of “reasonable steps” mean when dealing with AI and how can firms demonstrate they are taking such steps when providing human oversight of AI models?
- Reasonable steps could include: having an ethics framework and training in place; maintaining documentation and ensuring auditability; embedding appropriate risk management and control frameworks; a culture of responsibility (ethics, governance, inclusion, diversity and training); clear lines of sight, reporting and accountability between AI teams; and more.
- Human oversight depends on the type of model (supervised vs unsupervised), but in practice adequate oversight requires the model implementer to ensure several things.
- These include: sufficient understanding of the model; clear parameters for performance and outcomes, including the ability to measure and monitor against them; and adequate training to identify model drift or erroneous outcomes and over-rule decisions where appropriate (an illustrative drift-monitoring sketch follows this list).
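As an illustrative sketch of what monitoring for model drift might involve, the example below computes the Population Stability Index (PSI) between a reference score distribution and a live one. The data is synthetic and the thresholds quoted are common rules of thumb rather than regulatory values.

```python
# Hypothetical sketch of one way an implementer might monitor for model drift:
# the Population Stability Index compares the score distribution seen in
# production against the distribution seen at validation.
import numpy as np


def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (expected) and a live (actual) score sample."""
    # Bin edges are taken from the reference distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))


# A common rule of thumb: PSI < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift.
rng = np.random.default_rng(0)
reference_scores = rng.beta(2, 5, size=5_000)   # scores seen at validation
live_scores = rng.beta(2, 4, size=5_000)        # scores seen in production
print(f"PSI = {population_stability_index(reference_scores, live_scores):.3f}")
```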
Is there an argument for giving some AI systems legal personality [1] and subjecting them to some form of regulatory vetting and supervision?
- The current generation of AI systems is not advanced enough, and does not exhibit truly human-like agency, for this to be warranted.
- Liability and accountability should sit with the individuals and organisations that create the AI model.
- Giving AI systems legal personality may also undermine efforts to ensure human responsibility and accountability.
Discussion
Members agreed there should be a centralised body within a firm that sets standards for AI governance on an ongoing basis. However, there was disagreement about whether this should be the responsibility of one senior manager (e.g. a Chief AI Officer) or shared between several senior managers (e.g. the Chief Data Officer and the Head of Model Risk Management). Even if it were a single senior manager, members did not agree on the need for a Chief AI Officer, since the governance standards may be wider and technology-agnostic (e.g. customer transparency standards need to be applied to all automated decisions, irrespective of the use of AI).
Members agreed that, given the complexity of AI models and processes, it would be more effective to locate responsibility for AI models at different levels of the firm in a federated manner. Members said the business areas that use AI models should be accountable and responsible for the outputs and, in turn, for adherence to and execution of the governance standards. At the senior management level, that means the accountable executive for the business area (e.g. the Chief Investment Officer or Head of Credit) should be responsible for the decisions made using AI models and for adherence to the governance process.
There was agreement that all individuals need the appropriate skillset and level of understanding to challenge decisions and make informed choices, including about the potential risks. Therefore, a centralised committee could play a role in providing education, training and the relevant information to business area executives (including on ethical issues). One of the members suggested mandatory training on AI for everyone in a firm (similar to existing training on data).
For areas like electronic communications surveillance and cloud governance, one member said the ownership of models sits within business areas at different levels (e.g. business user, IT owner, model owner). It is then important to have a second line of defence that is able to flag when things are becoming riskier or when performance issues arise (e.g. bias, drift, etc.). However, it is still unclear whether those second line functions should sit within the business areas or separately as a control function. A way forward for firms may be to embed second line teams into the relevant business areas, with a third line of defence at the enterprise level (e.g. compliance), so that governance flows from the top down to the individual business areas that own the models.
One member suggested that top-down and bottom-up governance approaches are not mutually exclusive and can work well for AI. For example, a top-down approach can include a number of executive committees that focus on different aspects, such as: the efficacy of the model, the performance and outcomes of the model, legality and regulatory compliance of the model, and the ethical considerations of the model. At the same time, firms can have a bottom-up approach for people involved at every stage of the design chain and AI lifecycle to sign-off on their respective aspects. That works like a conga line to ensure each team is held accountable for the design of the product. For example, the data team are responsible for outlier detection within the inputs into the model; the database team are responsible for the storage of the data, and so on.
Several of the members supported a production line approach for bottom-up AI governance. This is because lots of the issues and problems that firms are worried about are due to very small, intricate and low-level decisions made by model build teams. For example, what data to include in a training process or what to include in the model. This approach can provide the governance mechanism to link those small decisions up to the senior managers.
Another member drew attention to the challenge of monitoring and auditing third party models. One approach is to shift responsibility from the vendor to the client via contractual agreements. Members agreed this occurs in most cases and happens because vendors are unlikely to accept liability for clients’ use of their models. Moreover, many clients train third party models on their own proprietary data, which can alter the model slightly, affecting liability.
However, one member said there may be ethical concerns with this approach, in particular from civil society which wants technology providers to assume greater responsibility for their algorithms. Similarly, it was observed that the draft European Commission regulation on AI holds both the vendor and client accountable for credit scoring AI models.
Overall, members said the first approach (where the responsibility sits with the client, rather than the vendor) was the most common. However, the vendors still need to go through an accountability and review process to ensure they meet the relevant regulations. Practically, members noted, this can mean vendors have to complete much more detailed model governance for financial services firms.
Transparency and communication
Key challenges
What type of transparency is needed when AI models are used? Should the type and level of transparency be proportionate to the risks associated with particular AI applications?
- There are two distinct audiences for transparency: (1) model developers, compliance teams and regulators, and (2) consumers who are impacted by model decisions.
- For the first group, firms should be able to provide accurate information about the decision flows taken and assurance that recommendations/decisions are reliable. This may involve the use of explainability techniques such as SHAP values and LIME plots (an illustrative sketch follows this list).
- For the second group, transparency is less important than communication of what led to the decision specific to them, although consumers should be made aware when a model is being used to automate decisions.
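For illustration, the sketch below shows how SHAP values might be generated for the first audience (developers, compliance teams and regulators). The model and data are synthetic stand-ins; the example assumes the open-source shap and scikit-learn libraries and is not a recommended configuration.

```python
# Hypothetical sketch of generating SHAP explanations for a credit-style model;
# the dataset and model are synthetic stand-ins used purely for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic feature data for illustration only.
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "income": rng.normal(35_000, 10_000, 500),
    "debt_ratio": rng.uniform(0, 1, 500),
    "months_at_address": rng.integers(1, 240, 500),
})
y = (X["debt_ratio"] + rng.normal(0, 0.1, 500) > 0.6).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer gives per-feature contributions for each individual decision,
# which can support internal review and, suitably translated, customer communication.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(pd.DataFrame(shap_values, columns=X.columns).head())
```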
Are there best practices we could learn from (including from other sectors)?
- Transparency could borrow from best practice in data management and software engineering (e.g. lineage tracking, version control, and data and model cards, through to real-time monitoring); a minimal model card sketch follows this list.
- Firms should learn from higher-maturity organisations that already have standard model governance documentation and frameworks.
- General practice for consumers is still evolving but standardised approaches and a responsive attitude may lead to better consumer outcomes.
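As a minimal, illustrative example of the model card practice mentioned above, the structure below records the kind of information such a card might hold. The fields and values are assumptions drawn from common industry practice rather than any prescribed template.

```python
# A minimal, illustrative model card record; fields are assumptions, not a standard.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data_lineage: str          # reference to the approved dataset/version
    performance_metrics: dict = field(default_factory=dict)
    fairness_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    accountable_owner: str = ""


card = ModelCard(
    model_name="retail-credit-scoring",
    version="1.3.0",
    intended_use="Prioritise applications for manual underwriting review",
    training_data_lineage="applications_2020Q1-2021Q2, dataset approval ref DG-0421",
    performance_metrics={"auc": 0.81},
    fairness_metrics={"demographic_parity_difference": 0.03},
    known_limitations=["Not validated for business lending"],
    accountable_owner="Head of Retail Credit",
)
print(json.dumps(asdict(card), indent=2))
```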
How should firms, industry and regulators address the skills gap?
- The wider industry and regulators should recognise the following skills gaps and work together to put in place greater education, training and discussion: communication (the ability to bridge the gap between models and the people impacted by these model decisions); expertise in pragmatic application of data science and AI; more general software engineering and data skills.
Discussion
Members acknowledged the importance of explainability yet pointed to the fact that there is little in terms of research or viable products that could address this issue. Members noted that AI explainability is not just about the AI model but also about communicating to consumers in an accessible, meaningful and comprehensive way. This was identified as an area where industry stands to gain a lot from best practice.
With respect to internal explainability, no particular technical standard has proved superior to another and solutions are very context-dependent.
One member said that the focus should be on the customer experience. When looked at in this way, explainability, while still important, becomes part of a much broader requirement on firms to communicate decisions in such a way that they are meaningful as well as actionable. From this perspective, the focus is not just on model features and important parameters, but also on consumer engagement and agency.
Some members noted that, while there may be lots of design principles and guidelines within firms’ governance procedures, these are not usually communicated to end-users. It may be valuable to explore how communicating internal principles could be a useful adjunct to model explainability.
With respect to the use of third-party models, firms must have good reasons to deploy them, in particular if they raise novel and challenging ethical issues. For example, there are third-party solutions that claim to detect and classify emotions based on voice data. If firms were to take decisions based on input from these models, the transparency would need to be very exact; firms cannot simply act on the outputs saying that ‘this is what the model says’. Members noted that emotion-detecting models pose very difficult ethical questions and are also hard to monitor and evaluate. Another such example is the use of AI models in HR to assess and classify candidates/employees based on behavioural data.
Another member noted that while we’re discussing the need for a high degree of explainability for AI models, we’re not having the same discussion about explaining human decisions, which themselves likely involve some unconscious biases.
Auditability is another important element of transparency alongside explainability. Members argued that there is a need to work towards some form of certification and auditing in order to support the development of trustworthy AI. This could involve auditing by a third party or a regulator.
Members noted the current debate in the US on whether transparency necessitates the disclosure of the source code. Source code itself is not usually useful from a transparency or explainability perspective, and especially not to consumers, members observed.
Another issue for companies is the overlap between the need for transparency and the need to protect Intellectual Property and trade secrets.
Resilience of internal systems is a further consideration when thinking about transparency. AI systems can be quite complex, and some elements of the systems and processes around them, though not necessarily the AI component itself, may be compromised and open to cyberattack if too much detail of the inner workings is disclosed. A particular case is the use of AI in financial crime and anti-money laundering. Revealing too much detail of how those models work would clearly open them up to potential attacks, rendering them ineffective.
Such resilience issues when played out in the credit lending space, for example, may lead to systemic risks.
Members observed that practitioners need to be increasingly mindful of adversarial attacks, which are sometimes based on, or refined using, public disclosures. Many AI models are not very stable and can be attacked relatively easily; for example, some face recognition models can be fooled by changing a few carefully chosen pixels.
Members stated that there is an opportunity to simplify metrics and approaches. For an accountable executive looking after a suite of models and faced with a range of metrics, interpretability becomes very difficult. As an industry, it would be useful to pick a small set of representative metrics.
Another member said that while there are no industry standards on metrics and, more broadly, transparency and explainability, there is an understanding of what relevant risk frameworks should look like. And this concept could be extended to encompass AI.
While there is a lot of information produced on model risk by different financial institutions, there are large variations in what is reported and how it is reported. This is because the questions being asked come from many different, unrelated business areas. It would be useful to have a more homogeneous set of questions and a more homogeneous set of standards or guidelines around model risk.
Standards, auditing and regulatory framework
Key challenges
What role should the regulator play in prompting/incentivising the right governance structure?
- There are different views about the exact role of regulators in promoting innovation.
- One approach could have regulators explicitly state to industry participants that their goal is to foster a culture of innovation whilst ensuring transparency.
- A different approach could leave firms to incentivise a culture of innovation and, instead, regulators could focus on ensuring that firms can provide transparency into the use of AI.
Should existing regulatory requirements and expectations on governance be adapted or enhanced to meet some of the challenges raised by AI? If so, which areas should regulators focus on?
- The overarching principle should be to ensure that companies can provide transparency into AI algorithms as and when needed, and for firms to provide a remediation plan when an AI algorithm does not behave as expected.
Should there be regulatory requirements to measure outcomes and monitor the impact of AI on the firm, on consumers and on markets? What metrics would best capture outcomes and how should they be used?
- No, firms have a natural incentive to use AI to automate processes and reduce costs. Competition and innovation should reduce costs for consumers and increase product choice.
Is there a role for additional regulatory standards? What would they be?
- The regulator could ask companies to develop policies that outline how they consider the ethics of AI.
Should there be a certified auditing regime for AI? What should it certify?
- Certification could be used as a mark of recognition to identify those firms that have developed AI policies.
- Auditing of policies would extend to how firms plan to remediate errors in the case of false positives and how they have considered the ethics of AI.
Discussion
One example of best practice could be to ask firms to explain remediation policies for the outcomes of AI models.
Members observed that, when it comes to regulation, it is how AI impacts decision-making that is of interest, not the technology per se. Also, there are areas where regulatory expectations are clear, as in data protection. The ICO has published guidance in this area. Similarly, model risk management provides a framework through which AI issues can be approached and mitigated. What is lacking, however, is an overarching framework that brings these more disparate elements together and permits a more holistic understanding of the risks and benefits of the technology.
In terms of the need for new regulation, the US OCC recently provided a handbook for its examiners, rather than changing the model risk management framework (SR 11-7). For example, it highlights areas where examiners should pay particular attention, including AI. This could be one way forward for the UK.
The EC approach of identifying high-risk cases would also be useful in focussing attention where needed.
Members observed that a key area for clarification is algorithmic fairness. While there is a requirement to treat customers fairly, it’s not always clear what ‘fairness’ means in the context of AI. It may be useful for regulators to provide guidelines on, for example, which metrics and thresholds to consider. The difficulty is understanding what is acceptable and what is not. This links to the asymmetry of benefits vs risks which can act as a barrier to innovation, i.e. models may show some incremental benefits but carry large and uncertain risks.
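As an illustration of the kind of metric and threshold guidance discussed, the sketch below computes a disparate impact ratio and compares it against the widely cited ‘four-fifths’ heuristic. Both the metric and the threshold are used purely as examples and are not endorsed regulatory values; the data is a toy example.

```python
# Illustrative example of one fairness metric and threshold: the disparate
# impact ratio with the "four-fifths" heuristic. A sketch only, not guidance.
import pandas as pd

FOUR_FIFTHS = 0.8  # common heuristic threshold, used here purely as an example


def disparate_impact_ratio(outcomes: pd.DataFrame,
                           decision_col: str = "approved",
                           group_col: str = "group") -> pd.Series:
    """Ratio of each group's approval rate to the highest group approval rate."""
    rates = outcomes.groupby(group_col)[decision_col].mean()
    return rates / rates.max()


toy = pd.DataFrame({
    "approved": [1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})
for group, ratio in disparate_impact_ratio(toy).items():
    status = "within heuristic" if ratio >= FOUR_FIFTHS else "below heuristic; review"
    print(f"Group {group}: ratio {ratio:.2f} ({status})")
```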
Another member pointed to the recent paper on algorithmic fairness by FTC Commissioner Rebecca Kelly Slaughter. The paper focussed on three areas (accountability, transparency and fairness) and suggested which existing FTC regulatory tools may be relevant.
There is also a question of the level at which metrics and thresholds could apply. One member suggested that it may be more useful to provide and apply guidance at a specific use-case level rather than at a firm aggregate level where it would be difficult to construct and interpret.
Another member said that there is a role for regulators in providing greater clarity on the types of outcomes they expect around AI governance and controls. However, enforcing metrics around outcomes may prove to be challenging in practice.
There is also the question of who should be setting the principles and doing the monitoring; this is not necessarily something for the regulator but could sit within the firm.
One member noted that, while the discussion focused on AI specifically, many of the issues relate to non-AI models and processes, and firms should already have much of the monitoring and risk management in place. The main thing to monitor is the outcomes: for example, whether a facial recognition model is good enough in its outputs. This is not about whether the model is accurate or not, but about how the outputs are being used, whether clients are being treated fairly and whether the firm is thinking about negative impacts.
Another member agreed that having a set of principles is necessary and that the difficulty lay in operationalising those principles, including ensuring that they are effective at addressing the underlying challenges. It is the responsibility of firms to put the infrastructure and processes in place and to be able to evidence that the principles are adhered to.
Item 3: Closing remarks and next steps from the moderator and co-chairs
As moderator, Varun Paul concluded the discussion by thanking the members for their time and for sharing their expertise. As this was the last quarterly AIPPF meeting dedicated to specific topic areas, Varun then explained some of the next steps. These included the forthcoming workshops, which would focus on governance-related topics and take place in Q4, as well as plans for the final report that will be published at the conclusion of the AIPPF.
Varun also noted that the Bank and FCA will be thinking about what future engagement with the financial industry more broadly could look like in light of the lessons learned through the AIPPF. This includes how to take forward the numerous findings and recommendations that have come out of the AIPPF and will be included in the final report.
Dave and Jessica added their thanks to all the members for their hugely insightful contributions over the past year.
Attendees
Co-Chair | Organisation |
---|---|
Ramsden, Dave | Bank of England |
Rusu, Jessica | Financial Conduct Authority |
Moderator | Organisation |
---|---|
Paul, Varun | Bank of England |
Member | Organisation |
---|---|
Barto, Jason | Amazon Web Services |
Browne, Fiona | Datactics |
Campos-Zabala, Javier | Experian |
Christensen, Hugh | Amazon Web Services |
Dewar, Michael | Mastercard |
Gadd, Sarah | Credit Suisse |
Kellett, Dan | Capital One UK |
Kirkham, Rachel | Mindbridge AI |
Kundu, Shameek | Truera |
Lennard, Jessica | Visa |
Moniz, Andy | Acadian Asset Management |
Morrison, Gwilym | Royal London |
Rees, Harriet | Starling Bank |
Rosenshine, Kate | Microsoft UK |
Sandhu, Jas | Royal Bank of Canada |
Shi-Nash, Amy | National Australia Bank |
Tetlow, Phil | IBM UK |
Treleaven, Philip | University College London |
Observer | Organisation |
---|---|
Ahamat, Ghazi | Centre for Data Ethics and Innovation |
Dipple-Johnstone, James | Information Commissioner’s Office |
Meehan, Rachel | Office for Artificial Intelligence |
Yallop, Mark | FICC Markets Standards Board |
Apologies | Organisation |
---|---|
Dorobantu, Cosmina | Alan Turing Institute |
Mountford, Laura | HM Treasury |
[1] Legal personality means that the entity (person, corporation, thing, etc.) is the subject of legal rights and duties, such as the ability to enter into contracts, sue and be sued, and own property.