This article was first published by Law360.
The Financial Conduct Authority (FCA) recently launched a call for input on the long-term impact of AI on retail financial services. The review builds on the FCA's existing AI Lab initiatives and its often-expressed view that existing regulatory frameworks strike the right balance between allowing firms the freedom to innovate and compete with their use of AI, while also providing enough 'regulatory bite' to manage any risks arising from its deployment.
The call for input closes on 24 February, and has four interconnecting themes:
- The future evolution of AI technology – including more powerful, autonomous and agentic systems
- The future impact of AI on markets and firms – including changes to competition and market structure
- Future consumer trends – including how AI could improve outcomes, create new risks, change behaviours and alter demand and provision of financial services
- Future regulatory approach – including how regulators may need to evolve to continue ensuring retail markets work well.
Theme 4 is the most controversial. While the financial regulators currently seem wedded to a pro-innovation but risk-mindful approach, the Treasury Select Committee says this approach risks serious harm to consumers and the wider financial markets. So what is the best way forward for regulation in this ever-evolving area?
The regulators' position: pro-innovation and opportunities
It is no secret that the UK financial regulators are pro-innovation, and keen to encourage responsible AI use to increase efficiency, competition and customer satisfaction in the markets. The FCA's approach to AI is principles-based and outcomes-focused – to allow firms flexibility in adapting to technological change and market developments, rather than implementing prescriptive AI frameworks like the risk-based approach of the EU's AI Act, or state-level legislation in the United States. The FCA has continually said it does not currently plan to introduce extra regulations for AI, instead relying on existing regulatory frameworks which it says are versatile enough to mitigate the risks associated with AI. AI is also a key pillar of the FCA's 2025-2030 Strategy, and its aim to become a 'smarter regulator'.
The Prudential Regulation Authority's supervisory priorities for 2026 similarly highlighted the increased use of AI tools and the opportunities and novel risks these might pose for its regulated community. The Bank of England (BoE) also reiterated its technology-agnostic approach in encouraging firms to deploy AI safely, which it sees, alongside distributed ledger technology and quantum computing, as having the greatest potential to shape the economy and the financial services industry.
The FCA has gradually added new elements to its AI Lab, an initiative that provides a venue for engaging with AI insights and for developing new AI models and solutions:
- Supercharged Sandbox – giving firms access to high-performance computing, enriched datasets and advanced tools to develop and test early-stage AI ideas to tackle challenges in retail and wholesale markets. It launched in June 2025, and in January 2026, the FCA hosted a showcase of AI-enabled solutions, focussing on how firms can responsibly develop, test and deploy AI.
- AI Live Testing – providing a safe place for firms to test AI systems in real-world environments, with appropriate regulatory oversight. Applications for the second cohort of testing are now open.
- AI Sprint – events that bring together representatives from industry, academia, regulatory technology and consumer groups, and aim to inform the FCA's regulatory approach to AI.
- AI Input Zone – where the FCA calls for input on firms' experiences of using – or considering using – AI.
- AI Spotlight – a platform for firms and innovators to showcase real-world applications of AI in financial services.
Robust risk awareness
According to the BoE, the 'technology-agnostic' approach does not mean a 'technology-blind' approach: the regulators will act if the use of AI or similar innovations is likely to have an adverse impact.
Other industry bodies have gone further in highlighting the risks that AI might pose to the financial services industry, with some commentators suggesting that the industry has not fully grasped the risk profile of large-scale AI deployment, and calling for a stricter approach from the regulators.
The Financial Policy Committee (FPC)'s review underlined some opportunities in the area, but also highlighted several risks, including that:
- AI in core decision-making could introduce risks around quality, explainability and predictability of output, so if many firms rely on similar models or data libraries, widespread incorrect estimation of risk could result
- Unrecognised flaws could amplify shocks to the financial system and lead to a loss of market confidence
- The opaque nature of some AI-driven decisions could raise difficulties in determining liability, raising the risk of legal challenge and financial redress events
- Increased reliance on a small pool of AI vendors poses considerable concentration risk, both at firm level and, in widespread disruption scenarios, across the wider system
- For financial crime, there is a bidirectional 'arms race' whereby AI may enhance both the offensive and defensive capabilities of cyber risk such that the overall impact from a financial stability perspective is uncertain.
The Lending Standards Board published research in mid-2025 highlighting flaws in AI-powered 'chatbots' used in financial services: it found a strong customer preference for 'live agent' chats, because customers either found the chatbots unhelpful or were left unsure about next steps.
The FCA's own AI research series has highlighted problem areas with certain tools. One pilot study examined bias in word embeddings (mathematical representations of words that capture their meanings and relationships to other words and phrases), which can encode harmful demographic biases and cause tangible harm if deployed in consumer-facing operations. The study found that, while there is some ability to measure and mitigate bias, there is no consensus on the best way to tackle the current limitations.
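To make the word-embeddings point concrete, the kind of demographic association the FCA's pilot study describes can be sketched with a toy example. Everything below is illustrative: the vectors are invented for the sketch, not drawn from any real model or from the FCA's research, and real embeddings have hundreds of dimensions. It simply shows how an association between occupation words and gendered words can be measured with cosine similarity.

```python
import math

# Toy 3-dimensional "embeddings" -- invented numbers for illustration only.
embeddings = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [-0.9, 0.1, 0.0],
    "engineer": [0.7, 0.5, 0.2],
    "nurse":    [-0.6, 0.5, 0.3],
}

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def gender_association(word):
    """Positive => the word sits closer to 'he'; negative => closer to 'she'."""
    return cosine(embeddings[word], embeddings["he"]) - cosine(embeddings[word], embeddings["she"])

for w in ("engineer", "nurse"):
    # With these toy vectors, "engineer" scores positive and "nurse" negative --
    # the pattern a bias audit would flag before consumer-facing deployment.
    print(f"{w}: {gender_association(w):+.2f}")
```

If an embedding like this fed, say, a CV-screening or creditworthiness tool, the skewed associations would propagate into its outputs, which is why measurement of this kind matters even though, as the study notes, there is no consensus on the best mitigation.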
A later study in the series, on the potential for demographic pricing differences in the mortgage market, concluded that there was no evidence of differences in mortgage pricing across demographic groups, but rather that different groups appeared to hold different types of mortgage products. That said, the FCA felt this did not allow it to conclude that there were no issues with product availability for certain customers.
Warning from the Treasury Select Committee
Against this somewhat relaxed regulatory backdrop came criticism from the Treasury Select Committee. Its January 2026 Artificial Intelligence in Financial Services report concluded that the regulators and the Treasury are not doing enough to manage AI risks, and that their 'wait-and-see' approach exposes consumers and the markets to potentially serious harm.
The BoE had said that supervision had to be about 'dealing with the impact of situations as they occur', and the FCA similarly claimed that the existing Consumer Duty and Senior Managers and Certification Regime together provide 'enough regulatory bite' that it does not need to write AI-specific rules. However, the Committee had received input that flagged:
- AI-driven decision making (particularly in credit and insurance) lacks transparency
- Using AI in financial decision making and product design risks excluding disadvantaged customers
- AI search engines can provide unregulated financial advice, which may mislead or misinform customers
- AI may increase the risk of fraud and cybersecurity vulnerabilities
- UK firms over-depend on a small number of US AI and cloud service providers
- In financial markets, AI-driven trading could amplify herding behaviour, risking a financial crisis in a worst-case scenario.
The report also criticised the Treasury (again) for being slow to use its new powers to designate major AI and cloud service providers under the Critical Third Parties (CTP) regime. A number of providers have already said that they expect to be designated, but Economic Secretary Lucy Rigby responded only that the first designations are 'expected' sometime this year.
The Committee recommended that:
- the FCA must provide greater clarity on the application of existing rules, and by the end of 2026 should publish comprehensive, practical guidance for firms which sets out who within an organisation ought to be accountable for any harms caused through AI (including where third parties are involved in deployment and provision) and how consumer protection rules apply to firms' use of AI
- the FCA and BoE must conduct AI-specific stress testing
- by the end of 2026, the Treasury must designate the major AI and cloud providers under the CTP regime
- the FPC must monitor the progress of the CTP regime and, if necessary, use its powers to ensure swift implementation.
Industry opinion
Interestingly, and in places contradicting the evidence the Treasury Select Committee received, the BoE has published a summary of three recent roundtables it held with banks and insurers. These generally supported an approach of principles- and outcomes-based policies supplemented by supervisory statements, so that firms have space to innovate within clear regulatory guardrails. The firms' main concern is that third-party providers still fail to understand the compliance requirements of regulated firms, which can slow down negotiations. Internally, it is the second-line risk functions that tend to be the most cautious about AI, and respondents were unsure whether this was a good or a bad thing.
Practical takeaways
Clearly, the regulators must strike the right balance between allowing firms to innovate and protecting consumers and markets. Rigid regulation would undoubtedly stifle the former, given the fast-moving pace of AI use-cases. The government and regulatory mindset has also clearly moved away from protecting the consumer at all costs, towards proportionate regulation underpinned by the wide-ranging Consumer Duty obligations to deliver good outcomes.
A lack of regulatory guidance can make firms reluctant to innovate, for fear of going too far. At the moment, firms seem to feel they are getting enough guidance, particularly from the FCA, though they would welcome more regulatory input directed at the AI providers themselves. That leaves protecting the financial system and the markets, which is arguably harder to do without more prescription. Perhaps for the immediate future the answer lies in focussed and detailed stress testing of several scenarios, with the regulators ready to act quickly if they see real and increasing risks of harm.
This article is for general information only and reflects the position at the date of publication. It does not constitute legal advice.
