Treasury urges regulators to address AI usage in financial services
- Simon Bourke
- 21st January 2026
- Updated: 26th January 2026
The Inquiry
On Tuesday the Treasury Committee published the report of its inquiry into artificial intelligence (AI) usage in financial services. The inquiry was launched in early February 2025 to examine the risks presented by the use of AI in the financial services sector.
Adoption outpacing regulation
AI adoption across UK financial services has moved quickly, especially in comparison to other sectors. According to the 2024 AI in UK financial services report by the Bank of England and the FCA, over 75% of UK financial services firms were using AI by November 2024. Despite this rapid uptake, the UK has no AI-specific financial regulation, and the FCA and Bank of England currently rely on existing, non-AI-specific policies to regulate AI usage.
Key risks
The Treasury Committee’s report notes that both the FCA’s Chief Data, Information and Intelligence Officer, Jessica Rusu, and the Bank of England’s Director for Central Bank Digital Currency, Tom Mutton, have said that the current framework allows for enough regulation to protect consumers of financial services from harm caused by AI usage.
However, the Committee sets out a number of risks around AI usage in the sector.
These include:
- A lack of transparency around how AI is being used to make decisions in areas like credit and insurance
- AI-driven decision making “threatening financial exclusion for the most disadvantaged consumers”
- Consumers being misinformed by unregulated advice given by AI tools like ChatGPT
- The risk of AI enabling an increase in fraud
- Concerns over increased rates of cyber attacks
- The sector’s over-reliance on a handful of major US providers for AI and cloud infrastructure
- The possibility of AI-driven financial advice encouraging herding behaviour among consumers
The FCA’s approach and critique
As covered in the report, according to Jessica Rusu, the current regulatory approach includes monitoring AI usage in financial services through surveys, complaints, social media, the supervision hub, and “sources of intelligence that let us know if something that is not okay is happening in a particular firm.” The FCA is also trialling schemes such as its AI live testing service, which launched in April 2025.
However, industry stakeholders told the Committee that the FCA’s reactive approach to regulating AI has left them unsure how to govern their AI usage.
Despite the FCA announcing in June last year that it would “create a joint statutory code of practice” for firms using AI, concerns have persisted over a lack of clarity, particularly over how the rules of the Senior Managers and Certification Regime apply to AI. This has, in turn, led to concerns about firms’ accountability for any harm caused to consumers through AI usage.
Cyber resilience and third-party concentration risk
On cyber security, the report also flags gaps in resilience testing. Though the FCA and Bank of England do conduct cyber stress testing, they do not currently conduct any AI-specific stress testing.
The Committee also highlights the sector’s dependence on AI and cloud services from a small number of providers. Parliament brought in the Critical Third Parties Regime in 2023 to address this, but in October 2025 the Economic Secretary to the Treasury (EST), Lucy Rigby KC MP, confirmed that no firms had yet been designated under the regime. When pressed again in November 2025, the EST said the first designations were likely to be made “within the next 12 months.”
Conclusion and call to action
In its conclusion, the report argues that while AI does present opportunities to help consumers, financial services firms and regulators must work together to ensure that AI usage genuinely benefits consumers in this sector.
It concludes that the FCA, the Bank of England and HM Treasury are not currently doing enough to prevent potential harm caused by AI usage in the sector, and that they will need to clarify how current regulatory rules apply to AI. Specifically, the report calls for clear, practical and prescriptive regulation for financial advice firms on acceptable AI usage by the end of 2026.
Considering your next chapter?
At Chapters Capital, we specialise in financial planning and wealth management M&A.
Whether you are considering a sale, merger, or want to learn more about buyers in the space, please contact one of our professional associates today for a confidential, no-obligation consultation.
To stay updated with all the relevant news in one place, sign up for our fortnightly newsletter, The Foreword.