AI adoption in financial services is accelerating rapidly, and firms are moving well beyond experimentation. At a recent Digital Regulation Cooperation Forum (DRCF) roundtable, industry participants and the Financial Conduct Authority (FCA) discussed how AI is being deployed in practice, where risks are emerging, and how firms should navigate innovation within the existing regulatory framework.
The discussion surfaced a clear message: firms should not wait for bespoke AI regulation, but they do need to apply existing regulatory principles thoughtfully as AI becomes embedded in core activities. What follows is a summary of the points discussed; it does not necessarily reflect the views of Simmons & Simmons.
From efficiency gains to structural change
Early AI use cases focused on chatbots and isolated automation. The most significant recent shift has been the use of generative AI in software development, dramatically accelerating coding and enabling non-technical teams to contribute more directly to product design.
This creates a genuine opportunity to rethink operating models, not just optimise individual processes. While fintechs retain structural advantages from being cloud-native and data-centric, incumbents are closing the gap by combining scale, deep data sets and improved engineering capability with emerging AI tools, including early agentic systems.
The result is a more level competitive landscape, where governance and execution are becoming as important as speed of innovation.
The FCA’s message: existing rules already matter
The FCA’s approach to AI is deliberately pragmatic. The regulator sees widespread AI use today in back-office efficiency, customer service and complaints handling, with particular interest in high-touch intermediary areas such as wealth management, broking and advice-based services.
Importantly, the FCA does not view AI as requiring a separate regulatory regime. Existing financial services regulation is seen as a helpful, principles-based framework for AI deployment, in the same way it governs firms’ other activities. The focus remains on fairness, governance and good consumer outcomes, rather than on prescribing how AI systems must be built.
For many efficiency-driven use cases, firms can proceed under the existing framework without additional regulatory engagement. Where applications are novel, complex or higher risk, particularly in advice contexts, early engagement with the FCA is encouraged.
Data, agentic AI and control
As firms experiment with more advanced and agentic AI systems, data quality and governance remain fundamental. While modern AI tools can access and work across dispersed data sources without full centralisation, they do not remove obligations around data quality, security or accountability. Poor data will still lead to poor outcomes.
From an engineering perspective, firms should avoid building systems they cannot understand or explain. Oversight mechanisms, testing, monitoring and traceability need to be embedded from the outset, particularly as systems become more autonomous.
The risk of waiting
A recurring theme from the discussion was the risk of moving too slowly. If regulated firms delay deployment in pursuit of perfection, they risk leaving consumers to turn to tools outside the regulatory perimeter, with limited accountability or protection.
In this context, financial services regulation provides an anchor rather than a barrier. Its principles-based requirements offer a structure within which firms can innovate responsibly, rather than a reason to pause.
A staged path to advice
One practical way forward is a staged approach. There are significant opportunities to deploy AI around the advice process, such as fact-finding, analysis, scenario modelling and post-advice support, without the AI itself providing regulated advice. These lower-risk use cases allow firms to build experience, controls and regulatory confidence incrementally.
Key takeaways for firms
- Do not wait for AI-specific regulation. Existing FCA principles already provide a workable framework for AI deployment.
- Focus on outcomes, not labels. Regulatory scrutiny will centre on fairness, governance and consumer outcomes, not whether a system is described as generative or agentic.
- Start around advice, not with advice. Advice-adjacent use cases offer a lower-risk path to value.
- Data and governance remain foundational. AI does not remove obligations around data quality, security or accountability.
- Embed control by design. Control, oversight and traceability are essential as AI becomes more autonomous.