From Digital Transformation to Agentic AI Governance
A governance and operational resilience view for financial services
The rise of agentic AI in finance operations
Financial services has spent years industrialising decisions. Credit, underwriting, fraud triage, affordability checks, complaints handling, KYC (Know Your Customer) refresh, collections, and customer service routing already run on workflows, rules, and scoring. What has changed is that AI is no longer confined to narrow, well-boxed tasks. Supervisory work indicates AI is now widely deployed across UK financial services and is expected to expand further as firms add use cases and increase the criticality of existing ones.
Agentic AI accelerates this shift because it is designed to progress work, not just generate text. In operations terms, an agent can assemble evidence, query internal systems, apply policy logic, request missing information, draft customer communications, and then hand off for approval where required. In lending and underwriting, the early pattern is often automation around the decision rather than fully autonomous lending. The agent pulls the case together, flags exceptions, and routes work to humans when judgement or policy exceptions are involved.
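To make this concrete, the sketch below shows one way the routing step in that pattern might look. It is purely illustrative: the case fields, flags, and queue names are hypothetical, not a description of any particular firm's platform.

```python
from dataclasses import dataclass, field

@dataclass
class CaseFile:
    """Evidence the agent has assembled for one application (hypothetical fields)."""
    applicant_id: str
    documents_complete: bool
    policy_flags: list[str] = field(default_factory=list)

def route_case(case: CaseFile) -> str:
    """Automation around the decision: the agent decides where the work
    goes next, never the lending outcome itself."""
    if not case.documents_complete:
        return "request_missing_information"  # the agent chases evidence itself
    if case.policy_flags:
        return "escalate_to_underwriter"      # judgement or policy exception: a human decides
    return "queue_for_human_approval"         # even a clean case ends at an approval step

# A flagged case is routed to a person, not decided by the agent.
case = CaseFile("APP-1042", documents_complete=True,
                policy_flags=["affordability_exception"])
print(route_case(case))  # -> escalate_to_underwriter
```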
The attraction is obvious: reduced cycle time, higher throughput, and better use of scarce underwriting expertise. The risk is just as obvious once you name it: if the system is wrong, it can be wrong efficiently and repeatedly. Credit and underwriting outcomes are not just operational metrics; they shape customer lives, portfolio quality, and regulatory attention. Industry reporting and supervisory interest reflect this, with firms exploring agentic trials and regulators watching closely because autonomy and speed alter the risk profile.
Why explainability, trust, and accountability will separate winners from liabilities
In financial services, explainability is often treated as a technical property of a model. In reality, the bar is wider. For credit and underwriting, firms need explanations that work at three layers at once: what the customer can be told clearly and fairly, what the firm can justify internally to risk and audit, and what can stand up externally in supervisory engagement and dispute resolution.
Agentic AI makes this harder because it creates decision journeys rather than single decision points. An outcome may be shaped by tool calls, data retrieval, intermediate summaries, hand-offs, and exception logic. If controls and logs were designed for a single model artefact, firms can find themselves unable to reconstruct the path that produced an outcome. That is where liability lives: complaints that cannot be answered properly, adverse outcomes that cannot be isolated quickly, and incidents that cannot be investigated fast enough to stop repeat harm.
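One practical answer is to log the journey, not just the outcome: every tool call, retrieval, summary, and hand-off recorded against a single case identifier so the path can be replayed on demand. A minimal sketch, with hypothetical field names, might look like this:

```python
import json
import time
import uuid

def log_step(journey_id: str, actor: str, action: str, detail: dict) -> None:
    """Append one step of the decision journey to an append-only record.
    In production this would go to tamper-evident storage; stdout is a stand-in."""
    record = {
        "journey_id": journey_id,   # one ID ties every step of the case together
        "step_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor": actor,             # which agent, tool, or human acted
        "action": action,
        "detail": detail,
    }
    print(json.dumps(record))

# A reconstructable journey: retrieval, intermediate summary, then hand-off.
jid = str(uuid.uuid4())
log_step(jid, "agent", "tool_call", {"tool": "credit_bureau_lookup", "status": "ok"})
log_step(jid, "agent", "summary", {"text": "Two missed payments in last 12 months"})
log_step(jid, "agent", "handoff", {"to": "underwriter_queue", "reason": "policy_exception"})
```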
Trust is not a slogan in finance; it is an operating requirement. The Consumer Duty has entrenched expectations around delivering good outcomes, which means experimentation cannot be allowed to create avoidable harm in live customer journeys. Agentic AI can support better outcomes when it reduces processing errors, improves consistency, and surfaces vulnerability indicators earlier. It can also damage outcomes if it over-automates edge cases, introduces new bias patterns, or nudges customers into choices they did not intend and cannot easily undo.
Accountability is the sharpest separator. UK financial services already has a mature accountability posture through the Senior Managers and Certification Regime (SM&CR). Agentic AI does not dilute that; it stress-tests it. If an agent materially shapes decisions or customer communications, firms need to show clear ownership, clear control boundaries, and demonstrable oversight. The firms that do this well will scale agentic capability with confidence. The firms that do it poorly will discover, mid-incident, that they cannot evidence who owned the risk or why the system behaved as it did.
Regulatory readiness: how frameworks must evolve to support intelligent systems
The regulatory landscape is now a timetable rather than a theory. For firms with EU exposure, the Digital Operational Resilience Act has been applicable since 17 January 2025 and sets expectations for ICT risk management, resilience testing, incident handling, and oversight of third-party dependencies. Agentic AI is directly relevant because it is dependency-rich by nature. It typically relies on cloud platforms, external model services, orchestration layers, monitoring tooling, and data pipelines. Resilience therefore includes safe degradation, rapid containment, and recovery that preserves evidence.
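In engineering terms, safe degradation around a third-party dependency often takes the shape of a circuit breaker: after repeated failures, the system stops calling the dependency and routes work to a manual fallback rather than automating on degraded inputs. The sketch below is one illustrative way to express that, with hypothetical names throughout:

```python
class CircuitBreaker:
    """Trip after repeated failures of a third-party dependency, forcing
    the workflow onto a manual fallback instead of degraded automation."""

    def __init__(self, threshold: int = 3):
        self.failures = 0
        self.threshold = threshold

    def call(self, fn, *args):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: route case to manual processing")
        try:
            result = fn(*args)
            self.failures = 0   # a healthy call resets the count
            return result
        except Exception:
            self.failures += 1  # the failure itself is evidence; preserve it upstream
            raise

def bureau_lookup(applicant_id: str) -> dict:
    """Stand-in for an external data dependency (hypothetical)."""
    raise TimeoutError("bureau unavailable")

breaker = CircuitBreaker(threshold=3)
for _ in range(3):
    try:
        breaker.call(bureau_lookup, "APP-1042")
    except TimeoutError:
        pass
# The next call raises "circuit open" and the case goes to manual processing.
```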
AI-specific regulation is also moving quickly. The EU AI Act entered into force on 1 August 2024 and applies in stages: the first obligations, including the prohibited-practice rules, took effect in February 2025, most provisions apply from August 2026, and some requirements carry longer transition periods beyond that. Even UK-based firms can feel its pull through group structures, services offered into the EU, and suppliers that will align to EU obligations as a baseline.
In the UK, regulators are leaning towards practical evaluation alongside rising expectations on evidence. The FCA’s AI Live Testing initiative is intended to support firms that are ready to deploy consumer- or market-facing AI by exploring how systems can be tested and evaluated before wide rollout. This approach is pragmatic, but it also raises the evidential standard. Firms are increasingly expected to demonstrate how they tested for customer impact, model behaviour, and operational control, not merely that a policy exists.
Frameworks also need to evolve beyond traditional model risk management. The Prudential Regulation Authority's (PRA) expectations, set out in supervisory statement SS1/23, position model risk management as a discipline that requires board-level attention and proportionate governance. Agentic systems push that discipline up the stack. You are not just validating a model; you are governing an operational system made of models, prompts, policies, tools, workflows, human approvals, and third-party services. The control questions expand accordingly: who authorised what the agent is allowed to do, how permissions are constrained, how escalations work, how drift is detected, how you stop it safely, and how you prove afterwards what happened and why.
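Those questions can be made tangible as configuration. The sketch below shows one hypothetical way to express an agent's mandate as an explicit, auditable artefact: named ownership, deny-by-default tool permissions, an escalation threshold, and a kill switch. The field names and limits are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentMandate:
    """A human-authorised boundary for one agent (illustrative fields only)."""
    owner: str                     # the accountable senior manager
    allowed_tools: frozenset[str]  # deny by default: anything unlisted is refused
    max_exposure_gbp: int          # above this, the agent must escalate to a human
    halted: bool                   # operations kill switch

MANDATE = AgentMandate(
    owner="Head of Retail Credit",
    allowed_tools=frozenset({"credit_bureau_lookup", "draft_customer_letter"}),
    max_exposure_gbp=25_000,
    halted=False,
)

def authorise(tool: str, exposure_gbp: int) -> bool:
    """Deny-by-default check run before every agent action."""
    if MANDATE.halted:
        raise RuntimeError("agent stopped by operations")
    return tool in MANDATE.allowed_tools and exposure_gbp <= MANDATE.max_exposure_gbp

print(authorise("credit_bureau_lookup", 10_000))  # True
print(authorise("approve_loan", 10_000))          # False: outside the mandate
```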
Agentic AI can be a genuine competitive advantage in credit and underwriting, but it will not reward the firms with the flashiest demonstrations. It will reward the firms that can scale autonomy while keeping outcomes defensible. That means designing explainability as a system property, treating trust as something you measure and monitor, and embedding accountability into the operating model so that when the agent acts, the firm can still show clear human ownership, clear boundaries, and clear evidence that the system stayed inside the lines, even on a bad day.
Ellie Hurst, Commercial Director
_____________________________________________________________________________________