Europe’s financial industry is at an inflection point in its handling of Artificial Intelligence (AI). While classical Machine Learning models (ML models) have been used in risk modelling for years, Agentic AI – autonomous AI systems that plan, decide and act on their own – represents a qualitatively new category. According to an Accenture survey, 57 per cent of bank executives expect AI agents to be fully embedded in risk, compliance and audit functions within the next three years. The global market volume for AI in the banking sector is estimated at USD 45.6 billion for 2026 – up from USD 26.2 billion in 2024.

But the shift from assistive tools to autonomously acting systems is not a gradual step; it is a paradigm change. It raises questions that go far beyond technical implementation. Who is liable when an AI agent makes a faulty credit decision? How do you supervise a system whose decision logic adapts dynamically? And how can institutions prevent autonomous agents, interacting with each other, from generating systemic risks?

At a Glance

Technology: Agentic AI – autonomous AI systems with tool use, adaptive strategy and multi-agent coordination.

Market dynamics: 57 % of bank executives expect full integration into risk/compliance functions within three years; banking-sector AI market volume 2026: USD 45.6 billion.

Regulatory cornerstones: EU AI Act (high-risk obligations from 2 August 2026), DORA (applicable since 17 January 2025), BaFin guidance on AI (December 2025).

Governance maturity: Only 32 % of Chief Risk Officers (CROs) classified as ‘Risk Strategists’ – 93 % of AI spend flows into technology, 7 % into governance, people and training.

How Agentic AI Differs from Conventional AI

The term Agentic AI describes AI systems that go beyond pure pattern recognition and prediction. They can autonomously plan and execute complex, multi-step tasks – processing a credit application end-to-end, reviewing regulatory changes for action items, or rebalancing a portfolio in real time. Unlike conventional models that produce an output for a defined input, Agentic AI systems interact with their environment: they query external data sources, trigger downstream processes and adapt their strategy based on intermediate results.

McKinsey describes this transition as a “paradigm shift” in banking operations. The consultancy sees potential for net cost reductions of up to 20 per cent – but only with disciplined execution and appropriate governance. The very autonomy that makes Agentic AI powerful, however, also produces novel risks that established control mechanisms cannot fully address.

Agentic AI – Core Characteristics at a Glance

Autonomy: AI agents plan and execute multi-step tasks independently, without requiring human approval at every step.

Tool use: Agents access external Application Programming Interfaces (APIs), databases and systems to gather information and trigger actions.

Adaptive strategy: Decision logic adjusts dynamically based on intermediate results and feedback.

Multi-agent systems: Specialised agents coordinate on complex processes – for example credit review, compliance check and documentation in parallel.

Boundary: Whereas conventional ML models produce an output for a given input, Agentic AI systems operate in loops with environmental interaction.
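The boundary can be made concrete in a few lines. The following toy sketch contrasts a one-shot model with an agent that loops through plan, tool call and observation; all names and the numeric “goal” are invented for illustration, not a real agent framework.

```python
def one_shot_model(x):
    # Conventional ML: one defined input, one output, no environment interaction.
    return x * 2

def run_agent(goal, tools, max_steps=10):
    """Plan -> act via tool -> observe -> adapt, until the goal or the step budget is reached."""
    state = {"value": 0, "history": []}
    for _ in range(max_steps):
        # Adaptive strategy: choose the next tool based on intermediate results.
        tool = "increment" if state["value"] < goal else "stop"
        if tool == "stop":
            return state
        state["value"] = tools[tool](state["value"])  # tool use (stand-in for an API call)
        state["history"].append(state["value"])       # feedback for the next iteration
    return state  # budget exhausted: in practice, escalate to a human

tools = {"increment": lambda v: v + 3}
final = run_agent(goal=7, tools=tools)
print(final["value"])  # 9, reached over three loop iterations (0 -> 3 -> 6 -> 9)
```

The `max_steps` budget is the simplest form of bounded autonomy: without it, the loop itself becomes the “runaway agent” risk discussed later.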

Credit Risk: From Scoring Model to Autonomous Credit Decision-Maker

In credit risk management, a shift is emerging that goes well beyond the established use of ML models for creditworthiness assessment. Agentic AI systems can orchestrate the entire credit process: from automated data collection and document review to risk assessment, credit decision and contract generation. Institutions piloting such systems report credit approvals that are 25 to 40 per cent faster and a reduction in manual interventions of up to 80 per cent.

A typical use case: an AI agent receives a credit application, automatically extracts and validates the submitted documents, cross-checks the data with external sources – credit bureau, commercial register, account history – builds a risk profile and takes a decision within defined parameters. For standard cases this happens without human involvement. Only borderline cases or atypical combinations are escalated to a human analyst.
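The escalation logic in such a setup might, in a strongly simplified form, look as follows. The thresholds, field names and the width of the autonomous decision band are illustrative assumptions, not a production rule set.

```python
def decide_credit(application, approve_above=0.75, reject_below=0.40):
    """Decide autonomously within defined parameters; escalate the borderline band."""
    score = application["score"]  # assumes an upstream scoring step has already run
    if score >= approve_above:
        return {"decision": "approve", "by": "agent"}
    if score < reject_below:
        return {"decision": "reject", "by": "agent"}
    # Borderline case: outside the autonomous band, hand over with full context.
    return {"decision": "escalate", "by": "human_analyst", "context": application}

print(decide_credit({"score": 0.82}))  # agent approves autonomously
print(decide_credit({"score": 0.55}))  # escalated to a human analyst
```

Note that the approve/reject thresholds are exactly the parameters a misaligned agent could drift away from – which is why they belong in the audited configuration, not in the agent’s own control.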

The efficiency gains are significant – but they come at a price. In a recent analysis, Deloitte identifies six central risk dimensions for Agentic AI in banking; three are particularly relevant in the credit domain. “Misaligned goals” – agents optimised for speed may systematically assess credit applications too restrictively or too generously. “Opaque decisions” – decision chains running across multiple agents and data sources become hard for auditors to follow. And “runaway agents” – systems that, in unforeseen situations, execute unintended actions such as automatically rejecting an entire application category based on a misinterpreted signal.

“Because AI agents are designed to operate with a degree of autonomy, they can create risks that banks’ existing risk management frameworks do not fully address.” Deloitte Insights – Managing the new wave of risks from AI agents in banking, 2026

Market Risk: Real-Time Analysis with Systemic Dimension

In market risk, Agentic AI opens up the possibility of analysing portfolio risk in real time and – at a higher autonomy level – actively steering it. Agents can aggregate market data from multiple sources, run stress scenarios, update Value-at-Risk (VaR) calculations and, when limits are breached, automatically initiate hedge positions or propose risk reductions.

For trading desks this yields a speed advantage that can be decisive in volatile market phases. Rather than waiting for a daily risk calculation, an Agentic AI system updates the risk position continuously and reacts to deviations in minutes rather than hours.
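As a rough illustration, such a continuous limit check could recompute a simple historical-simulation VaR on every update and flag breaches for action. The P&L figures, the limit and the quantile method below are deliberately simplistic placeholders; real desks use far richer models.

```python
def historical_var(pnl_history, confidence=0.99):
    """VaR as the loss at the (1 - confidence) quantile of historical P&L."""
    losses = sorted(-p for p in pnl_history)   # express losses as positive numbers
    idx = int(confidence * len(losses)) - 1    # crude quantile index for the sketch
    return losses[max(idx, 0)]

def check_limit(pnl_history, var_limit):
    """Recompute VaR on each tick; propose a hedge when the limit is breached."""
    var = historical_var(pnl_history)
    if var > var_limit:
        return {"breach": True, "var": var, "action": "propose_hedge"}
    return {"breach": False, "var": var, "action": None}

pnl = [1.2, -0.8, 0.3, -2.5, 0.9, -1.1, 0.4, -0.2, 1.5, -3.0]
print(check_limit(pnl, var_limit=2.0))  # VaR of 2.5 breaches the limit of 2.0
```

Whether the “action” is a proposal to a human or an autonomous order is precisely the autonomy-level decision discussed in the recommendations below – and, as the next paragraph shows, the systemically dangerous variant is the autonomous one.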

But this is also where the systemic risk lies. Deloitte warns of the phenomenon of “unbounded execution”: portfolio-optimisation agents that run through thousands of scenarios in loops, consuming resources uncontrollably or – more seriously – autonomously executing trade orders whose aggregate volume is market-moving. When multiple institutions deploy similarly configured agents at the same time, the risk of pro-cyclical amplification emerges: all systems respond to the same signal with the same strategy – a mechanism already familiar from algorithmic trading but reaching a new level through autonomous agents.

Multi-Agent Risks in the Market Context

The interaction of multiple agents creates additional risk layers. Deloitte identifies four weaknesses in multi-agent systems: emergent misbehaviour from unpredictable agent interactions, coordination failure due to unclear boundaries of responsibility, feedback loops that produce deadlocks or endless cycles, and cascading failures where an initial error amplifies across networked agents. In market risk management, each of these weaknesses can cause financial losses that exceed the scope of any single model.

Operational Risk: Automation That Demands Control

In operational risk management, the most immediate value of Agentic AI lies in automating high-volume, rule-based processes. AI agents can monitor transaction flows, detect anomalies, prepare suspicious cases in Anti-Money Laundering (AML) and trigger escalation paths – at a throughput that is not achievable manually.

Oliver Wyman estimates the automation potential in compliance at up to 70 per cent of manual tasks, with a fourfold increase in detection accuracy. Institutions are already deploying agents that compile AML case files, extract and validate onboarding documents, run sanctions checks and escalate cases with full procedural context to human caseworkers.
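The triage step described above might, in strongly simplified form, look like this. The sanctions list, the anomaly threshold and the field names are stand-ins for real data feeds and are purely illustrative.

```python
SANCTIONED = {"ACME HOLDINGS"}  # stand-in for a real, continuously updated sanctions feed

def triage(tx, anomaly_threshold=0.8):
    """Collect hits from sanctions and anomaly checks; escalate hits with context."""
    hits = []
    if tx["counterparty"].upper() in SANCTIONED:
        hits.append("sanctions_match")
    if tx["anomaly_score"] >= anomaly_threshold:
        hits.append("anomalous_pattern")
    if hits:
        # Escalate with full procedural context for the human caseworker.
        return {"route": "human_caseworker", "reasons": hits, "tx": tx}
    return {"route": "auto_clear", "reasons": [], "tx": tx}

print(triage({"counterparty": "Acme Holdings", "anomaly_score": 0.3}))
```

The sketch also illustrates the misconfiguration risk raised in the next paragraph: shift `anomaly_threshold` too high, or let the sanctions feed go stale, and cases are systematically auto-cleared.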

The operational benefit is clear. Less clear are the new operational risks the technology itself introduces. A misconfigured agent that prioritises suspicious reports incorrectly can cause actual money-laundering cases to be systematically overlooked – with regulatory and criminal consequences. Dependency on external APIs and data sources creates new single points of failure. And the question of how the quality of autonomous decisions can be continuously monitored confronts institutions with operational challenges for which no established standards yet exist.

“Agentic AI transforms compliance teams from reactive case-handlers into strategic advisors – provided that human oversight remains continuous.” Oliver Wyman – Agentic AI Compliance: Reshaping Financial Institutions, February 2026

Compliance Monitoring: Towards Continuous Supervision

In compliance monitoring, perhaps the most fundamental shift is emerging. Traditionally, compliance departments work in cycles: regulatory changes are analysed, policies updated, training delivered, samples taken. Agentic AI enables the move to continuous, near-real-time supervision.

An Agentic AI system can ingest regulatory publications – from the European Banking Authority (EBA) via the Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, BaFin) to international standard-setters – automatically, assess them for relevance, identify affected business processes, derive actions and produce the required documentation. The step from reactive to proactive compliance becomes operationally feasible.
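A minimal sketch of the relevance-screening step, assuming a hand-made keyword map per business process; a real system would use a proper regulatory taxonomy and NLP classification rather than substring matching.

```python
# Illustrative keyword map: which business processes a publication may affect.
PROCESS_KEYWORDS = {
    "credit_decisioning": ["creditworthiness", "scoring", "lending"],
    "ict_risk": ["resilience", "incident reporting", "operational resilience"],
}

def screen(publication_text):
    """Tag an incoming publication with the business processes it may affect."""
    text = publication_text.lower()
    affected = [proc for proc, kws in PROCESS_KEYWORDS.items()
                if any(kw in text for kw in kws)]
    return {"relevant": bool(affected), "affected_processes": affected}

print(screen("EBA guidelines on incident reporting and resilience testing"))
```

Downstream steps – deriving actions and producing documentation – would hang off the `affected_processes` list, each with its own audit trail.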

This is particularly relevant in the context of the Digital Operational Resilience Act (DORA), applicable since January 2025, and the European Union Artificial Intelligence Act (EU AI Act), whose high-risk-system requirements fully apply from August 2026. Both regimes impose substantial documentation, testing and evidence obligations that will be difficult to handle without AI-assisted automation.

Regulatory Landscape: What Supervisors Expect

European financial supervision has Agentic AI on its radar – even though regulatory frameworks, as usual, lag behind technological progress. Three developments are particularly relevant for German financial institutions.

BaFin Guidance on AI and ICT Risks

In December 2025, BaFin published guidance on managing Information and Communication Technology (ICT) risks associated with the use of AI. The core message is unambiguous: AI systems must be understood, steered and controlled like any other ICT application – with additional consideration of AI-specific risk dimensions. BaFin expects institutions to develop an AI strategy aligned with the overarching risk strategy and the strategy for digital operational resilience. The entire lifecycle of AI systems – from development and testing through ongoing operation to decommissioning – must be documented and monitored.

EU AI Act and High-Risk Classification

From 2 August 2026, AI systems classified as high-risk must meet all requirements of the EU AI Act. For the financial sector this affects creditworthiness assessment, insurance risk assessment and automated fraud detection in particular. The sanctions are material: up to seven per cent of global annual turnover for breaches. A detailed compliance roadmap for banks is available in my EU AI Act article from April 2026.

EBA Activities on AI in the Banking Sector

In November 2025 the EBA published a factsheet on the AI Act’s implications for the European banking and payments sector. The authority notes that no material contradictions exist between the AI Act and existing banking regulation – the AI Act complements rather than replaces the existing framework. For 2026 and 2027 the EBA plans specific activities to support AI Act implementation, including fostering supervisory convergence and cooperation with the European Artificial Intelligence Office.

Regulatory Framework for AI in the Financial Sector – Status Q2 2026

EU AI Act: In force since August 2024. High-risk requirements fully applicable from 2 August 2026. Sanctions up to 7 % of global annual turnover.

DORA: Applicable since 17 January 2025. Obliges financial institutions to comprehensive ICT risk management that explicitly covers AI systems.

BaFin AI guidance: Published December 2025. Sets out supervisory expectations on AI governance, lifecycle management and risk steering.

EBA factsheet on AI Act: November 2025. Confirms complementarity between AI Act and banking regulation. Further guidelines for 2026–2027 announced.

Governance Gap: Why Existing Frameworks Fall Short

The figures are sobering. According to McKinsey, only about one third of organisations reach a maturity level of three or higher – on a five-point scale – in AI strategy, governance and specifically Agentic AI governance. An EY survey of Chief Risk Officers shows that only 32 per cent of firms are classified as ‘Risk Strategists’ – institutions that manage AI risks strategically and proactively. The rest sit in reactive or experimental stages.

The governance gap has structural roots. First, existing Model Risk Management (MRM) frameworks are built for deterministic or statistically grounded models – not for adaptive systems that change their decision logic dynamically. Second, established standards for validating multi-agent systems are missing. Third, there is a substantial talent shortage: 81 per cent of financial institutions report that they do not have enough specialised staff for AI governance.

93 % of AI-related spend flows into technology – only 7 % into people, training, change management and governance. Institutions are accelerating technology adoption without building the organisational preconditions for responsible use at the same pace. Finding from the Deloitte risk analysis, 2026

Recommendations: Five Actions for Financial Institutions

Deploying Agentic AI in risk management is no longer a question of whether, but of how – and how fast. Institutions that want to use the technology strategically without overlooking regulatory or operational risks should prioritise the following five actions.

1. Build an Agentic AI Risk Framework

Extend existing Model Risk Management policies with agent-specific risk dimensions – in particular autonomy levels, tool use, decision chaining and multi-agent interactions. The six risk classes identified by Deloitte provide a suitable taxonomy as a starting point. Do not start from scratch: MaRisk, EBA guidelines and DORA supply the foundation on which the AI-specific requirements can be built.
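One possible shape for agent-specific entries in an extended MRM inventory, covering the dimensions listed above. All field names and values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRiskEntry:
    """An MRM inventory record extended with agent-specific risk dimensions."""
    model_id: str
    autonomy_level: int                                  # tiered autonomy (see action 2)
    tools_used: list = field(default_factory=list)       # external APIs, databases
    chained_agents: list = field(default_factory=list)   # multi-agent interactions
    risk_class: str = "standard"                         # e.g. EU AI Act high-risk flag

entry = AgentRiskEntry(
    model_id="credit-agent-01",
    autonomy_level=2,
    tools_used=["credit_bureau_api", "commercial_register"],
    chained_agents=["compliance-agent-02"],
    risk_class="high_risk",
)
print(entry.risk_class)  # high_risk
```

The point of the extra fields is that tool dependencies and agent chains become queryable inventory data, not tribal knowledge.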

2. Define a Tiered Autonomy Model

Set a clear autonomy level for each use case – from purely assistive (human decision) through semi-autonomous (human approval for borderline cases) to fully autonomous (predefined parameters, post-hoc audit). The classification must be tied to the risk materiality and the regulatory classification. For high-risk systems under the EU AI Act, the threshold for full autonomy should be set particularly high.
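Such a tiered model can be encoded explicitly so that autonomy is clamped per risk class. The three tiers mirror the levels above; the mapping from risk class to ceiling is an illustrative assumption.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    ASSISTIVE = 1         # human takes every decision
    SEMI_AUTONOMOUS = 2   # human approval for borderline cases
    FULLY_AUTONOMOUS = 3  # predefined parameters, post-hoc audit

# Higher-risk use cases get a lower autonomy ceiling: in this sketch, EU AI Act
# high-risk systems are capped below full autonomy.
MAX_TIER = {
    "high_risk": Autonomy.SEMI_AUTONOMOUS,
    "standard": Autonomy.FULLY_AUTONOMOUS,
}

def permitted(requested: Autonomy, risk_class: str) -> Autonomy:
    """Clamp the requested autonomy level to the ceiling for the risk class."""
    return min(requested, MAX_TIER[risk_class])

print(permitted(Autonomy.FULLY_AUTONOMOUS, "high_risk").name)  # SEMI_AUTONOMOUS
```

Encoding the ceiling as data rather than convention makes it enforceable at deployment time and auditable afterwards.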

3. Ensure Audit-Ready Transparency

Log all decision paths, data accesses and agent interactions without gaps. BaFin guidance and the EU AI Act require traceable decision processes – a technically demanding but indispensable requirement for autonomous agents. Logging must be automated and retainable for 10 years. Manual log analysis does not scale.
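A minimal sketch of structured, append-only decision logging. Field names are assumptions; a real system would add tamper-evidence, retention management and secure storage on top.

```python
import datetime
import json

audit_log = []  # stand-in for an append-only, retained audit store

def log_step(agent_id, step_type, payload):
    """Append one timestamped, structured record per agent action."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "type": step_type,  # e.g. data_access, tool_call, decision
        "payload": payload,
    }
    audit_log.append(json.dumps(record, sort_keys=True))  # serialise for the trail
    return record

log_step("credit-agent-01", "data_access", {"source": "credit_bureau"})
log_step("credit-agent-01", "decision", {"outcome": "approve", "score": 0.82})
print(len(audit_log))  # 2
```

Because every data access and decision lands in the same structured trail, the automated analysis that replaces manual log review becomes a query, not an investigation.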

4. Rebalance the Investment Gap

Significantly increase the share of AI-related spend going to governance, specialised staff and training, up from today’s seven per cent. Build or source new roles – AI risk managers, agent auditors, automation specialists. The EY survey shows that ‘Risk Strategists’ are 48 per cent more likely to mitigate unexpected risks. Governance is not overhead; it is a competitive advantage.

5. Link Piloting with Regulatory Dialogue

Do not run Agentic AI pilots in isolation from supervision; engage supervisors early in the regulatory dialogue. With the EU AI Act’s high-risk requirements from August 2026, a proactive dialogue with BaFin and EBA is not optional – it is sound business practice. Institutions that seek supervisory conversation reduce the risk of after-the-fact interventions.

Timeline: From AI Act to Supervisory Practice

The following roadmap shows the regulatory cadence and the critical path for Agentic AI governance in the European financial sector.

August 2024
EU AI Act enters into force
Publication in the EU Official Journal. Staggered applicability over 24 months.
17 January 2025
DORA becomes applicable
Financial institutions must demonstrate comprehensive ICT risk management – including AI systems.
November 2025
EBA factsheet on AI Act and banking sector
Confirmation of complementarity. Announcement of further supervisory activities for 2026–2027.
December 2025
BaFin guidance on AI and ICT risks
Supervisory expectations concretised: AI strategy, lifecycle management, governance at board level.
Q1–Q2 2026
European Commission: high-risk classification guidelines
Clarification of which AI use cases in the financial sector are considered high-risk. Expected by May 2026.
2 August 2026 – Critical Deadline
EU AI Act: High-risk requirements fully applicable
Credit scoring, risk assessment and automated fraud detection must meet all regulatory requirements. Sanctions up to 7 % of global annual turnover.
2026–2027
EBA: Supervisory convergence and AI Office cooperation
Development of common supervisory approaches for AI in banking. Cooperation with the European AI Office to support implementation.
Conclusion

Agentic AI is neither hype nor niche in risk management. The technology shifts the boundary of what can be automated – and at the same time forces governance to evolve. Institutions that actively shape the shift use their existing MaRisk, EBA and DORA structures as the foundation and extend them with agent-specific controls. Those who wait for regulatory clarity lose not only operational ground but also risk being unprepared when the high-risk obligations are enforced in August 2026. The question is not whether banks will deploy Agentic AI – it is how fast they lift their control systems to the new autonomy level.

Christian Schablitzki

Strategy & Management Consultant · Agentic AI expert for financial institutions

More than 20 years in investment banking and derivatives trading, followed by more than 10 years advising financial institutions. Currently Partner at Infosys Consulting in Germany. Certified in Google AI, Generative AI Leader (Google Cloud) and IBM RAG and Agentic AI.
