On 2 August 2026, the EU AI Act becomes fully enforceable for high-risk AI systems. For banks, insurers and asset managers, this means credit scoring, algorithmic credit decisions and AI-driven risk assessments must comply with strict documentation, transparency, data quality and human oversight requirements by that date. Fewer than five months remain. This article provides a practice-oriented roadmap – from AI inventory to conformity assessment and integration into existing governance structures.
Deadline: 2 August 2026 – full enforceability of high-risk obligations (Annex III)
Affected: Credit scoring, creditworthiness assessment, insurance pricing, AML screening, algorithmic trading decisions
Penalties: Up to EUR 35m / 7% (prohibited practices), up to EUR 15m / 3% (high-risk breaches)
Supervision DE: Federal Network Agency (BNetzA) as central market surveillance authority, BaFin for the financial sector
Legal basis: Regulation (EU) 2024/1689, KI-MIG (German cabinet decision 11.02.2026)
Where We Stand: 20 Months After Entry into Force
The EU AI Act entered into force on 1 August 2024 and is taking effect in stages: the bans on prohibited AI practices (social scoring, manipulative systems) have applied since February 2025, as has the AI literacy obligation for staff, and the rules for General Purpose AI (GPAI) since August 2025. What remains outstanding is the central milestone: full enforceability of the high-risk obligations on 2 August 2026.
Implementation progress across the German financial sector is mixed. Large banks have largely completed the classification of their AI systems and built initial governance structures. Second- and third-tier institutions – savings banks, cooperative banks, mid-sized insurers – are still grappling with fundamental questions: Which of our systems actually constitute AI within the meaning of the regulation? Which qualify as high-risk? And who is responsible internally?
Compounding the challenge, the harmonised standards from the European standardisation organisations CEN and CENELEC, intended to serve as guidance for conformity assessments, have not yet been finalised. Organisations must therefore build compliance structures before the complete technical reference framework is in place – a situation reminiscent of the early DORA implementation phases.
The Digital Omnibus: Reprieve or False Security?
One factor shaping debate in compliance departments is the so-called Digital Omnibus on AI. In late 2025, the European Commission proposed a legislative package that, among other things, adjusts enforcement timelines for high-risk AI systems. The core provision: Annex III systems – standalone high-risk AI such as credit scoring, HR screening or biometric identification – receive an absolute backstop date of 2 December 2027. Actual enforcement begins earlier, however, once the Commission confirms that suitable harmonised standards or common specifications are available. From that point, a six-month transition period commences.
The legislative process is advancing: on 18 March 2026, the lead committees IMCO and LIBE of the European Parliament adopted their joint position by 101 votes to 9. Trilogue negotiations between Parliament, Council and Commission have commenced under the Cypriot Council presidency, targeting a final text by May 2026.
The strategic assessment is unequivocal: even if the Digital Omnibus is adopted as planned, it postpones only the enforcement date, not the requirements. Conformity obligations remain identical. Banks that establish their compliance programmes now gain an advantage – regardless of whether the deadline reads August 2026 or December 2027. Those who wait for the Omnibus and start late assume considerable risk: DORA implementation experience demonstrates that regulatory transformation projects in financial services typically require 12 to 18 months.
What Has Changed Since February 2026: BaFin and KI-MIG
Two developments since the foundational article of February 2026 have further shaped the regulatory landscape.
BaFin Guidance on ICT Risks in AI
On 18 December 2025, BaFin published its guidance on ICT risks when deploying AI systems in financial companies. The document is formally non-binding but unmistakably signals supervisory expectations. The core message: BaFin situates AI deployment squarely within the existing ICT risk management framework under DORA, examining AI systems across their entire lifecycle – from data acquisition and model development through implementation to ongoing operations and eventual decommissioning.
Two aspects are particularly relevant for banks: first, AI systems must be systematically embedded in existing processes for risk identification, prevention, monitoring, and response and recovery. Second, the guidance explicitly addresses third-party management – an area where DORA and AI Act requirements overlap and reinforce one another.
KI-MIG: Germany’s Implementation Act
On 11 February 2026, the German cabinet adopted the KI-Marktüberwachungs- und Innovationsförderungsgesetz (KI-MIG) – Germany’s implementation act for the AI Act. The key provisions: the Federal Network Agency (Bundesnetzagentur, BNetzA) becomes the central market surveillance authority and EU contact point. A Coordination and Competence Centre for the AI Regulation (KoKIVO) is established at BNetzA to support all involved authorities in ensuring consistent legal interpretation.
For the financial sector, BaFin remains the responsible market surveillance authority – a deliberate approach that builds on existing sectoral supervisory structures rather than creating an entirely new AI-specific regulator. CRR credit institutions deploying their own AI credit scoring are therefore supervised by BaFin, not BNetzA. This hybrid supervisory approach benefits from BaFin's deep expertise in examining risk models and IT systems; the authority merely needs to extend this competence to the specific AI Act requirements.
The Penalty Regime: Three Tiers, Graduated Severity
The AI Act establishes a three-tier penalty regime whose severity rivals that of the GDPR (General Data Protection Regulation) – and whose maximum amounts actually exceed it.
| Violation Category | Maximum Fine (whichever is higher) | Relevance for Banks |
|---|---|---|
| Prohibited AI Practices (Art. 5) | EUR 35m or 7% of worldwide annual turnover | Social scoring, manipulative systems – effective since February 2025 |
| High-Risk Obligations (Art. 9–19, 26–27) | EUR 15m or 3% of worldwide annual turnover | Credit scoring, AML, algorithmic credit decisions, insurance pricing |
| Other Breaches | EUR 7.5m or 1% of worldwide annual turnover | Incorrect or misleading information, documentation deficiencies |
For a large German bank with EUR 20 billion in annual revenue, 3% represents a risk of EUR 600 million – a figure that inevitably places the topic on the board agenda. Moreover, penalties are cumulative: an institution that simultaneously breaches multiple obligations may face multiple fines.
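Article 99 applies the higher of the fixed amount and the turnover-based percentage to undertakings, so the exposure calculation reduces to a single line. A minimal sketch in Python (function name and output formatting are illustrative):

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of an AI Act fine for an undertaking: the higher of the
    fixed cap and the turnover-based cap (Art. 99 AI Act)."""
    return max(fixed_cap_eur, pct * turnover_eur)

# High-risk breach tier (EUR 15m / 3%) for a bank with EUR 20bn annual turnover:
print(f"{max_fine_eur(20e9, 15e6, 0.03):,.0f}")  # 600,000,000 -> EUR 600m
```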
High-Risk AI in Financial Services: Where Requirements Apply
The AI Act follows a risk-based classification model. For financial institutions, the high-risk classifications under Annex III are most relevant. The following overview shows which typical banking applications are affected and the resulting requirements.
Directly Classified as High-Risk (Annex III, Point 5b)
AI systems intended for evaluating the creditworthiness of natural persons or establishing their credit score explicitly fall under the high-risk category. Equally captured are AI systems for risk assessment and pricing for natural persons in life and health insurance. These systems must satisfy the full compliance spine of the AI Act: Articles 9 to 19 for providers and Articles 26 and 27 for deployers.
In practice, this means the following for an AI-based credit scoring system:
Risk Management System (Art. 9): Continuous identification, analysis and mitigation of risks throughout the entire lifecycle
Data Governance (Art. 10): Relevance, representativeness, accuracy and completeness of training and validation data
Technical Documentation (Art. 11, Annex IV): System description, design specifications, monitoring concepts – 10-year retention obligation
Record-Keeping (Art. 12): Automatic logging of relevant events during operation (see the sketch after this list)
Transparency (Art. 13): Instructions for use with information on performance, limitations and intended use
Human Oversight (Art. 14): Design enabling effective human supervision, ability to intervene and override
Accuracy, Robustness, Cybersecurity (Art. 15): Appropriate performance levels, resilience against errors and attacks
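To make the record-keeping and oversight items concrete, the following is a minimal sketch of an append-only event log with a human-override field. It is illustrative only: the ScoringEvent fields, the JSON Lines format and the override convention are assumptions, not requirements prescribed by Articles 12 and 14.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ScoringEvent:
    """One loggable event in the life of a credit scoring decision."""
    timestamp: float
    applicant_ref: str          # pseudonymised reference, no raw personal data
    model_version: str
    score: float
    decision: str               # e.g. "approve", "reject", "refer_to_human"
    overridden_by: Optional[str] = None  # set when a human overrides (Art. 14)

def log_event(event: ScoringEvent, path: str = "scoring_events.jsonl") -> None:
    """Append the event to a JSON Lines log (Art. 12 record-keeping).
    A production system would use tamper-evident, access-controlled storage."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# A borderline case is referred to a human analyst, who overrides the model:
event = ScoringEvent(time.time(), "appl-4711", "score-v2.3", 0.52, "refer_to_human")
log_event(event)
event.decision, event.overridden_by = "approve", "analyst-17"
log_event(event)  # the override itself is logged as a second event
```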
Grey Area: Algorithmic Trading and AML Screening
Not all AI applications in financial services are clearly categorisable. Algorithmic trading systems that do not directly assess the creditworthiness of natural persons do not automatically fall under Annex III – but could be classified as high-risk via the risk assessment route or through sectoral regulation. Similarly, AML (Anti-Money Laundering) screening systems that evaluate natural persons potentially fall within scope.
The pragmatic recommendation for banks: when in doubt, assume the stricter category. The cost of temporary over-compliance is negligible compared to the penalties for under-compliance. Moreover, existing MaRisk and DORA requirements provide a robust foundation upon which AI Act-specific obligations can be layered with manageable effort.
The Compliance Roadmap: Six Steps in Five Months
The following roadmap reflects both the regulatory requirements and the practice of institutions already well advanced in implementation. The steps build on one another but can be partially parallelised.
Phase 1: AI Inventory and Classification (April–May 2026)
The first and most fundamental step: a complete inventory of all AI systems within the institution. This sounds trivial but proves challenging in practice. Many banks lack a consolidated view of where AI is deployed – particularly in purchased software components, cloud services and third-party solutions, which frequently contain AI modules overlooked during internal stocktaking.
The inventory should capture the following for each system: name and description of the AI system, purpose and business area, responsible team and contact, risk category under the AI Act, classification as provider or deployer, data sources and data outputs used, operating environment (cloud, on-premises, hybrid) and conformity status. The output of this phase is an AI register – comparable to the DORA information register but specifically tailored to AI systems.
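As a starting point, one register entry can be expressed as a simple data structure whose fields mirror the list above. The sketch below is illustrative: field names, enumerations and the example values are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"

@dataclass
class AIRegisterEntry:
    """One entry in the institution's AI register (illustrative schema)."""
    name: str
    description: str
    purpose: str
    business_area: str
    responsible_team: str
    contact: str
    risk_category: RiskCategory
    role: str                                # "provider" or "deployer"
    data_sources: list[str] = field(default_factory=list)
    data_outputs: list[str] = field(default_factory=list)
    operating_environment: str = "cloud"     # "cloud" | "on_premises" | "hybrid"
    conformity_status: str = "not_assessed"

# Hypothetical example entry:
entry = AIRegisterEntry(
    name="Retail Credit Scoring",
    description="Gradient-boosting model scoring consumer loan applications",
    purpose="Creditworthiness assessment of natural persons",
    business_area="Retail Lending",
    responsible_team="Credit Risk Analytics",
    contact="model-risk@example-bank.de",
    risk_category=RiskCategory.HIGH_RISK,
    role="deployer",
)
```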
Phase 2: Gap Analysis Against AI Act Requirements (May 2026)
Based on the AI register, a systematic gap analysis follows: where does the institution already meet requirements (for instance through existing MaRisk model validation or DORA-compliant documentation), and where do gaps exist? Banks already operating robust model risk management in line with EBA (European Banking Authority) guidelines and the Principles for Effective Risk Data Aggregation (BCBS 239) will find that a substantial portion of AI Act requirements is already covered. Lifecycle-based risk management, human oversight concepts and technical documentation can largely be built on existing structures.
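One pragmatic way to structure the delta analysis is a mapping from AI Act articles to the existing controls that already cover them, with empty entries marking gaps. The assignments below are invented examples; the real mapping is institution-specific and needs expert review.

```python
# Illustrative coverage mapping for a single AI system. Empty lists
# mark the AI Act's delta requirements that existing frameworks
# (MaRisk, DORA, BCBS 239) do not yet address.
EXISTING_COVERAGE = {
    "Art. 9 risk management":      ["MaRisk model risk management"],
    "Art. 10 data governance":     ["BCBS 239 data quality standards"],
    "Art. 11 technical docs":      ["MaRisk model documentation"],
    "Art. 12 record-keeping":      [],  # typical gap: automated event logging
    "Art. 13 transparency":        [],  # typical gap: instructions for use
    "Art. 14 human oversight":     [],  # typical gap: explicit oversight concept
    "Art. 15 robustness/security": ["DORA ICT risk management"],
}

gaps = [article for article, controls in EXISTING_COVERAGE.items() if not controls]
print("Delta requirements:", gaps)
```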
Phase 3: Establishing Governance Structures (May–June 2026)
AI governance is not an IT task – it belongs on the board agenda. Institutions must establish a central, independent AI governance function serving as a single point of accountability for all AI-related compliance matters. This function must be integrated into the existing Internal Control System (ICS), not stand alongside it as a parallel structure.
Specifically, this requires: clear role definitions (who approves deployment of new AI systems, who is responsible for ongoing monitoring), escalation paths (when must the board be informed), reporting lines (regular reporting to audit and compliance) and integration into committee structures (AI as a standing agenda item in risk or technology committees).
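Such rules can also be captured in machine-readable form so that workflow tooling enforces them consistently. A rough sketch, assuming approvals are routed by risk category (role and committee names are placeholders, not regulatory terms):

```python
# Illustrative approval and escalation rules keyed by AI Act risk category.
APPROVAL_RULES = {
    "high_risk":    {"approver": "AI governance function",
                     "inform_board": True,
                     "committee": "risk committee"},
    "limited_risk": {"approver": "business line head",
                     "inform_board": False,
                     "committee": "technology committee"},
    "minimal_risk": {"approver": "system owner",
                     "inform_board": False,
                     "committee": None},
}

def approval_path(risk_category: str) -> dict:
    """Return who approves deployment and whether the board is informed."""
    return APPROVAL_RULES[risk_category]

print(approval_path("high_risk"))
```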
Phase 4: Technical Documentation and Conformity Assessment (June–July 2026)
For each AI system classified as high-risk, comprehensive technical documentation pursuant to Annex IV must be prepared. This encompasses a general system description, detailed information on system architecture, information about training, validation and test datasets, and the monitoring and control concept.
The conformity assessment itself can be conducted as a self-assessment under Annex VI for most financial sector applications – a notified body is required only for biometric identification and critical infrastructure. This does not mean, however, that the self-assessment may be less rigorous: BaFin will scrutinise the quality and completeness of conformity assessments during its examinations.
Phase 5: Third-Party Management and Cloud Compliance (Parallel)
BaFin’s December 2025 guidance makes clear: AI risks from third-party relationships must be systematically integrated into existing Third Party Risk Management (TPRM). This particularly concerns cloud-based AI services from major hyperscalers. Contracts must be supplemented with AI Act-compliant clauses: documentation obligations, audit and access rights, notification duties for material changes to the AI system, and exit strategies.
Requirements from DORA and the AI Act overlap significantly in this area. Institutions that have already established DORA-compliant third-party management can integrate AI-specific requirements as an extension – rather than running a separate compliance track.
Phase 6: Training, Registration and Go-Live (July–August 2026)
The AI literacy obligation under Article 4 has formally applied since February 2025. Practice shows, however, that many institutions have yet to implement a comprehensive training plan. The remaining months must be used to conduct role-specific training for developers, operators, compliance functions and decision-makers. Finally, high-risk AI systems must be registered in the EU database – a step that must be completed before deployment or, for existing systems, before 2 August 2026.
Integration into Existing Regulation: The AI Act as an ICS Extension
Perhaps the most important strategic insight for banks is this: the AI Act is less a radical break than an AI-specific extension of the existing Internal Control System. Those who treat the AI Act as an isolated regulatory project waste resources and create inefficient parallel structures. The more efficient path is to embed AI Act controls into existing governance frameworks.
Connectivity is high: MaRisk provides the foundation for model risk management and governance. EBA guidelines (EGIM) already define validation and committee structures. DORA establishes the framework for ICT risk management and third-party oversight. BCBS 239 sets standards for data quality and aggregation. The AI Act adds to this framework the specific requirements for high-risk AI: lifecycle-based risk management, explicit human oversight concepts, extended technical documentation and automated logging.
Recommendations: What Banks Must Prioritise Now
The following six action areas are of immediate priority for financial institutions. They reflect the roadmap and prioritise measures by urgency and impact.
1. Capture, classify and map all AI systems to the risk categories of the AI Act. Critical: also identify AI components in third-party solutions and cloud services. Automated discovery processes via IT asset management can help uncover blind spots. Any institution without a consolidated AI register by end of May will not finish in time.
2. Do not start from scratch: use MaRisk model validation, DORA documentation and EBA guidelines as the baseline. The gap analysis should specifically identify the AI Act's delta requirements – not rebuild the entire control system. Typical gaps: explicit human oversight concepts, lifecycle-based risk documentation, automated event logging.
3. Establish a central AI governance function as a single point of accountability and integrate it into the existing ICS. AI is not an IT task. The board must be informed, roles and escalation paths must be defined, and AI risks must appear as a standing agenda item in risk or technology committees.
4. Credit scoring, creditworthiness assessment and insurance pricing carry the highest priority. Technical documentation under Annex IV must be completed by June 2026 to allow time for the conformity assessment (self-assessment under Annex VI) and any remediation. Plan for the 10-year retention obligation.
5. Integrate AI risks into TPRM, particularly for cloud-based AI services. Extend contracts with documentation obligations, audit rights and notification duties. Leverage the overlap with DORA requirements – institutions that have already built DORA-compliant TPRM can efficiently integrate AI-specific extensions.
6. The AI literacy obligation (Art. 4) has applied since February 2025 – it is non-negotiable and unaffected by the Digital Omnibus. Conduct role-specific training for developers, operators, compliance and management. BaFin will specifically request evidence during special examinations.
Timeline: The Critical Path to August 2026
[Timeline graphic: implementation status and remaining milestones on the path to full AI Act compliance.]
Five months is little time in the context of a regulatory transformation project – but it suffices if institutions act decisively now. The AI Act is not an isolated compliance project but a logical extension of the existing regulatory architecture. Those who have understood the DORA lesson know: start early, leverage existing structures, do not speculate on deadline extensions. The Digital Omnibus may shift the deadline – but supervisory and market expectations do not shift.