The EU AI Act classifies numerous AI applications in Capital Markets and Asset Management as high-risk systems. From August 2026, stringent documentation, transparency and monitoring obligations will apply. German banks face a regulatory balancing act between innovation and compliance – with far-reaching consequences for business models, IT infrastructure and competitiveness.
The AI era under regulatory oversight
The financial industry is undergoing a technological transformation unprecedented in both speed and scope. Artificial intelligence now permeates virtually every area of banking – from algorithmic trading and portfolio management to automated risk assessment. Yet with the adoption of the EU AI Act (Regulation (EU) 2024/1689), the European Union has created a rulebook that fundamentally changes the rules of engagement for AI deployment – and hits German banks particularly hard.
Since 1 August 2024, the world's first comprehensive AI regulation has been in force. Its provisions unfold in stages through to 2027, with the decisive milestone falling on 2 August 2026: from that date, all high-risk AI systems must fully comply with regulatory requirements. For Capital Markets and Asset Management, this means a fundamental transformation, as numerous AI applications deployed in these areas fall into precisely this high-risk category.
The significance becomes clear from a glance at global competition: among the world's 30 largest banks, not a single German institution features. Whether the AI Act functions as a brake on innovation or as a framework for responsible innovation will co-determine the future viability of Germany as a financial centre.
The risk-based approach: Four tiers of increasing stringency
The AI Act follows a risk-based classification model that divides AI systems into four categories. The higher the identified risk, the stricter the regulatory requirements. For banks, this produces a differentiated landscape of compliance obligations that vary considerably by business line and use case.
| Risk tier | Banking examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, manipulative systems, real-time biometric surveillance | Complete ban since 2 February 2025 |
| High-risk | Credit scoring, AML screening, algorithmic trading decisions, risk assessment | Documentation, transparency, human oversight, risk management system, quality assurance |
| Limited risk | Chatbots, robo-advisors (initial consultation), AI-powered customer interaction | Transparency obligations: users must be informed that AI is being used |
| Minimal risk | Spam filters, text correction, internal process automation, marketing tools | No specific requirements under the AI Act |
Classification is by no means trivial. Where a system's classification is unclear, banks will tend to err on the side of caution and forgo AI deployment to avoid compliance risks. The consequence: a potential innovation bottleneck that pushes Europe's financial institutions further behind in global competition.
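To make the classification exercise tangible, the following minimal Python sketch shows how an institution might encode the four risk tiers and map internal use cases onto them. The tier assignments, the use-case names and the default-to-high fallback are illustrative assumptions for demonstration only – the actual classification of any given system requires a case-by-case legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned since 2 February 2025
    HIGH = "high"                  # full compliance required by 2 August 2026
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific AI Act requirements

# Illustrative mapping of banking use cases to risk tiers;
# each real system needs an individual legal classification.
USE_CASE_TIERS: dict[str, RiskTier] = {
    "credit_scoring": RiskTier.HIGH,
    "aml_screening": RiskTier.HIGH,
    "algorithmic_trading": RiskTier.HIGH,
    "robo_advisor_chat": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier; unknown systems default to HIGH,
    mirroring the cautious stance described above."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("credit_scoring"))     # RiskTier.HIGH
print(classify("new_unmapped_tool"))  # defaults to RiskTier.HIGH
```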
Capital Markets: Algorithmic trading under scrutiny
High-risk classification and its consequences
In Capital Markets, the industry has long relied on AI-powered systems: algorithmic trading, automated market analysis, high-frequency trading and predictive models for risk management are standard practice. The AI Act classifies AI systems used for creditworthiness assessment and risk evaluation as high-risk AI pursuant to Annex III of the regulation. Scoring systems for assessing creditworthiness or risk profiles are therefore subject to the most stringent requirements.
In concrete terms, this means that black-box models without traceable decision logic will scarcely be permissible in future. Automated credit decisions must be comprehensively justified and documented, and models that rely exclusively on algorithmic correlations will become harder to deploy.
Requirements for trading systems
For AI-powered trading systems, the AI Act imposes several concrete obligations. First, the regulation requires a comprehensive risk management system pursuant to Article 9 that must be continuously documented and maintained. Second, the training and validation data used must meet stringent quality standards – a challenge given the heterogeneous data landscapes of many institutions. Third, the regulation demands human oversight (Article 14) of all systems classified as high-risk. Fully autonomous trading decisions taken by AI without human control become problematic from a regulatory standpoint.
Additionally, the technical documentation obligations under Article 11 and Annex IV require a general system description together with detailed information on system architecture, design specifications, and monitoring and control arrangements. This documentation must be retained for ten years.
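As a rough illustration of what these obligations mean at system level, the following Python sketch pairs a technical-documentation record covering the Annex IV themes with a human-oversight gate for trade proposals. The field names, the TradeProposal type and the boolean approval flow are hypothetical assumptions, not structures prescribed by the regulation.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

RETENTION_YEARS = 10  # documentation must be retained for ten years

@dataclass
class TechnicalDocumentation:
    """Hypothetical record covering the Annex IV documentation themes."""
    system_name: str
    general_description: str
    architecture: str
    design_specifications: str
    monitoring_arrangements: str
    created: date = field(default_factory=date.today)

    def retention_until(self) -> date:
        # Approximation ignoring leap days: ten years from creation.
        return self.created + timedelta(days=365 * RETENTION_YEARS)

@dataclass
class TradeProposal:
    instrument: str
    side: str      # "buy" or "sell"
    quantity: int
    model_id: str  # links the decision back to the documented system

def execute_with_oversight(proposal: TradeProposal, human_approved: bool) -> str:
    """Human-oversight gate: no fully autonomous execution.

    A production system would route proposals into a supervised
    review queue; here a boolean stands in for the human decision.
    """
    if not human_approved:
        return f"HELD for review: {proposal.side} {proposal.instrument}"
    return f"EXECUTED: {proposal.quantity} x {proposal.instrument}"
```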
The interplay with existing regulation
Capital Markets divisions already operate in a densely regulated environment: MiFID II, MaRisk, BAIT and, since January 2025, DORA set tight guardrails. The AI Act adds a further layer to this already dense web. At the end of 2025, BaFin published guidance on ICT risks in AI deployment, with particular reference to DORA requirements. The supervisor treats AI not as a regulatory special case but consistently integrates it into existing examination frameworks.
A special rule applies to CRR credit institutions: for internally deployed AI credit scoring, BaFin – not the newly designated AI market surveillance authority – is the competent supervisor. AI systems must therefore simultaneously meet the regulatory requirements of financial supervision and the technical specifications of the AI Act.
Asset Management: Between innovation and regulatory pressure
AI as a strategic priority
The majority of asset managers in Germany and Luxembourg have already deployed initial AI use cases or are in the midst of development. Systems such as ChatGPT, Google Gemini or Microsoft Copilot have long since become part of everyday practice. Applications range from research and risk modelling to portfolio management and automated reporting – productive use cases can be observed across virtually all parts of the value chain.
An EY study on generative AI in Wealth and Asset Management shows: managers at firms with over two billion US dollars in assets under management rate the use cases of investment strategy development for alpha generation, financial advisory, and investment operations most highly. The AI Act, however, places considerable hurdles in the path of these ambitious plans.
The regulatory pitfalls
A central principle of the AI Act is: the closer an AI application operates to the customer, the stricter the regulation. For asset managers, this creates a clear dividing line. Client-facing applications such as algorithmic investment advice, automated portfolio composition, or AI-based risk assessments for investment products potentially fall under the high-risk category and would then be subject to the strictest requirements.
Applications further removed from the client, such as internal research, data preparation or back-office process automation, are less heavily affected. In the initial consultation phase, robo-advisors count as chatbot-like systems in the limited-risk category and face primarily transparency obligations.
The cloud problem: FISA Section 702
An additional dimension of complexity arises from the industry's cloud dependency. According to the KPMG Cloud Monitor 2025, 97 per cent of financial enterprises use the cloud services of US hyperscalers for AI operations. This carries a specific risk: Section 702 of the US Foreign Intelligence Surveillance Act (FISA) permits the targeted surveillance of non-US persons outside the United States – including access to information stored with US cloud providers. The hyperscalers are subject to secret cooperation obligations and may not inform those affected. For asset managers working with sensitive client data and proprietary investment strategies, this represents a considerable governance challenge.
Consolidation pressure
The PwC Global Asset and Wealth Management Survey forecasts that by 2027, around 16 per cent of all AWM firms will either be acquired or exit the market – double the historical attrition rate. The AI Act intensifies this pressure: smaller firms unable to bear the regulatory compliance costs face a structural disadvantage against large institutions that possess the resources for comprehensive AI governance.
Third-party management: New obligations along the supply chain
The AI Act changes not only the internal handling of AI but directly impacts outsourcing and collaboration with external service providers. Financial institutions cannot delegate responsibility for compliant AI deployment to third-party providers – they must actively ensure that outsourced AI systems are operated transparently, securely and in a compliant manner.
Specifically, banks must integrate AI risks into their Third Party Risk Management (TPRM) and assess them regularly. Contracts with service providers require new clauses on documentation, audit and access rights, and reporting obligations. The EBA has also further tightened requirements on third-party collaboration through new guidelines.
Regardless of whether AI systems are developed in-house or procured from providers such as OpenAI, Microsoft or Google: a thorough analysis of each use case with regard to the applicable risk class and the derivation of regulatory requirements is indispensable. Violations risk not only reputational damage but also substantial fines.
The new supervisory architecture
In Germany, the Federal Network Agency (Bundesnetzagentur, BNetzA) assumes the role of the central AI market surveillance authority. It monitors compliance with the AI Act across sectors. The German Accreditation Body (DAkkS) serves as the notifying authority. Sector-specific supervision continues to be exercised by BaFin, which will further specify its expectations on AI governance, risk management and internal controls in the financial sector.
At European level, the EU Commission has established the European AI Office as a central competence centre. It supports companies in implementing the AI Act, has developed a Code of Practice for General-Purpose Models, and cooperates with model providers such as Mistral AI and the German project Open Hippo. The AI Office's guidelines on the AI definition and prohibited AI applications from February 2025, however, are not legally binding and serve solely as guidance.
The German Banking Association (Bankenverband) formulated a central demand in its position paper of July 2025: the implementation of the regulation must be practicable, legally certain and uniform across the EU. In particular, consistent interplay with existing prudential requirements must be ensured and double regulation avoided.
AI regulatory sandboxes: A glimmer of hope
The AI Act provides for the establishment of AI regulatory sandboxes in all EU Member States. These offer banks the opportunity to test and validate new AI applications in a controlled environment before they enter regular operation.
For Capital Markets divisions and asset managers, this could be a decisive mechanism for developing innovative solutions whilst simultaneously identifying potential risks. The sandboxes enable a dialogue between institutions and supervisory authorities that can be valuable for both sides: banks gain regulatory clarity, authorities gain insight into technological practice.
Recommendations: What German banks must do now
The clock is ticking. With the 2 August 2026 deadline, German banks have little time left to bring their AI landscape into line with regulatory requirements. The following six action areas are of immediate priority for Capital Markets and Asset Management divisions:
First, all deployed AI systems must be identified, documented and assigned to the AI Act's risk categories. This explicitly includes AI components in third-party solutions and cloud services. Automated detection procedures via IT monitoring can support this process; a minimal inventory sketch follows below.
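The sketch assumes a hypothetical internal registry format – the fields, the example entries and the documentation-gap check are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Illustrative inventory entry; all fields are assumptions."""
    name: str
    owner: str            # accountable business unit
    vendor: str | None    # None for systems developed in-house
    risk_tier: str        # e.g. "high", "limited", "minimal"
    documented: bool      # AI Act documentation complete?

inventory = [
    AISystemRecord("credit-scoring-v3", "Retail Risk", None, "high", True),
    AISystemRecord("copilot-rollout", "IT", "Microsoft", "minimal", False),
]

# Flag third-party systems still missing AI Act documentation.
for record in inventory:
    if record.vendor and not record.documented:
        print(f"Documentation gap: {record.name} ({record.vendor})")
```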
Second, a central, independent governance function must be installed as a single point of accountability for all AI-related topics. AI governance is not an IT task – it belongs on the board agenda. Integration into existing structures such as the internal control system (ICS) or MaRisk ensures consistency.
Third, credit scoring, algorithmic trading systems, AML screening and automated risk assessments must fully meet the requirements for documentation, transparency, data quality and human oversight by August 2026.
Fourth, AI risks must be systematically integrated into existing TPRM. Contracts with service providers and cloud providers must be supplemented with AI Act-compliant clauses. Particular attention must be paid to cloud compliance taking FISA Section 702 into account.
Fifth, the AI competence obligation under Article 4 of the AI Act has been in force since February 2025. Institutions must establish systematic training programmes for all employees – from trading desks to portfolio management to compliance functions.
Sixth, institutions should seek early dialogue with the Bundesnetzagentur and BaFin and test innovative AI use cases in the regulatory sandboxes. This creates compliance certainty and positions the bank as a responsible driver of innovation.
The EU AI Act is not a temporary regulatory burden but a paradigm shift. Banks that act proactively and understand AI governance as a strategic success factor will not only create regulatory certainty – they also lay the foundation for sustainable, responsible AI deployment and thus long-term competitiveness. Those who delay implementation risk not only substantial sanctions but also losing touch with an industry transforming at a rapid pace.
Timeline: EU AI Act implementation roadmap
Phased entry into force and action required for German banks in Capital Markets and Asset Management.