The wave has arrived. Four out of five financial institutions are working with artificial intelligence in some form. Behind this seemingly conclusive adoption headline, however, the April 2026 study published by the Cambridge Centre for Alternative Finance (CCAF) together with the Bank for International Settlements (BIS), the International Monetary Fund (IMF), the World Economic Forum (WEF), the Inter-American Development Bank (IDB), CGAP and the Arab Monetary Fund (AMF) presents an unusually sober diagnosis. It analyses 628 responses from financial institutions, AI vendors and supervisory authorities across 151 jurisdictions – and describes an industry caught between widespread piloting and genuine strategic delivery.

At a glance

Study: The 2026 Global AI in Financial Services Report (CCAF, 28 April 2026, 140 pages)

Sample: 628 respondents across 151 jurisdictions – 203 fintechs, 149 traditional financial institutions, 146 AI vendors, 130 regulators

Partners: BIS, IMF, WEF, IDB, CGAP, AMF, Financial Innovation for Impact (Fii), UK FCDO

Headline finding: 81 per cent AI adoption, but only 14 per cent strategic transformation – a documented execution gap

The adoption number is misleading

The headline figure of 81 per cent suggests a transition is over. In reality, only 40 per cent of surveyed industry respondents place themselves in the “Scaling” or “Transforming” maturity stages. Just 14 per cent regard AI as “transformational” for their strategy and competitive position. Among supervisors, the picture is more striking still: 48 per cent remain in the “Exploring” phase and 33 per cent are “Piloting” – only 18 per cent have reached “Scaling”. Adoption is not the same as transformation. Today, it is a stocktake, not a competitive advantage.

The methodological discipline of the Cambridge team led by Bryan Zhang and Kieran Garvey is notable. They distinguish between experimentation, selective scaling and genuine business model response. Buying ChatGPT licences across the organisation is adoption. Substantively rethinking credit processes, market risk models or advisory logic is transformation. The gap between the two is unlikely to close as long as investment patterns and capability building do not align.

“The private sector is deploying advanced AI systems faster than supervisory frameworks and technical capacity can keep pace.” (CCAF Foreword, page 4)

Where value creation burns away

Productivity gains are visible in the data: 79 per cent report positive effects in technology, data and product; 75 per cent in operations and back office; 69 per cent in client-facing roles. Yet only 40 per cent of respondents see a positive effect on profitability. 43 per cent see none, and 17 per cent “do not know”. That is the sobering split – activity without a quantifiable lever.

The real problem is not the negative finding. It is the measurement gap. Even among firms that classify themselves as “Scaling”, 60 per cent report difficulty quantifying the value of their AI investments. For large institutions with more than 5,000 employees, the figure rises to 76 per cent. Three out of four major banks cannot soundly quantify what their AI investments actually deliver – or whether the implementation justifies the effort.

The correlation between investment levels and profitability is statistically clear: firms spending more than USD 100,000 per year on AI report profitability gains in 62 per cent of cases, against 39 per cent for lower spenders. But the report itself flags an important caveat: “spending may be a symptom rather than a cause of higher AI maturity”. Higher spending can be as much an indicator as a driver of maturity. Pumping up the budget without the corresponding capability-building programme does not buy a place on the profitability curve.

The fintech lead is measurable – and unsettling

One of the study’s most robust findings is the gap between fintechs and incumbent financial institutions. 47 per cent of fintechs report advanced AI maturity (Scaling or Transforming) – against only 30 per cent of incumbents. In “Transforming”: 19 to 6 per cent. In profitability: 56 to 34 per cent. In the front office, the productivity advantage is 76 to 59 per cent. Fintechs do not have a secret AI sauce. They have less legacy. That alone is enough for a structural advantage.

For German and European major banks, this means the competitive contest is shifting from a comparison of balance sheets to a comparison of adoption velocity. 36 per cent of industry cite access to AI talent as a key value driver, against only 6 per cent of vendors. The industry is hunting for specialists; the vendor market is selling software. The gap is widening.

Agentic AI as the next cyber vector

Behind the adoption and value questions sits a risk discussion drawn unusually sharply by the report. 52 per cent of industry respondents already report active deployment of agentic AI – that is, systems that not only respond but plan and act autonomously. Most of this happens in software engineering: 42 per cent of respondents have AI-supported code generation fully deployed, with 33 per cent in development. That makes software engineering the most mature AI application across the industry.

The Cambridge authors draw a cyber inference that ought to reach the boardroom. Manual code reviews no longer scale once AI lifts code-generation volume and velocity by orders of magnitude. The report references recent Anthropic disclosures pointing to “next generations of AI models … incredibly capable of exploiting software vulnerabilities” (page 83). The next model generation could be code generator and vulnerability scanner at the same time. 50 per cent of industry and 57 per cent of supervisors cite adversarial AI cyber threats as a top risk. AI vendors themselves, at 35 per cent, are noticeably more relaxed – a perception gap that alarms the supervisors more than the industry.

Then there is what the study calls “loss of human oversight and collective forgetting”: the loss of institutional knowledge as automation displaces manual procedures. 55 per cent of industry classify it as critical – higher than the supervisors’ 42 per cent. Anyone whose staff cannot run the manual fallback when AI systems fail in a crisis is sitting on an operational problem that the pre-mortem did not model.

What “Collective Forgetting” Means

Definition: the loss of institutional memory and manual operating capability as automation absorbs day-to-day execution.

Risk mechanic: in a crisis, when AI systems fail or inputs become unreliable, staff can no longer execute the task by hand.

Industry weight: 55 per cent (the third-highest risk after data privacy and hallucinations). Supervisors: 42 per cent.

Supervisors playing catch-up

The study is also the most global stocktake yet of what financial supervisors are actually doing – and what they are not. 130 supervisory authorities across 151 jurisdictions were surveyed. The EU AI Act, with 42 per cent of references, is the world’s most cited framework, followed by sector-specific guidance (Financial Stability Board, IOSCO, Basel) at 41 per cent and ISO/IEC AI standards (such as ISO 42001) at 27 per cent. Europe leads, with 59 per cent of authorities reporting established frameworks – while in Latin America, 58 per cent of authorities report no national framework at all.

Within supervisory bodies, the dominant use case is market surveillance and misconduct detection (31 per cent in pilot or deployment), followed by AML/CFT supervision (27 per cent) and consumer protection (25 per cent). However, only 24 per cent collect any data at all on the AI adoption of their supervised entities; 43 per cent do not plan to do so within the next two years. A supervisor who does not measurably know what its risk subject is doing has structurally delegated supervision.

The pain point comparison brings the gap into sharper relief: 48 per cent of supervisors cite a lack of AI training and capacity building as the top barrier – against only 17 per cent of industry. 45 per cent complain about insufficient technical infrastructure, against 28 per cent of industry. The BIS Innovation Hub, working with the Hong Kong Monetary Authority and the UK Financial Conduct Authority, has begun developing an explainability toolkit for supervisors under the name Project Noor. Welcome – and late.

“The investment required to build supervisory capacity in tools, training and data collection has not yet been made at the scale the ambition requires.” (CCAF Report, Chapter 7, page 120)

The explainability divide

The regulatory stress test of the next few years can be read off a single number. 79 per cent of surveyed supervisors regard explainability as critical or important. Only 50 per cent of industry has adopted explainable AI methods at all. Two-thirds do not monitor their AI systems for discrimination, exclusion or systemic bias. Only 37 per cent classify model opacity as an operational risk.

In credit underwriting and insurance pricing – both classified as high-risk AI under the EU AI Act – the discrepancy is not just a compliance topic. It is a business model topic. An institution that declines without being able to explain why does not have an AI problem. It has a licensing problem.

What 2030 will decide – and what it will not

The outlook deserves separate attention. 81 per cent of industry expect agentic AI to be substantively established by 2030 – the largest expected growth trajectory across all technology categories. The artificial general intelligence (AGI) discussion gets more interesting: 44 per cent of all respondents expect AGI by 2030 – but with a sharp stakeholder gradient behind it. AI vendors expect AGI by 2030 at 51 per cent, industry at 50 per cent, supervisors only at 28 per cent. The stakeholders who put the AGI probability highest also have a commercial stake in that expectation. The study explicitly calls the phenomenon a “paradox”: today, AGI ranks 21 out of 22 prioritised risks – only 9 per cent see it as a top concern.

Two trends buried in the concluding thoughts deserve more attention than they have received in the industry conversation so far. First, the intersection between verified identity – the foundation of the KYC-regulated financial industry – and the pseudonymous architecture of the internet. AI makes both sides more capable: synthetic identities and automated identity verification. The Cambridge authors put it bluntly: “decisions taken by financial regulators and institutions are likely to exert disproportionate influence on the digital identity standards that emerge for the wider economy” (page 129). The question of who is “real” in a transaction is becoming an infrastructure question for the next decade, well beyond financial services.

Second, the trend towards “world models” – systems that build causal representations of their environment rather than primarily detecting patterns. Plausible applications include risk management, scenario generation and portfolio modelling. Anyone defining their AI strategy today as “generative AI plus agents” is planning for the previous wave but one.

Concentration as an underrated systemic risk

One detail that has so far slipped under the radar: 88 per cent of industry users of Google Cloud also deploy Google’s foundation models. For Microsoft the figure is 35 per cent, for Amazon Web Services 23 per cent. Vertical integration within the same tech stacks creates concentration points where outages or attacks transmit shocks system-wide. OpenAI is the dominant foundation model provider for industry at 76 per cent, followed by Google at 57 per cent and Anthropic at 35 per cent. DeepSeek is used by 15 per cent – a figure European supervisors should be tracking from an operational resilience perspective. Only 18 per cent of surveyed supervisors collect data on AI third-party dependencies at all.

For DACH banks, this points to a concrete task: third-party risk frameworks built on the Digital Operational Resilience Act (DORA) and ICT outsourcing need an AI-specific layer. Cloud foundation model concentration is not just a market topic. It is a stress-testing topic.

Recommendations: six levers for European banks

The Cambridge findings translate into six priority action areas for European financial institutions. They are not new in wording. They are new in the scale of evidence now behind them.

1. Capability building, not procurement

Treat AI strategy as an organisational maturity task, not a purchasing exercise. Higher investment correlates with better outcomes – but only when talent, data quality and governance grow alongside it. 62 per cent of firms spending more than USD 100,000 per year on AI report profitability gains. The Cambridge caveat “spending may be a symptom rather than a cause” demands that budgets be paired with organisational maturity, not used as a substitute for it.

2. Anchor agentic AI governance now

Build risk frameworks for agent systems before software engineering use cases reach production. Code provenance standards, automated security reviews and anti-tampering controls for AI-generated patches belong in the risk appetite. 42 per cent of industry already has code generation fully deployed – manual reviews no longer scale once volume and velocity grow by orders of magnitude.
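One way to make such a control concrete is a pre-merge gate that refuses AI-generated changes lacking provenance metadata. The sketch below is illustrative only: it assumes a hypothetical convention of Git-style commit trailers (`AI-Generated`, `Reviewed-by`), not any existing standard, and the trailer names are invented for the example.

```python
def check_commit_provenance(commit_message: str) -> list[str]:
    """Return policy violations for one commit message.

    Assumed (hypothetical) convention: AI-generated commits carry an
    'AI-Generated: <tool>' trailer and must also name a human reviewer
    in a 'Reviewed-by:' trailer before they may be merged.
    """
    trailers = {}
    for line in commit_message.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            trailers[key.strip().lower()] = value.strip()

    violations = []
    if "ai-generated" in trailers and "reviewed-by" not in trailers:
        violations.append("AI-generated change lacks a human 'Reviewed-by' trailer")
    if "ai-generated" in trailers and not trailers["ai-generated"]:
        violations.append("'AI-Generated' trailer does not name the generating tool")
    return violations


if __name__ == "__main__":
    msg = "Fix rate-limit bug\n\nAI-Generated: some-codegen-tool\n"
    for v in check_commit_provenance(msg):
        print(v)
```

A gate like this runs in CI before merge; the real control surface (tamper-proof trailers, signed commits, automated security scans) is larger, but the principle is the same: provenance becomes machine-checkable, so review capacity can be routed where the risk is.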

3. Operationalise explainability

Embed explainable AI methods in audit trails, beyond compliance checkboxes. Project Noor (BIS / HKMA / FCA UK) and comparable supervisory toolkits will structurally raise the burden of proof. Anyone unable to deliver model-independent attribution today will have a conformity problem in 2027 – especially under EU AI Act high-risk categories.

4. Build the reskilling pipeline

25 per cent of industry expect significant reskilling, 24 per cent expect a net reduction in roles by 2030. The task will be decided at the workforce level, not the technology level. Capability maps for the five most AI-affected functions need to be drafted today, not in 2028. Especially in commercial and wholesale banking, where 44 per cent of respondents expect a net increase in roles, the targeted build-up of hybrid profiles – banking depth plus AI competence – is worthwhile.

5. Fold concentration risk into the DORA framework

Classify cloud foundation model concentration as a third-party risk factor. Pursue multi-vendor strategies for critical use cases, and extend exit plans under DORA Article 28 to the AI stack. The fact that 88 per cent of Google Cloud users also deploy Google foundation models is more than vendor lock-in – it is a shock transmission path that only 18 per cent of supervisors are even measuring today.
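The transmission-path claim can be made tangible with a standard concentration measure. The sketch below applies a Herfindahl-Hirschman index (HHI) to the foundation-model adoption rates the report cites (OpenAI 76, Google 57, Anthropic 35, DeepSeek 15 per cent). Because institutions multi-home across vendors, treating adoption rates as market shares is a simplifying assumption – the result is a rough concentration indicator, not a market statistic.

```python
def herfindahl_index(adoption_rates: dict[str, float]) -> float:
    """Herfindahl-Hirschman index on the conventional 0-10,000 scale.

    Assumption: adoption percentages are normalised into pseudo market
    shares. Institutions multi-home across vendors, so this is a rough
    concentration indicator, not a true market-share calculation.
    """
    total = sum(adoption_rates.values())
    shares = [v / total for v in adoption_rates.values()]
    return 10_000 * sum(s * s for s in shares)


# Foundation-model adoption rates among industry respondents (per cent),
# as cited in the report.
vendors = {"OpenAI": 76, "Google": 57, "Anthropic": 35, "DeepSeek": 15}
hhi = herfindahl_index(vendors)
# Antitrust practice conventionally treats HHI above 2,500 as highly concentrated.
print(f"HHI ~ {hhi:.0f}")
```

Even under this crude normalisation, the index lands well above the conventional 2,500 threshold for a highly concentrated market – a number a DORA stress-testing exercise can work with.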

6. Upgrade the measurement apparatus

Without sound value measurement, the investment case becomes internally untenable. Quantify productivity, quality and risk offset per AI initiative – not just “AI Spend YoY”. Three out of four major banks cannot soundly measure the value of their AI investments. Anyone who still cannot do so in 24 months will lose the internal argument first, the external argument second.
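A minimal shape for such a measurement apparatus, assuming each initiative tracks annualised figures for productivity savings, quality effects, risk offset and running cost (the field names and demo figures here are illustrative, not from the report):

```python
from dataclasses import dataclass


@dataclass
class AIInitiative:
    """One AI initiative with annualised value components (illustrative)."""
    name: str
    productivity_savings: float  # e.g. hours saved x loaded hourly cost
    quality_uplift: float        # e.g. avoided rework or error cost
    risk_offset: float           # e.g. expected loss avoided (fraud, ops risk)
    run_cost: float              # licences, inference, integration, oversight

    def net_value(self) -> float:
        return (self.productivity_savings + self.quality_uplift
                + self.risk_offset - self.run_cost)


def portfolio_net_value(initiatives: list[AIInitiative]) -> float:
    """Aggregate net value per year – the number 'AI Spend YoY' alone cannot give."""
    return sum(i.net_value() for i in initiatives)


demo = [
    AIInitiative("code assistant", 400_000, 80_000, 0, 250_000),
    AIInitiative("AML triage", 150_000, 0, 300_000, 200_000),
]
print(f"Portfolio net value: {portfolio_net_value(demo):,.0f}")
```

The point is not the arithmetic; it is that each component forces a measurement owner and a data source per initiative, which is precisely what the 76 per cent of large institutions currently lack.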

Timeline: the next 24 months

28 April 2026 – CCAF report published
The Cambridge Centre for Alternative Finance publishes the 2026 Global AI in Financial Services Report with BIS, IMF, WEF, IDB, CGAP and AMF.

May 2026 – First supervisory data collection initiatives
Supervisors begin to collect data on AI third-party dependencies in response to the documented concentration finding.

2 August 2026 – EU AI Act high-risk obligations applicable
Full applicability of high-risk provisions for creditworthiness assessment and insurance pricing. Explainability becomes a licensing topic.

Q4 2026 – Project Noor toolkits
First workshop toolkits from the BIS Innovation Hub, HKMA and FCA UK for supervisors; standardised explainability diagnostics in pilot.

2027 – DORA AI extension
Integration of AI-specific third-party frameworks into DORA supervisory practice; ISO/IEC 42001 as the de facto standard for AI management systems.

2030 – Agentic AI roll-out and the AGI question
81 per cent of industry expect substantive agentic AI roll-out; 44 per cent expect AGI – supervisors at 28 per cent are markedly more cautious.

Bottom line

The Cambridge study is the soberest global stocktake of AI in financial services to date. Its strength does not lie in its headline but in its discipline: it distinguishes between adoption and transformation, between investment and maturity, between vendor expectations and supervisory reality. For European banks the central task is not to deploy more AI. It is to absorb AI organisationally – with talent, with governance, with measurable outcomes, and with the willingness to anticipate regulatory requirements rather than endure them.

Sources

This article draws on the primary sources listed below. Given the importance of the study and the use of direct quotes, a sources table is provided here as an exception.

1. Cambridge Centre for Alternative Finance (CCAF), Judge Business School, University of Cambridge: “The 2026 Global AI in Financial Services Report: Adoption, impact and risks” (April 2026, 140 pages) – jbs.cam.ac.uk
2. CCAF / BIS / IMF / WEF / IDB / CGAP / AMF / FCDO: Global survey of 628 respondents (203 fintechs, 149 traditional FIs, 146 AI vendors, 130 regulators) across 151 jurisdictions – PDF (8.2 MB)
3. BIS Innovation Hub / HKMA / FCA UK: “Project Noor – Explainable AI Toolkit for Financial Supervisors” (Partner Perspective in the CCAF Report, p. 92) – bis.org/bisih
4. Anthropic: “Mythos” disclosures (referenced in the CCAF Report, p. 83) on the capability of next-generation models to exploit software vulnerabilities – anthropic.com/research
5. European Union: Artificial Intelligence Act (Regulation (EU) 2024/1689); applicability of high-risk provisions from 2 August 2026 – eur-lex.europa.eu
6. European Union: Digital Operational Resilience Act (DORA, Regulation (EU) 2022/2554); ICT third-party risk framework – eur-lex.europa.eu
7. ISO/IEC: ISO/IEC 42001:2023 – AI Management System Standard – iso.org/42001
8. World Bank Group: Report to G20 South Africa Presidency, “AI Adoption among Financial Authorities” (summarised in the CCAF Report, pp. 117–119) – worldbank.org
9. CGAP (Consultative Group to Assist the Poor): “Powering AI with Inclusive Data: A Roadmap for Financial Inclusion” (Working Paper, 2026; Partner Perspective in the CCAF Report, p. 76) – cgap.org
Christian Schablitzki

Strategy & Management Consultant · Agentic AI specialist for financial institutions

More than 20 years in investment banking and derivatives trading, followed by over 10 years advising financial institutions. Currently Partner at Infosys Consulting in Germany. Certified in Google AI, Generative AI Leader (Google Cloud) and IBM RAG and Agentic AI.
