29. April 2026 By Dr. Alisa Küper and Dr. Maximilian Wächter
AI Governance in Banking: Why Less Process Can Mean Greater Security
In our work with banks and financial institutions, we observe a pattern of over-regulation of AI applications; the resulting diffusion of responsibility and uncertainty amongst staff often achieves exactly the opposite of what AI governance was intended to achieve.
But let’s start from the beginning: the desire to deploy AI is there. So is the pressure to deliver quickly: the market demands efficiency, automation and new customer experiences driven by generative AI. Furthermore, an acute skills shortage is intensifying the pressure to act – particularly in IT, where experts are desperately sought after and positions often remain unfilled for months. AI is therefore no longer an option, but an operational necessity: where qualified talent is lacking, intelligent automation must fill precisely this gap.
And then comes governance. In many projects, we see how the impulse to fully comply with the EU AI Act, DORA, MaRisk and BAIT ends in well-intentioned but paralysing over-regulation: Internal requirements are formulated so restrictively that they confuse even experienced development teams. Processes are designed to be overly complex, promising pilot projects are massively slowed down or discontinued altogether due to organisational overhead.
The pattern is understandable but unjustified, because governance must not and should not be an obstacle. Set up correctly, it does not cost speed – it provides the security needed to pick up speed in the first place.
Why new technology demands new governance
Banks traditionally stand for security and risk minimisation. When probabilistic – and thus inherently imprecise – technologies encounter highly sensitive financial data, the departments responsible for compliance are rightly on high alert. The result, however, is often an organisational jumble of responsibilities: data protection, staff councils, corporate strategy and risk management all demand their place at the table when it comes to the use of AI.
In principle, this is correct; in practice, however, this diversity of voices often leads to a diffusion of responsibility. When all employees or those involved in a project voice concerns, but no one takes operational responsibility for the risks, even harmless applications fall by the wayside. An internal chatbot designed to summarise meeting minutes goes through the same approval loops as an automated credit scoring system for retail customers – even though the risk profiles are fundamentally different.
This internal bottleneck inadvertently provokes precisely the risk it is meant to prevent: departments, frustrated, migrate to uncontrolled shadow IT. Teams then use external AI tools without any safeguards because official channels seem too long and too opaque. The result is not fewer, but more uncontrolled risks. An IDC survey also confirms that a lack of governance is one of the biggest barriers to AI adoption: 53.8% of the organisations surveyed cited a lack of governance solutions as the top barrier – only costs ranked just ahead of it (Jyoti, 2023).
The blind spot: Why AI Act compliance does not guarantee quality
To understand why excessive governance arises, it is worth taking a closer look at the regulatory landscape – for this is where an often-overlooked misunderstanding lies.
The EU AI Act is primarily a product safety law designed to protect citizens’ fundamental rights: it addresses discrimination, non-transparent decision-making and disproportionate surveillance. DORA, on the other hand, focuses on operational resilience and requires AI systems to be integrated into existing risk management frameworks as ICT assets (BaFin, 2025). Both sets of regulations are important, but neither protects the bank from business risks arising from faulty AI applications.
An AI system can be fully AI Act-compliant and DORA-integrated – operated transparently, non-discriminatorily and resiliently – and yet still provide incorrect answers, be vulnerable to prompt injection attacks, or simply operate inefficiently. Robustness, accuracy and IT security are crucial factors for banks that go far beyond the mere text of the law. The BSI guide on evasion attacks on LLMs demonstrates just how concrete and diverse the technical attack scenarios are: it describes prompt injection, jailbreaks and adversarial attacks as real threats for which regulatory compliance offers no solution (BSI, 2026).
This has two significant implications: Firstly, the compliance department cannot manage AI governance on its own. A framework is needed that combines regulatory requirements, technical quality and business management. Secondly – and this is the key insight – this framework does not have to be complex. Anyone attempting to mould every requirement from the AI Act, DORA and internal guidelines into a monolithic governance process creates precisely the kind of overhead that paralyses projects.
Rethinking governance: guardrails instead of barriers
BaFin itself provides the crucial guidance in its guidance on ICT risks in AI: The principle of proportionality (Art. 4 DORA) as an operational principle. AI applications integrated into critical functions require more extensive security and control measures than a self-service assistant operating under human supervision – this differentiation is not only permitted, it is intended by the regulator (BaFin, 2025). In concrete terms, this means: banks do not need parallel AI governance – what is required is a streamlined extension of existing structures to include a few, clearly defined roles and processes. DORA already requires a management body with ultimate responsibility for ICT risks, an ICT risk management function and control functions (Art. 5 DORA). The key lies in embedding AI governance within these existing structures.
Based on our work, three measures have proven particularly effective:
1. A dedicated AI coordinator as a central point of contact.
This role – whether as an AI Officer, AI coordinator or as part of the existing risk management team – acts as a hub between business units, IT, compliance and legal. Instead of each department escalating concerns separately, there is a single body responsible for governance artefacts, assessing use cases, classifying risks and coordinating approvals. This reduces diffusion of responsibility and speeds up decision-making.
2. Risk-based classification with clear fast lanes.
A standardised preliminary check (triage) determines right at the start of the ideation phase whether a use case should be classified as high-risk or can be operated as low-risk, as most internal efficiency cases do not fall under the strictest provisions of the AI Act. Here, the bank or financial services provider can establish fast-track approval processes – whilst critical systems undergo the necessary in-depth scrutiny.
3. Technical quality assurance as an integral component.
Whilst the AI Act sets out the regulatory framework, technical standards such as the NIST AI Risk Management Framework or the OWASP Top 10 for LLMs ensure the actual quality and security of the models. Once the AI infrastructure has been accepted as DORA-compliant, applications built on top of it do not have to start from scratch every time – standardised modules for logging, monitoring and data protection significantly reduce the effort required per project.
From compliance check to competitive advantage
Those who view governance as a quality standard build trustworthy AI systems. In an industry where trust is the most important currency, this becomes a genuine differentiator.
A lean, risk-based governance approach drastically shortens time-to-market and may replace lengthy approval processes with defined scope for action. The result is scalable solutions that do not remain at the PoC stage, but generate real business value – whether through more efficient back-office processes, personalised customer engagement or the relief of overburdened specialist teams.
We support you on this journey
It is worth noting: the balancing act between the pressure to innovate and regulatory requirements is challenging, but achievable. adesso combines a deep understanding of the banking sector with expertise in artificial intelligence. We help you build technically excellent AI quickly and securely. From the initial risk assessment through agile compliance processes to the [WM (1] [AK2] technical implementation of DORA-compliant AI platforms, we provide you with comprehensive support. Fully aware that it is not the technology alone that determines success, but rather the acceptance of the new processes within the organisation, we ensure that your AI initiatives do not end up at the compliance desk, but are implemented in the business.
Generative AI in Banking
A driver of innovation for sustainable business models
For the financial sector, AI is no longer a distant prospect, but a reality. As a key technological driver, it is becoming increasingly important for competitive value creation in banking. It is evolving into an integral part of a sustainable IT strategy.