EU AI Act compliance for banks — obligations, decoded.
The EU AI Act doesn't single out banks, but it lands hardest on them. Credit-scoring is named explicitly as a high-risk use. So is biometric customer authentication. Add fraud-detection, AML transaction-screening, and AI-driven customer service, and most banks have a dozen or more in-scope systems. Here's the working map.
This guide cuts the EU AI Act down to the parts a Tier-1 European bank actually has to act on, with timelines and operational obligations. It draws on Sia engagements supporting both EU-headquartered and US-headquartered banks navigating AI Act readiness.
The Act in one paragraph
The EU AI Act establishes risk-tiered rules for AI systems sold or used in the EU. Four tiers: prohibited (banned outright), high-risk (heavy obligations: risk management, data governance, documentation, human oversight, accuracy & robustness, post-market monitoring), limited-risk (transparency obligations: tell users they're talking to AI), and minimal-risk (no obligations). Most regulated bank AI lands in high-risk.
Where banks land in the risk tiers
Prohibited (Article 5)
- Social-scoring systems that result in detrimental treatment in unrelated contexts. Few banks operate these, but generic "customer health scores" used to deny services across product lines should be reviewed.
- Predicting criminal offences based purely on profiling. Watch how AML systems are framed.
High-risk (Annex III)
Annex III explicitly names two banking use cases:
- Annex III §5(b) — "AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score." This captures most retail credit-decisioning models.
- Annex III §5(c) — "AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance." Relevant for bancassurance arms.
Plus indirect captures: AI-based remote biometric authentication for customer onboarding, AI used in employment decisions (recruitment / promotion), AI in essential services. Most banks have AI hitting at least one of these.
Limited-risk (Article 50)
Customer-facing chatbots, AI-generated content, deepfake disclosures. Obligation: clear disclosure that users are interacting with an AI system. Usually the easiest tier to comply with: a UI or consent change. A first-pass triage across all four tiers is sketched below.
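To make the triage concrete, here is a minimal first-pass mapping sketch. The use-case keys and default tiers are illustrative assumptions, not legal determinations; every system still needs a documented classification (see the scoping steps further down).

```python
# Illustrative first-pass triage only, not legal advice. Use-case keys and
# tier assignments are assumptions for the sketch; each system still needs
# a documented classification rationale.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5
    HIGH = "high-risk"          # Annex III
    LIMITED = "limited-risk"    # Article 50 transparency duties
    MINIMAL = "minimal-risk"

LIKELY_TIER = {
    "retail_credit_scoring": RiskTier.HIGH,          # Annex III 5(b)
    "life_health_insurance_pricing": RiskTier.HIGH,  # Annex III 5(c)
    "biometric_onboarding": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "internal_document_search": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Default to high-risk when unsure: cheaper to declassify later than to miss."""
    return LIKELY_TIER.get(use_case, RiskTier.HIGH)
```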
What "high-risk" obligations actually mean
For each high-risk system, a bank must document the following (a minimal tracking sketch follows the list):
- Risk management system (Art. 9) — continuous, iterative, documented; covering known and reasonably foreseeable risks across the lifecycle.
- Data and data governance (Art. 10) — training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Includes bias detection and correction.
- Technical documentation (Art. 11, Annex IV) — comprehensive: system description, design choices, training methodology, performance metrics, known limitations, post-market monitoring plan.
- Record-keeping (Art. 12) — automatic logging of events over the system's lifetime; providers must retain the logs for a period appropriate to the system's purpose, at least six months (Art. 19).
- Transparency & user information (Art. 13) — instructions for use covering capabilities, limitations, expected accuracy, and human-oversight measures.
- Human oversight (Art. 14) — designed-in measures that allow a natural person to understand, monitor, intervene in, or override the AI system.
- Accuracy, robustness, cybersecurity (Art. 15) — declared accuracy levels, resilience to errors, resilience to adversarial attacks.
- Post-market monitoring (Art. 72) — active and systematic collection of data on the system's performance, with a plan to use it.
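A minimal sketch of how these eight obligations can be tracked per system, assuming a simple three-state coverage model; the structure and field names are assumptions, not an Annex IV template.

```python
# Minimal per-system obligations tracker. Three coverage states assumed;
# this is a bookkeeping sketch, not a compliance determination.
from dataclasses import dataclass, field

HIGH_RISK_OBLIGATIONS = {
    "Art. 9":  "risk management system",
    "Art. 10": "data and data governance",
    "Art. 11": "technical documentation (Annex IV)",
    "Art. 12": "record-keeping / event logging",
    "Art. 13": "transparency and instructions for use",
    "Art. 14": "human oversight",
    "Art. 15": "accuracy, robustness, cybersecurity",
    "Art. 72": "post-market monitoring",
}

@dataclass
class SystemReadiness:
    system_id: str
    # article -> "full" | "partial" | "missing"
    coverage: dict[str, str] = field(
        default_factory=lambda: {a: "missing" for a in HIGH_RISK_OBLIGATIONS}
    )

    def gaps(self) -> list[str]:
        """Articles that still need work before a documentation review."""
        return [a for a, state in self.coverage.items() if state != "full"]
```

The useful part is the default: every obligation starts as a gap until evidence exists.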
A typical credit-scoring model needs 80–120 distinct artifacts to pass Annex IV documentation review. The Sia engagement average is 6 weeks per model to get to a defensible package.
Timeline you should care about
- February 2025 — Prohibitions live. AI literacy obligations apply.
- August 2025 — General-purpose AI model (GPAI) obligations live for providers; codes of practice published.
- August 2026 — Most high-risk (Annex III) obligations live for systems placed on the market or put into service from this date.
- August 2027 — High-risk obligations live for AI embedded in products regulated under Annex I; Annex III systems already on the market before August 2026 are captured only if their design is significantly changed afterwards.
If your AI inventory has 30 high-risk systems and you start prep in Q4 2025, you have roughly 44 weeks until the August 2026 deadline. At ~6 weeks per model, a single workstream clears only about seven systems in that window; 30 systems mean running four to five workstreams in parallel across vendors, with shared documentation templates.
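The back-of-envelope behind that claim, using the ~6-weeks-per-model figure from above (the week counts are rough assumptions):

```python
# Capacity check under stated assumptions: 30 high-risk systems,
# ~6 weeks of documentation work each, ~44 weeks from Q4 2025 to Aug 2026.
import math

systems = 30
weeks_per_system = 6
weeks_available = 44  # Oct 2025 to Aug 2026, roughly

sequential = systems * weeks_per_system            # 180 weeks if done one by one
per_stream = weeks_available // weeks_per_system   # ~7 systems per workstream
streams = math.ceil(sequential / weeks_available)  # 5 parallel workstreams needed

print(f"{per_stream} systems per stream; {streams} parallel streams to hit the deadline")
```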
How to scope the work in a bank
- Inventory all AI systems. "AI" includes anything in scope of the Act's broad definition (Art. 3). Rule-based scorecards generally don't count; ML-based ones do.
- Classify each. Prohibited / High-risk / Limited / Minimal. Document the classification rationale.
- Map to Annex IV requirements. For each high-risk system, identify what documentation already exists vs. what needs to be created.
- Gap-fill in priority order. Models on the customer journey first, internal models second (ordering sketched after this list).
- Establish post-market monitoring. The most-missed obligation: Art. 72 requires active monitoring, not just incident response.
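As one way to operationalize steps 2 through 4, here is a sketch of an inventory record plus the gap-fill ordering; the field names and ranking are assumptions for illustration.

```python
# Inventory record and priority-ordering sketch. Fields are assumptions;
# the ordering implements "customer journey first, internal second".
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    tier: str               # "prohibited" | "high-risk" | "limited" | "minimal"
    rationale: str          # documented classification reasoning (step 2)
    customer_journey: bool  # touches a customer decision or interaction
    missing_artifacts: int  # Annex IV gaps identified in step 3

def gap_fill_order(inventory: list[AISystem]) -> list[AISystem]:
    tier_rank = {"prohibited": 0, "high-risk": 1, "limited": 2, "minimal": 3}
    return sorted(
        (s for s in inventory if s.missing_artifacts > 0),
        # False sorts before True, so customer-journey systems come first
        key=lambda s: (not s.customer_journey, tier_rank[s.tier]),
    )
```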
Where RegAI helps
RegAI ingests the AI Act Level 1 text plus the published delegated acts and harmonized standards (e.g., the AI Act-specific ISO 42001 mapping). For each system in your AI inventory, RegAI:
- Surfaces the applicable obligations based on the system's classification and use case.
- Maps your existing model documentation against Annex IV requirements, scoring coverage Full / Partial / Not covered (the scoring idea is sketched after this list).
- Drafts the missing artifacts (system description, intended purpose, accuracy declarations) modeled on the Annex IV structure.
- Produces a defensible compliance package per system, with citations back to the Act's exact paragraphs.
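To illustrate the coverage-scoring idea in the abstract (nothing here describes how RegAI actually implements it), a toy sketch using naive token overlap as a stand-in for real document matching; the section labels are abbreviated Annex IV headings and the thresholds are arbitrary:

```python
# Toy coverage scorer. Token overlap is a deliberately crude stand-in for
# semantic matching; thresholds and abbreviated section labels are assumptions.
ANNEX_IV_SECTIONS = [
    "general description of the AI system",
    "detailed description of the development process and design choices",
    "monitoring functioning and control of the system",
    "appropriateness of the performance metrics",
    "risk management system description",
    "relevant changes made through the lifecycle",
    "harmonized standards applied",
    "EU declaration of conformity",
    "post-market monitoring plan",
]

def overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def score(section: str, artifact_titles: list[str]) -> str:
    best = max((overlap(section, t) for t in artifact_titles), default=0.0)
    if best >= 0.5:
        return "Full"
    if best >= 0.2:
        return "Partial"
    return "Not covered"

existing = ["model development process description", "post-market monitoring plan v2"]
for section in ANNEX_IV_SECTIONS:
    print(f"{score(section, existing):<12} {section}")
```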
On a recent hyperscaler engagement, this cut model-readiness time from ~6 weeks per model to ~2 weeks. See the tech vertical →
Closing
The AI Act isn't going to settle. Codes of practice, harmonized standards, and ESA Q&As will keep evolving. Build readiness on a system that updates with the source text, not a Word doc that ages out the day it ships.
Book a 45-minute walkthrough on your own AI inventory and a sample model card →
