The EU AI Act, decoded for builders and deployers.
Regulation (EU) 2024/1689. Entered into force 1 August 2024. Phased application: prohibited practices from 2 February 2025; GPAI obligations from 2 August 2025; the bulk of high-risk obligations from 2 August 2026; high-risk AI systems embedded in Annex I products from 2 August 2027.
Who's in scope
The Regulation applies extraterritorially. A provider established outside the EU is in scope if its AI system or model is placed on the Union market or used inside the Union. The five regulated roles — provider, deployer, importer, distributor, authorised representative — each carry distinct obligations, and a single firm often takes on multiple roles for the same system across its lifecycle.
Deployers in financial services, insurance, hiring, education, and law enforcement carry obligations even when they did not build the model; if you fine-tune, substantially modify, or rebrand a system you generally inherit provider duties for the modified version.
The four risk tiers
- Unacceptable risk (Article 5) — eight prohibited practices, including subliminal or manipulative techniques causing significant harm, social scoring (by public and private actors alike), untargeted scraping of facial images to build facial-recognition databases, and most real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes. Prohibited from 2 February 2025.
- High risk (Article 6, Annex III, Annex I) — eight Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) plus AI used as a safety component of products covered by Annex I sectoral legislation (medical devices, machinery, toys, vehicles, etc.). Article 6(3) carves out an exception when the system performs a narrow procedural task or only improves human review.
- Limited risk (Article 50) — transparency duties for chatbots, emotion-recognition, biometric categorisation, deepfakes, and AI-generated text on matters of public interest.
- Minimal risk — everything else; no specific obligations under the AI Act.
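The four tiers above resolve in a fixed order of precedence: a prohibition trumps everything, the Annex III / Annex I high-risk tests come next (with the Article 6(3) carve-out applying only to Annex III systems), and transparency duties only matter if nothing above them fired. A minimal first-pass triage might look like this; the boolean inputs are this sketch's own simplification, and any real classification needs legal review of each condition.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def triage(prohibited_practice: bool,
           annex_iii_category: bool,
           annex_i_safety_component: bool,
           narrow_procedural_task: bool,
           transparency_trigger: bool) -> RiskTier:
    """Coarse first-pass triage mirroring the tier precedence above.

    Note: the Article 6(3) carve-out (narrow procedural task / merely
    improving human review) only softens the Annex III route, not the
    Annex I safety-component route.
    """
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if (annex_iii_category and not narrow_procedural_task) \
            or annex_i_safety_component:
        return RiskTier.HIGH
    if transparency_trigger:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

For example, a CV-screening tool (Annex III, employment) with no carve-out triages as HIGH even if it also has a chatbot front end that would otherwise land it in LIMITED.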
High-risk obligations (Articles 8–15, plus related provider duties)
- Risk management system maintained across the lifecycle.
- Data and data governance: training, validation, and testing data subject to quality criteria.
- Technical documentation aligned with Annex IV.
- Record-keeping (logs, automatically generated where appropriate).
- Transparency to deployers, including instructions for use.
- Human oversight measures designed into the system.
- Accuracy, robustness, cybersecurity.
- Quality management system (Article 17), conformity assessment (Article 43), CE marking, registration in the EU database (Article 49), post-market monitoring (Article 72), serious-incident reporting.
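In practice, compliance teams track the list above as an evidence checklist keyed by article. A minimal gap tracker could be sketched as follows; the article-to-control mapping follows the obligations listed above, while the `evidence` dictionary and its status convention are this sketch's own assumptions.

```python
# Controls keyed by article, per the high-risk obligations listed above.
HIGH_RISK_CONTROLS = {
    "Art. 9": "risk management system",
    "Art. 10": "data and data governance",
    "Art. 11": "technical documentation (Annex IV)",
    "Art. 12": "record-keeping / logging",
    "Art. 13": "transparency and instructions for use",
    "Art. 14": "human oversight",
    "Art. 15": "accuracy, robustness, cybersecurity",
    "Art. 17": "quality management system",
    "Art. 43": "conformity assessment",
    "Art. 49": "EU database registration",
    "Art. 72": "post-market monitoring",
}

def open_gaps(evidence: dict[str, bool]) -> list[str]:
    """Return every control with no supporting evidence recorded yet."""
    return [f"{art}: {name}"
            for art, name in HIGH_RISK_CONTROLS.items()
            if not evidence.get(art, False)]
```

A system with only its risk management system documented, for instance, still shows ten open gaps spanning data governance through post-market monitoring.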
The GPAI track
Articles 51–55 introduce a separate regime for general-purpose AI models. Two tiers: standard GPAI (Article 53) — technical documentation, training-data transparency summary, EU copyright compliance — and GPAI with systemic risk (Article 55), triggered by the 10²⁵ FLOPs training-compute threshold or AI Office designation. Systemic-risk models add model evaluation, adversarial testing, serious-incident reporting, and cybersecurity protection.
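The systemic-risk trigger is therefore disjunctive: cross the compute threshold or be designated by the AI Office. A sketch of that test, under the assumption that cumulative training compute is the only quantitative input (in reality the presumption is rebuttable and designation can rest on other criteria):

```python
# Presumption threshold from Article 51: 10**25 floating-point
# operations of cumulative training compute.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float,
                           ai_office_designated: bool = False) -> bool:
    """True if the model falls under the Article 55 systemic-risk tier.

    Either leg suffices: the compute presumption or AI Office designation.
    """
    return ai_office_designated or training_flops >= SYSTEMIC_RISK_FLOPS
```

So a model trained with 3 × 10²⁵ FLOPs is presumed systemic-risk on compute alone, while a smaller model can still be pulled into Article 55 by designation.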
The Code of Practice — drawn up by independent experts in a process facilitated by the AI Office — is the practical interpretive layer most GPAI providers will follow until harmonised standards are in place.
Where Sia RegAI fits
Sia RegAI ingests the AI Act, the published Codes of Practice, the AI Office guidance, and any national supervisory positions you point it at. It runs the eight-step high-risk decision tree on a system-by-system basis, produces the Annex IV technical-documentation pack, and tracks gaps against the Article 17 quality-management system. Banks deploying AI for credit scoring or insurance pricing get the AI Act mapped alongside their existing prudential framework — the obligation tree is shared so you see overlaps, not silos.
The timeline you can't miss
- 2 February 2025 — prohibited-practice articles in force; AI literacy obligation (Article 4) applies to providers and deployers.
- 2 August 2025 — GPAI obligations (Articles 51–55), penalties regime, governance bodies operational. National competent authorities designated.
- 2 August 2026 — bulk of high-risk obligations apply. Conformity assessment infrastructure expected to be in place.
- 2 August 2027 — Annex I product-embedded high-risk systems (medical devices, machinery, vehicles).
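The phased dates above lend themselves to a simple applicability check: given a date, which milestones are already in application? A sketch using only the timeline stated above (the milestone labels are abbreviations, not official names):

```python
from datetime import date

# Application dates from the Act's phased timeline above.
MILESTONES = [
    (date(2025, 2, 2), "prohibited practices; AI literacy (Art. 4)"),
    (date(2025, 8, 2), "GPAI obligations; penalties; governance bodies"),
    (date(2026, 8, 2), "bulk of high-risk obligations"),
    (date(2027, 8, 2), "Annex I product-embedded high-risk systems"),
]

def applicable_on(today: date) -> list[str]:
    """Milestones already in application on the given date."""
    return [label for start, label in MILESTONES if today >= start]
```

Running this for 1 September 2025, for example, returns the first two milestones: prohibitions and the GPAI regime apply, but the bulk of high-risk obligations do not yet.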
Related guides
- EU AI Act high-risk classification — a decision tree for AI builders
- EU AI Act compliance for banks — obligations decoded
- ISO 42001 vs NIST AI RMF — which AI governance framework should you adopt?
- NIST AI RMF for tech — from Govern to Measure in 30 days