Framework · AI & technology

The EU AI Act, decoded for builders and deployers.

Regulation (EU) 2024/1689. Entered into force 1 August 2024. Phased application: prohibited practices from 2 February 2025; GPAI obligations from 2 August 2025; the bulk of high-risk obligations from 2 August 2026; high-risk AI systems embedded in Annex I products from 2 August 2027.

Who's in scope

The Regulation applies extraterritorially. A provider established outside the EU is in scope if its AI system or model is placed on the Union market or used inside the Union. The five regulated roles — provider, deployer, importer, distributor, authorised representative — each carry distinct obligations, and a single firm often takes on multiple roles for the same system across its lifecycle.

Deployers in financial services, insurance, hiring, education, and law enforcement carry obligations even when they did not build the model; if you fine-tune, substantially modify, or rebrand a system you generally inherit provider duties for the modified version.

The four risk tiers

  1. Unacceptable risk (Article 5) — eight prohibited practices, including subliminal or manipulative techniques causing harm, social scoring (by public and private actors alike), untargeted scraping for facial-recognition databases, and most real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes. Prohibited from 2 February 2025.
  2. High risk (Article 6, Annex III, Annex I) — eight Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) plus AI used as a safety component of products covered by Annex I sectoral legislation (medical devices, machinery, toys, vehicles, etc.). Article 6(3) carves out Annex III systems that pose no significant risk — for example, those performing a narrow procedural task or merely improving the result of a previously completed human activity.
  3. Limited risk (Article 50) — transparency duties for chatbots, emotion-recognition, biometric categorisation, deepfakes, and AI-generated text on matters of public interest.
  4. Minimal risk — everything else; no specific obligations under the AI Act.
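The four-tier triage above can be sketched as a simple helper. This is a simplification for illustration, not legal analysis: each boolean flag stands in for a full legal test, and the function and flag names are our own, not the Act's.

```python
from enum import Enum

class Tier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify(prohibited_practice: bool,
             annex_iii_category: bool,
             annex_i_safety_component: bool,
             art_6_3_carve_out: bool,
             transparency_trigger: bool) -> Tier:
    """Rough triage in the order the Act applies its tiers.

    Hypothetical flags standing in for the legal tests:
    - prohibited_practice: any Article 5 practice applies
    - annex_iii_category: the use case falls in an Annex III area
    - annex_i_safety_component: safety component of an Annex I product
    - art_6_3_carve_out: the Annex III derogation applies (e.g. a narrow
      procedural task); it does not cover Annex I safety components
    - transparency_trigger: an Article 50 use case (chatbot, deepfake, ...)
    """
    if prohibited_practice:
        return Tier.UNACCEPTABLE
    if annex_i_safety_component:
        return Tier.HIGH
    if annex_iii_category and not art_6_3_carve_out:
        return Tier.HIGH
    if transparency_trigger:
        return Tier.LIMITED
    return Tier.MINIMAL
```

Note that a system can sit in several tiers at once in practice — a high-risk system may also carry Article 50 transparency duties — so a real assessment records all applicable obligations, not just the highest tier.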

High-risk obligations (Articles 8–15)

Providers of high-risk systems must meet seven substantive requirements: a risk management system (Article 9), data and data-governance rules for training, validation, and test sets (Article 10), Annex IV technical documentation (Article 11), automatic event logging (Article 12), transparency and instructions for use (Article 13), effective human oversight (Article 14), and appropriate accuracy, robustness, and cybersecurity (Article 15). Article 8 frames all of them: compliance is assessed against the system's intended purpose and the generally acknowledged state of the art.
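These requirements lend themselves to per-system gap tracking. The article-to-requirement mapping below follows the Act (risk management, data governance, documentation, logging, transparency, human oversight, robustness); the tracker itself is a hypothetical sketch, not any particular tool's schema.

```python
# Article-to-requirement mapping per the AI Act; the tracker is illustrative.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9":  "risk management system",
    "Art. 10": "data and data governance",
    "Art. 11": "technical documentation (Annex IV)",
    "Art. 12": "record-keeping / automatic logging",
    "Art. 13": "transparency and instructions for use",
    "Art. 14": "human oversight",
    "Art. 15": "accuracy, robustness and cybersecurity",
}

def gap_report(evidence: dict[str, bool]) -> list[str]:
    """Return the obligations for which no supporting evidence is recorded."""
    return [f"{art}: {name}"
            for art, name in HIGH_RISK_OBLIGATIONS.items()
            if not evidence.get(art, False)]
```

For example, a system with evidence recorded only for Articles 9 and 14 yields a five-item gap list — a useful shape for feeding a remediation backlog.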

The GPAI track

Articles 51–55 introduce a separate regime for general-purpose AI models. Two tiers: standard GPAI (Article 53) — technical documentation, training-data transparency summary, EU copyright compliance — and GPAI with systemic risk (Article 55), triggered by the 10²⁵ FLOPs training-compute threshold or AI Office designation. Systemic-risk models add model evaluation, adversarial testing, serious-incident reporting, and cybersecurity protection.
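Whether a planned training run crosses the 10²⁵ FLOPs trigger can be estimated up front with the common 6·N·D approximation (roughly six FLOPs per parameter per training token). This heuristic is a community rule of thumb for dense transformers, not the Act's own counting method, and the example numbers are illustrative, not any specific model.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def estimated_training_flops(params: float, tokens: float) -> float:
    """Common ~6*N*D estimate of dense-transformer training compute."""
    return 6.0 * params * tokens

# Illustrative: 70B parameters trained on 15T tokens -> ~6.3e24 FLOPs,
# below the systemic-risk threshold under this approximation.
flops = estimated_training_flops(params=70e9, tokens=15e12)
print(flops >= SYSTEMIC_RISK_THRESHOLD)
```

Because the threshold is cumulative, fine-tuning and continued pre-training compute counts too — a run that starts below the line can end above it.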

The Code of Practice — published by the AI Office — is the practical interpretive layer most providers will follow until harmonised standards are in place.

Where Sia RegAI fits

Sia RegAI ingests the AI Act, the published Codes of Practice, the AI Office guidance, and any national supervisory positions you point it at. It runs the eight-step high-risk decision tree on a system-by-system basis, produces the Annex IV technical-documentation pack, and tracks gaps against the Article 17 quality-management system. Banks deploying AI for credit scoring or insurance pricing get the AI Act mapped alongside their existing prudential framework — the obligation tree is shared so you see overlaps, not silos.

The timeline you can't miss

  - 2 February 2025 — Article 5 prohibitions and AI-literacy duties apply.
  - 2 August 2025 — GPAI obligations, plus governance and penalties provisions.
  - 2 August 2026 — the bulk of high-risk obligations (Annex III systems).
  - 2 August 2027 — high-risk AI embedded in Annex I products.

Run the EU AI Act on your own AI portfolio.

A 45-minute walkthrough on a regulation or policy of your choosing. We bring the platform; you keep the output.