EU AI Act high-risk classification — a decision tree for AI builders.
"Is our AI system high-risk under the EU AI Act?" is the question every AI vendor and large enterprise wants a five-minute answer to. The Act is structured in a way that almost lets you give one — if you read it in the right order. Here is the decision tree we run with clients, with the gotchas regulators have already flagged.
Companion piece to our EU AI Act for banks guide. That one covers banking-specific obligations. This one is for any AI builder (vendor, deployer, importer, or distributor) trying to classify their systems before the 2 August 2026 compliance milestone.
The classification matters more than the answer
Before the decision tree, the meta-point: a defensible classification with a documented rationale is better than the "right" answer with no paper trail. Regulators don't grade classifications in isolation; they grade your reasoning. A system you classified as not-high-risk with three pages of rationale is much safer than the same system classified as not-high-risk with a one-line memo. The tree below produces classifications and forces the rationale at each step.
Step 1 — Is it AI under the Act's definition?
Article 3(1) defines an AI system as "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments."
Practical filter:
- Yes, AI: ML models, LLMs, computer vision, NLP, recommender systems, decision-support systems with learned components.
- Probably not: rules-based systems where every output is deterministically computed by hand-written code (a tax calculator, a static scorecard, a rules engine).
- Edge cases that lawyers argue about: heuristic search, optimization solvers, statistical models that aren't ML in any modern sense. The Commission has signaled a broad reading; assume in-scope unless your counsel disagrees with rationale.
Step 2 — Is it prohibited under Article 5?
If yes, stop. The system can't be placed on the EU market or used by EU operators. Prohibited practices include:
- Subliminal or purposefully manipulative techniques causing significant harm.
- Exploiting vulnerabilities (age, disability, social or economic situation) causing significant harm.
- Social scoring (by public or private actors) where the score leads to detrimental or unfavourable treatment in social contexts unrelated to where the data was generated, or treatment that is disproportionate to the behaviour.
- Predicting criminal offences based purely on profiling.
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
- Emotion recognition in workplace and education (with narrow medical/safety exceptions).
- Biometric categorisation inferring sensitive attributes (with narrow law-enforcement exceptions).
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions).
For most commercial AI, Article 5 isn't a concern. For AI products in employment, education, public services, security, or content moderation — read it carefully.
Step 3 — Is it Annex III high-risk?
Annex III lists eight high-risk use-case areas. If your AI system falls into one, it's high-risk by classification (with a narrow Article 6(3) exception we'll cover next).
The eight areas, with the most-litigated examples for tech companies:
- Biometrics — remote biometric identification, biometric categorisation, emotion recognition.
- Critical infrastructure — safety components in road traffic, water, gas, heating, electricity.
- Education and vocational training — admissions, evaluation of learning outcomes, monitoring during tests, allocation to programs.
- Employment, workers' management, access to self-employment — recruitment, targeted job advertisements, candidate filtering, performance evaluation, promotion / termination decisions, work allocation, monitoring.
- Access to essential private and public services and benefits — public-benefit eligibility, credit-scoring of natural persons, life and health insurance risk assessment / pricing, emergency services dispatch / triage.
- Law enforcement — risk assessment for individuals, polygraph and similar, evidence reliability, predictive policing in narrow conditions, profiling.
- Migration, asylum, and border control management — polygraph and similar, risk assessment of migration / security, applications for asylum / visas / residence, detection / recognition / identification.
- Administration of justice and democratic processes — judicial-decision assistance, alternative dispute resolution, influencing election outcomes through targeted information.
For tech-company AI, the most common landings are area 4 (employment), area 5 (essential services / credit / insurance), and area 1 (biometrics) where the use goes beyond one-to-one identity verification, which is expressly carved out of the remote-identification entry.
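If you keep a system inventory, the eight areas reduce to a small lookup table that each classification memo can point at. A minimal Python sketch, with area names abbreviated; the mapping from your own use cases to areas is the judgment call legal has to own, and the stub mapping below is purely illustrative:

```python
from __future__ import annotations

# Illustrative lookup of the eight Annex III areas for an internal inventory tool.
# Area numbers follow Annex III of Regulation (EU) 2024/1689; names are abbreviated.
ANNEX_III_AREAS: dict[int, str] = {
    1: "Biometrics (remote identification, categorisation, emotion recognition)",
    2: "Critical infrastructure (safety components)",
    3: "Education and vocational training",
    4: "Employment, workers' management, access to self-employment",
    5: "Access to essential private and public services and benefits",
    6: "Law enforcement",
    7: "Migration, asylum and border control management",
    8: "Administration of justice and democratic processes",
}

# Your use-case -> area mapping is the part legal review must own; this is a stub.
USE_CASE_TO_AREA: dict[str, int] = {
    "cv-screening": 4,
    "consumer-credit-scoring": 5,
}

def annex_iii_area(use_case: str) -> int | None:
    """Return the Annex III area number for a known use case, or None if unmapped."""
    return USE_CASE_TO_AREA.get(use_case)
```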
Step 4 — Does Article 6(3) apply?
Article 6(3) carves out an exception: even if a system falls in an Annex III area, it's NOT high-risk if it does not pose a "significant risk of harm to the health, safety, or fundamental rights of natural persons." The exception applies in four cases:
- (a) The system is intended to perform a narrow procedural task.
- (b) The system is intended to improve the result of a previously completed human activity.
- (c) The system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review.
- (d) The system is intended to perform a preparatory task to an assessment relevant for the use cases listed in Annex III.
The 6(3) exception sounds generous. In practice it's narrow: Article 6(4) requires a provider relying on it to document the assessment before the system is placed on the market, to register the system under Article 49(2), and to hand the documentation to national competent authorities on request. And if the system performs profiling of natural persons, the exception doesn't apply, period.
The right way to use 6(3): only when you can write a one-page memo justifying it, and when you would happily produce that memo to a regulator. If you can't, treat the system as high-risk.
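If you track these decisions in tooling, the gate is easy to encode. A minimal sketch, assuming you capture the four criteria, the profiling question, and the memo status as per-system flags (our framing for triage tooling, not statutory wording):

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Article63Assessment:
    narrow_procedural_task: bool              # criterion (a)
    improves_prior_human_activity: bool       # criterion (b)
    detects_patterns_without_replacing: bool  # criterion (c)
    preparatory_task_only: bool               # criterion (d)
    performs_profiling: bool                  # profiling of natural persons
    memo_signed_off: bool                     # the one-page rationale exists and is signed

def exception_applies(a: Article63Assessment) -> bool:
    if a.performs_profiling:
        return False  # profiling of natural persons always stays high-risk
    if not a.memo_signed_off:
        return False  # house rule from this guide: no memo, no exception claim
    return any([
        a.narrow_procedural_task,
        a.improves_prior_human_activity,
        a.detects_patterns_without_replacing,
        a.preparatory_task_only,
    ])
```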
Step 5 — Annex I product-safety integration
Separately from Annex III, an AI system is also high-risk if it is a safety component of a product covered by the EU product-safety legislation listed in Annex I (or is itself such a product) AND that product or system is required to undergo a third-party conformity assessment under that legislation. Both conditions must hold.
Annex I covers a wide range: machinery, toys, recreational craft, lifts, pressure equipment, radio equipment, in-vitro diagnostic medical devices, civil aviation, motor vehicles, marine equipment, rail, agricultural and forestry vehicles, etc. If you build AI for these domains, you're in. The AI Act sits on top of the existing product-safety regime.
Step 6 — General-Purpose AI (GPAI) — separate track
If your system is a general-purpose AI model (LLM, foundation model, multimodal model), Chapter V (Articles 51-56) applies separately from the high-risk track. Two tiers:
- All GPAI providers — technical documentation, downstream provider information, copyright policy, training-data summary.
- GPAI with systemic risk — additional obligations: model evaluation, adversarial testing, serious incident tracking, cybersecurity for the model and physical infrastructure. Triggered when the model exceeds 10²⁵ FLOPs of training compute or is otherwise designated by the Commission.
If you build on top of a GPAI model rather than provide one, you take on obligations for the resulting AI system (as its provider if you place it on the market, or as a deployer if you only use it), but you don't pick up the GPAI-model obligations. Those stay with the model provider.
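The systemic-risk trigger is one of the few bright-line numbers in the Act, which makes it easy to encode. A minimal sketch, with made-up compute figures for illustration:

```python
# Systemic-risk presumption for GPAI models: cumulative training compute above
# 1e25 FLOPs, or designation by the Commission. Example figures are invented.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float, commission_designated: bool = False) -> bool:
    return commission_designated or training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3.2e25))  # True: crosses the compute presumption
print(presumed_systemic_risk(8.0e23))  # False, unless the Commission designates the model
```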
Step 7 — Limited-risk transparency (Article 50)
If the system isn't prohibited and isn't high-risk, but interacts with humans, generates content, or performs emotion recognition / biometric categorisation in any non-prohibited context, transparency obligations apply:
- Disclose to users that they are interacting with an AI.
- Disclose AI-generated or manipulated content (deepfakes, generated text in public-interest contexts).
- Disclose emotion recognition / biometric categorisation when applicable.
These are usually UI/UX changes plus a privacy-notice update, not a heavy compliance program.
Step 8 — Minimal risk (everything else)
If you've fallen through to Step 8, the AI Act imposes no specific obligations on the system; it only encourages voluntary codes of conduct (Article 95). Internal AI governance is still a good idea; see our NIST AI RMF for tech guide for a 30-day onboarding plan.
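Run end to end, the steps collapse into a small triage function. A minimal sketch of the tree in Python; the booleans are our own framing for an inventory tool, each branch's real test is the article cited in the matching step above, and the GPAI track (Step 6) is deliberately omitted because it runs in parallel:

```python
from __future__ import annotations
from enum import Enum

class Classification(Enum):
    OUT_OF_SCOPE = "not an AI system (Art. 3(1))"
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK_ANNEX_III = "high-risk use case (Art. 6(2) / Annex III)"
    HIGH_RISK_ANNEX_I = "high-risk product-safety integration (Art. 6(1) / Annex I)"
    LIMITED = "transparency obligations only (Art. 50)"
    MINIMAL = "minimal risk (voluntary codes, Art. 95)"

def classify(is_ai: bool, prohibited: bool, annex_iii_area: int | None,
             art_6_3_exception: bool, annex_i_safety_component: bool,
             transparency_triggered: bool) -> Classification:
    if not is_ai:                                              # Step 1
        return Classification.OUT_OF_SCOPE
    if prohibited:                                             # Step 2
        return Classification.PROHIBITED
    if annex_iii_area is not None and not art_6_3_exception:  # Steps 3-4
        return Classification.HIGH_RISK_ANNEX_III
    if annex_i_safety_component:                               # Step 5: check even if Annex III misses
        return Classification.HIGH_RISK_ANNEX_I
    if transparency_triggered:                                 # Step 7
        return Classification.LIMITED
    return Classification.MINIMAL                              # Step 8
```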
What to do with the classification
For each AI system in scope, produce a one-page classification memo:
- System name, intended purpose, intended users.
- Step-by-step rationale through the tree above (with citation to specific articles).
- Final classification: prohibited / high-risk Annex III / high-risk Annex I / GPAI / limited / minimal.
- If high-risk under Annex III, the specific point (e.g., point 4(a) recruitment, point 5(b) credit scoring).
- If 6(3) exception applied, the rationale per criterion plus confirmation that no profiling occurs.
- Date, classifier (named human), reviewer (named human), legal sign-off.
This memo is the artifact you hand to internal audit, an EU-AI-Act notified body (for high-risk), or a regulator. It's also the seed document RegAI's agent uses to scaffold the downstream Annex IV documentation if the system lands as high-risk.
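If you want the same memo to double as machine-readable input for registers, audits, and document generation, a structured record works. A sketch with illustrative field names; nothing here is mandated by the Act:

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationMemo:
    system_name: str
    intended_purpose: str
    intended_users: list[str]
    rationale_by_step: dict[str, str]                # step -> reasoning, with article citations
    classification: str                              # prohibited / high-risk Annex III / ...
    annex_iii_point: str | None = None               # e.g. "point 5(b)" for credit scoring
    art_6_3_rationale: dict[str, str] | None = None  # per-criterion rationale, if claimed
    profiling_confirmed_absent: bool = False         # must be True if the 6(3) exception is claimed
    classified_by: str = ""                          # named human
    reviewed_by: str = ""                            # named human
    legal_sign_off: str = ""
    classified_on: date = field(default_factory=date.today)
```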
Where RegAI helps
RegAI ingests the AI Act, the published delegated and implementing acts, the AI Office's Q&As, and the harmonised standards (e.g., the AI Act-aligned ISO 42001 mapping). For each AI system you submit:
- Walks the decision tree with you, citing specific articles for each branch.
- Produces the classification memo with the rationale already drafted.
- If the result is high-risk, scaffolds the Annex IV technical documentation, the risk-management system (Article 9), the data governance documentation (Article 10), and the post-market monitoring plan (Article 72).
- Cross-maps each obligation to NIST AI RMF and ISO 42001 so the same documentation effort serves both regimes.
For an AI vendor with 30 in-scope systems, that's the difference between a one-FTE-year compliance program and a one-quarter sprint.
Common pitfalls
- Stopping at Annex III without checking Annex I. Product-safety legislation captures a lot of AI that builders don't think of as "high-risk."
- Over-using 6(3). The exception is narrow. If the system involves profiling natural persons, don't claim it.
- Forgetting deployer obligations. If you deploy someone else's high-risk AI, you have your own Article 26 obligations (use logs, monitoring, human oversight). Classification of the system as high-risk doesn't off-ramp you.
- Treating GPAI as separate from product AI. If you build a product on top of a GPAI, both tracks apply: GPAI to the underlying model (handled by the provider), product to your specific use of it.
- Single-shot classification. An AI system's intended purpose can change. Re-classify on material changes; document the trigger.
Closing
The EU AI Act looks intimidating because it's long and the penalties are real. The classification framework underneath, run as a tree, is tractable. The mistake most AI builders make is treating it as a single legal question; the win is treating it as a triage workflow that produces defensible memos for each system.
Get the tree right, document the rationale, repeat for every system. The compliance program flows from the classifications.
