EU AI Act Risk Classification: Is Your AI System High-Risk?

The EU AI Act introduced a four-tier risk classification system that determines your compliance obligations. Get the classification wrong and you either over-engineer compliance for a low-risk tool or, more dangerously, under-engineer it for a high-risk one. This guide walks through the classification logic, the specific high-risk use cases named in the Act, and what each tier requires from you.

Most of the Act's obligations, including the high-risk regime, apply from August 2026; the prohibitions have applied since February 2025. If you are shipping AI in or to the EU and have not classified your system yet, this is urgent.

The Four Risk Tiers

Prohibited AI (Article 5)

These systems are banned outright. No compliance path exists — you simply cannot deploy them in the EU. The prohibited list includes:

  • Subliminal manipulation that bypasses conscious awareness
  • Exploitation of vulnerabilities of specific groups (children, people with disabilities)
  • Social scoring that leads to detrimental or unjustified treatment (by public or private actors)
  • Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
  • Emotion recognition in workplaces and educational institutions (with narrow exceptions)
  • AI used to infer sensitive characteristics (race, religion, political views) from biometric data

If your system touches any of these, stop. There is no compliance workaround.

High-Risk AI (Article 6 + Annexes I and III)

This is the tier with the heaviest compliance burden. High-risk systems require conformity assessment, registration in the EU database, ongoing monitoring, and in many cases a designated EU-based representative. The Act names high-risk systems in two ways: by product category (Annex I) and by application area (Annex III).

Limited-Risk AI (Article 50)

Limited-risk systems have specific transparency obligations. Chatbots must disclose they are AI. Deepfake content must be labelled. Emotion recognition systems must inform users. The obligations are targeted and manageable.
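
In practice, the chatbot duty can be as simple as wrapping every response with a disclosure and labelling synthetic content. A minimal sketch; the function, field names, and wording here are illustrative assumptions, not prescribed by the Act:

```typescript
// Illustrative transparency wrapper: disclose the AI interaction and
// label synthetic content. All names and wording are assumptions.
interface ChatReply {
  text: string;
  isSynthetic: boolean; // e.g. the reply includes AI-generated media
}

function applyTransparency(reply: ChatReply): string {
  const disclosure = 'You are interacting with an AI system. ';
  const label = reply.isSynthetic ? ' [AI-generated content]' : '';
  return disclosure + reply.text + label;
}
```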

Minimal-Risk AI

Spam filters, AI-powered video games, inventory optimization — these carry no mandatory obligations under the Act, though providers are encouraged to follow voluntary codes of conduct.

The High-Risk Use Case List (Annex III)

This is where most enterprise AI systems need to look carefully. Annex III names eight application areas that are high-risk by default, regardless of the specific technology used (subject only to the narrow Article 6(3) exemption covered in the decision tree below):

1. Biometric Identification and Categorisation

Systems that identify or categorise natural persons based on biometric data, or that infer sensitive attributes from biometrics. Includes real-time and post-hoc identification.

2. Critical Infrastructure

AI used as a safety component in the management and operation of critical infrastructure, including road traffic, water supply, gas, heating, and electricity. If your agent serves as a safety component that monitors or controls such infrastructure, it is high-risk.

3. Education and Vocational Training

Systems that determine access to education, assess students, or evaluate learning outcomes. AI-powered admissions scoring, exam proctoring, and learning analytics that affect progression all fall here.

4. Employment, Worker Management, and Access to Self-Employment

This is the widest-reaching category for enterprise AI. It covers CV screening, job interview analysis, task allocation and monitoring, and performance evaluation. If your HR system uses AI to rank, filter, or score candidates or employees, it is high-risk.

5. Access to and Enjoyment of Essential Private Services and Benefits

This is the critical fintech category. It explicitly covers AI used in creditworthiness assessment, credit scoring, and life and health insurance risk assessment. If your AI agent makes or influences a credit, lending, or insurance decision, it is high-risk under the Act.

6. Law Enforcement

Polygraph systems, risk assessment tools, crime analytics, and any AI used in criminal investigations. Not typically relevant to fintech builders, but critical for govtech.

7. Migration, Asylum, and Border Control

Risk assessment of applicants, document verification, examination tools for migration authorities.

8. Administration of Justice and Democratic Processes

AI assisting courts, influencing elections, or administering referendums.

The Compliance Obligations for High-Risk AI

If your system is high-risk, you must satisfy all of the following before placing it on the EU market:

  • Risk management system — A documented, ongoing process for identifying and mitigating risks throughout the system lifecycle (Article 9)
  • Data governance — Training, validation, and test datasets must meet relevance, representativeness, and bias mitigation requirements (Article 10)
  • Technical documentation — Comprehensive documentation of the system's design, development, and operation (Article 11)
  • Automatic logging — Logs sufficient to ensure traceability of outputs and detection of risks (Article 12)
  • Transparency to deployers — Documentation enabling deployers to understand the system's intended purpose and limitations (Article 13)
  • Human oversight — Technical measures ensuring effective human oversight, including the ability to halt the system (Article 14)
  • Accuracy, robustness, cybersecurity — Measurable performance standards and resilience to adversarial inputs (Article 15)
  • Conformity assessment — Self-assessment (internal control) for most Annex III categories; assessment by a notified third party for biometric systems where harmonised standards are not fully applied
  • Registration — Registration in the EU's public AI database before deployment
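
The logging obligation is the one that translates most directly into engineering work. Below is a minimal sketch of what a traceable Article 12-style event record might contain; the Act requires traceability but leaves the concrete schema to the provider, so every field name here is an illustrative assumption:

```typescript
import { randomUUID } from 'node:crypto';

// Illustrative event record for Article 12-style traceability.
// The schema is an assumption, not a mandated format.
interface HighRiskEventLog {
  eventId: string;        // unique identifier for traceability
  timestamp: string;      // ISO 8601 time of the invocation
  systemVersion: string;  // which model/version produced the output
  inputReference: string; // pointer to the input data, not the raw data
  outputSummary: string;  // what the system decided or recommended
  humanOverseer?: string; // who reviewed the output, if anyone
}

function logEvent(
  e: Omit<HighRiskEventLog, 'eventId' | 'timestamp'>
): HighRiskEventLog {
  return {
    eventId: randomUUID(),
    timestamp: new Date().toISOString(),
    ...e,
  };
}
```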

How to Classify Your System: A Decision Tree

Walk through these questions in order:

  1. Does my system match any item in the Prohibited AI list? If yes: do not deploy.
  2. Is my system a product governed by Annex I (machinery, toys, medical devices, aviation, etc.)? If yes and AI is a safety component: high-risk.
  3. Does my system's application match any of the eight Annex III categories? If yes: high-risk (unless the Article 6(3) exemption applies — narrow-purpose tools that do not influence substantive decisions may qualify).
  4. Does my system interact with users who may not know they are talking to AI? If yes: limited-risk transparency obligations apply.
  5. None of the above? Minimal-risk. Voluntary codes of conduct apply.
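
The walk-through above can be sketched as a triage function. This is illustrative only: the type names and boolean shortcuts are assumptions, and real classification needs legal review of the actual statutory tests:

```typescript
// Illustrative triage helper for the five-question walk-through above.
// Not legal advice: the real tests in the Act are far more detailed.

type RiskClass = 'prohibited' | 'high-risk' | 'limited-risk' | 'minimal-risk';

interface SystemProfile {
  prohibitedPractice: boolean;              // matches an Article 5 practice
  regulatedProductSafetyComponent: boolean; // AI safety component in a regulated product
  annexIIICategory: string | null;          // e.g. 'employment', 'creditworthiness'
  article6_3Exempt: boolean;                // narrow-purpose exemption applies
  undisclosedAIInteraction: boolean;        // users may not know they are talking to AI
}

function classify(s: SystemProfile): RiskClass {
  if (s.prohibitedPractice) return 'prohibited';                              // Q1: do not deploy
  if (s.regulatedProductSafetyComponent) return 'high-risk';                  // Q2
  if (s.annexIIICategory !== null && !s.article6_3Exempt) return 'high-risk'; // Q3
  if (s.undisclosedAIInteraction) return 'limited-risk';                      // Q4
  return 'minimal-risk';                                                      // Q5
}

// Example: a credit-scoring agent falls under Annex III category 5
const creditAgent: SystemProfile = {
  prohibitedPractice: false,
  regulatedProductSafetyComponent: false,
  annexIIICategory: 'creditworthiness',
  article6_3Exempt: false,
  undisclosedAIInteraction: true,
};
console.log(classify(creditAgent)); // 'high-risk'
```

Note that the ordering only resolves the primary tier: a high-risk system that interacts with users also carries the limited-risk transparency duties on top of its high-risk obligations.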

Using AgentGate for High-Risk Compliance

High-risk AI compliance requires automatic logging that satisfies Article 12. AgentGate's audit module generates compliant event logs automatically for every agent invocation:

// Configure AgentGate for EU AI Act Article 12 logging
const gate = new AgentGate({
  apiKey: process.env.AGENTGATE_API_KEY,
  compliance: {
    eu_ai_act: {
      enabled: true,
      risk_class: 'high',
      annex_iii_category: 'creditworthiness_assessment',
      log_retention_years: 10
    }
  }
});

// All subsequent agent calls are automatically logged to the required standard
const result = await gate.agent.invoke({
  agent_id: 'credit-scoring-v3',
  input: applicantData,
  subject_id: applicant.pseudonymous_id
});

AgentGate also generates the technical documentation artifacts required under Article 11, including model cards, performance benchmarks, and data governance summaries. See the EU AI Act compliance module for the full feature set.

The Cost of Getting Classification Wrong

Penalties under the EU AI Act scale with violation severity. Deploying a prohibited AI system: up to 35 million EUR or 7% of global annual turnover, whichever is higher. Violating high-risk obligations: up to 15 million EUR or 3% of global annual turnover. Supplying incorrect information to authorities: up to 7.5 million EUR or 1% of global annual turnover. These numbers are not hypothetical: the first enforcement actions under the Act began in Q1 2026.
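
The ceilings combine a fixed amount with a turnover percentage, and for large undertakings the applicable maximum is whichever is higher. A quick sketch of the arithmetic:

```typescript
// For undertakings, the maximum fine is the higher of the fixed amount
// and the turnover percentage (for SMEs the Act takes the lower of the two).
function maxFine(fixedEur: number, pct: number, turnoverEur: number): number {
  return Math.max(fixedEur, pct * turnoverEur);
}

// A prohibited-practice violation at 1 billion EUR global turnover:
console.log(maxFine(35_000_000, 0.07, 1_000_000_000)); // 70000000 (7% exceeds the 35M floor)
```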

Know your risk class. Build compliant from day one.

AgentGate automates the logging, documentation, and monitoring requirements for high-risk AI systems under the EU AI Act. Start your free trial and classify your system in minutes.

Start free | EU AI Act module docs | See pricing