The EU AI Act enters full enforcement on August 2, 2026. If you're building or deploying AI agents in the European Union — or serving EU customers — your systems must demonstrate compliance with the world's first comprehensive AI regulation.
This checklist distills the 144-page regulation into actionable steps for engineering and compliance teams. Each item maps to specific articles so your auditors can trace requirements to controls.
Who Needs This Checklist?
You need this checklist if your organization:
- Deploys AI agents that interact with EU citizens or residents
- Uses AI to make or recommend decisions in regulated sectors (finance, healthcare, insurance, HR)
- Develops AI systems that are placed on the EU market, regardless of where you're headquartered
- Operates AI agents that process personal data of EU data subjects
The regulation applies to providers (who build AI systems), deployers (who use them), and importers/distributors equally. There is no safe harbor for "just using" someone else's model.
Step 1: Classify Your AI System's Risk Level
The EU AI Act uses a four-tier risk framework. Your compliance obligations depend entirely on which tier your AI agent falls into.
Unacceptable Risk (Banned)
Articles 5(1)(a)–(h). These AI systems are prohibited outright:
- Social scoring systems by public authorities
- Real-time remote biometric identification in public spaces (with narrow exceptions)
- Emotion recognition in workplaces and education
- Subliminal manipulation or exploitation of vulnerabilities
- Predictive policing based solely on profiling
Action: If your agent does any of the above, stop. No compliance checklist fixes a banned system.
High Risk
Articles 6–7, Annex III. Most enterprise AI agents in regulated sectors fall here:
- Credit scoring and lending decisions
- Insurance pricing and claims assessment
- Recruitment screening and HR decisions
- Access to essential public/private services
- Law enforcement and border control
- Critical infrastructure management
Action: If your agent influences decisions in any of these domains, treat it as high-risk. When in doubt, classify up — the penalties for misclassification are severe.
Limited Risk
Article 50. Systems with transparency obligations:
- Chatbots and conversational AI (must disclose AI interaction)
- Deepfake or synthetic content generators (must label outputs)
- Emotion recognition systems (where not banned)
Minimal Risk
Everything else — spam filters, recommendation engines for non-critical applications, internal productivity tools. Voluntary codes of conduct apply but no mandatory requirements.
Step 2: Implement Transparency Requirements
Transparency applies to all risk tiers above minimal. For high-risk systems, the requirements are extensive.
For All AI Agents (Article 50)
- Users must be informed they are interacting with AI
- AI-generated content must be machine-detectable as such
- Deepfakes must be labeled
For High-Risk AI Agents (Articles 13, 86)
- Provide clear documentation of system capabilities and limitations
- Document the intended purpose with sufficient detail for deployers
- Specify the level of accuracy, robustness, and cybersecurity
- Describe training data characteristics (without revealing trade secrets)
- Explain the logic of automated decision-making in plain language
- Provide instructions for human oversight mechanisms
Step 3: Establish Human Oversight
Article 14. High-risk AI systems must be designed to allow effective human oversight. This is not a checkbox exercise — regulators will test whether your oversight is genuine.
- Designate specific individuals responsible for overseeing each AI agent
- Implement a kill switch — humans must be able to stop the system at any time
- Ensure humans can override or reverse AI decisions
- Design interfaces that make AI outputs interpretable to oversight personnel
- Establish escalation procedures when AI confidence is below threshold
- Log all human overrides with rationale
- Train oversight personnel on system capabilities and failure modes
Critical: "Human-in-the-loop" means a human who understands the output and has the authority and ability to intervene. A rubber-stamp approval process does not satisfy this requirement.
Step 4: Meet Technical Requirements
Articles 9–15. High-risk AI systems must meet specific technical standards.
Data Governance (Article 10)
- Training data must be relevant, representative, and free of errors
- Document data collection, annotation, and preparation processes
- Identify and address potential biases in training datasets
- Maintain data governance policies for ongoing monitoring
Accuracy and Robustness (Article 15)
- Define and document accuracy metrics appropriate to the use case
- Test for adversarial attacks and prompt injection
- Validate outputs against known-correct baselines
- Implement fallback mechanisms for low-confidence scenarios
- Document system behavior under edge cases and failure modes
Cybersecurity (Article 15)
- Protect against data poisoning and model manipulation
- Implement access controls for model weights and training data
- Log all access to AI system components
- Encrypt data at rest and in transit
Record-Keeping (Article 12)
- Maintain automatic logs of all AI system operations
- Logs must enable traceability of system decisions
- Retain logs for the period required by applicable law
- Ensure logs are tamper-evident (SHA-256 hash chains recommended)
Step 5: Prepare for Conformity Assessment
Articles 43–44. High-risk AI systems must undergo a conformity assessment before being placed on the market or put into service.
- Compile technical documentation (Article 11)
- Establish a quality management system (Article 17)
- Conduct internal conformity assessment or engage a notified body
- Register the AI system in the EU database (Article 71)
- Affix CE marking (Article 48)
- Issue an EU declaration of conformity (Article 47)
For most enterprise AI agents, an internal conformity assessment is sufficient (Annex VI). You don't need an external audit unless your system is used for biometric identification or critical infrastructure.
Step 6: Set Up Post-Market Monitoring
Article 72. Compliance is not a one-time event. You must continuously monitor your AI agent in production.
- Establish a post-market monitoring system proportionate to risk
- Monitor for drift in accuracy, fairness, and robustness metrics
- Define thresholds that trigger automatic alerts
- Report serious incidents to national authorities within 15 days
- Update risk assessment when the system or its environment changes
- Maintain incident response procedures for AI-specific failures
What Happens If You Don't Comply
The EU AI Act penalties are among the highest in tech regulation:
| Violation | Maximum Fine |
|---|---|
| Deploying a banned AI system | €35M or 7% of global annual turnover |
| Non-compliance with high-risk requirements | €15M or 3% of global annual turnover |
| Providing incorrect information to authorities | €7.5M or 1% of global annual turnover |
For SMEs and startups, the fines are capped at the lower of the fixed amount or the percentage. But even reduced fines are existential for most companies.
How AgentGate Automates This Checklist
AgentGate was built specifically to automate EU AI Act compliance for AI agents. Here's how our platform maps to this checklist:
| Checklist Step | AgentGate Feature |
|---|---|
| Risk classification | Automatic classification based on domain and agent context |
| Transparency | G8 (ESG & Ethics) gate checks transparency requirements |
| Human oversight | Pass/fail verdicts with override logging and escalation hooks |
| Technical requirements | 8 quality gates covering accuracy, security, bias, and robustness |
| Record-keeping | SHA-256 evidence chains — tamper-evident, auditor-ready |
| Conformity assessment | Downloadable audit packages with complete gate results |
| Post-market monitoring | Continuous validation on every agent output with alerting |
One API call handles steps 2-6 automatically. Your compliance team gets the evidence packages; your engineering team gets the developer experience they expect.
Start free — 100 validations/month, no credit card
Recommended Timeline
If you're starting from zero, here's a realistic timeline:
| Month | Focus |
|---|---|
| Month 1 | Risk classification + inventory of all AI agents |
| Month 2 | Implement transparency docs + human oversight procedures |
| Month 3 | Technical requirements: logging, testing, security hardening |
| Month 4 | Conformity assessment + post-market monitoring setup |
You have 4 months. Start this week.