Blog · 11 min read · By Sarah Chen

EU AI Act Compliance: What Enterprise Leaders Must Do Before August 2026

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024 and is the world's first comprehensive legal framework for artificial intelligence. It applies to any organization — inside or outside the EU — that places AI systems on the EU market or whose AI system outputs are used in the EU.

Most provisions become fully applicable on 2 August 2026. Fines for non-compliance reach €35 million or 7% of global annual turnover for the most serious violations involving prohibited AI practices. For high-risk AI systems, penalties reach €15 million or 3% of global turnover.

This guide covers the risk tier framework, what your organization must do, and a concrete 90-day action plan.


Who the EU AI Act Applies To

The regulation applies broadly:

  • AI providers: Organizations that develop and place AI systems on the EU market (with limited exemptions for some open-source releases)
  • AI deployers: Organizations that use AI systems in a professional capacity within the EU
  • Importers and distributors: Organizations in the AI supply chain serving EU users
  • Product manufacturers: Companies embedding AI in regulated products (medical devices, vehicles, industrial machinery)

Key principle: The Act follows the product, not the provider's location. A US-based company using an AI-powered hiring tool to screen candidates and employees at its EU offices is subject to the Act.


The 4 Risk Tiers

Tier 1 — Unacceptable Risk (Prohibited)

Effective 2 February 2025 — already in force.

Prohibited practices include:

  • Social scoring by public authorities or private entities for general purposes
  • Real-time remote biometric identification in public spaces for law enforcement (with limited exceptions)
  • Subliminal manipulation — AI techniques that exploit psychological vulnerabilities
  • Exploitation of vulnerable groups — AI that exploits vulnerabilities due to age, disability, or social/economic situation in harmful ways
  • Emotion recognition in workplace and educational institutions
  • Biometric categorization to infer race, political opinions, religious beliefs, sexual orientation

Action: Audit your AI portfolio immediately for any system that could fall into these categories. Prohibited systems must be decommissioned or removed from EU deployment.
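A first-pass portfolio audit can be partly automated if your AI inventory carries use-case tags. The sketch below is illustrative: the category labels and inventory schema are assumptions, not an official taxonomy, and a match here only flags a system for legal review.

```python
# Hypothetical sketch: flag inventory entries whose declared use cases match
# an Article 5 prohibited category. Labels are illustrative assumptions.
PROHIBITED_CATEGORIES = {
    "social_scoring",
    "realtime_remote_biometric_id",
    "subliminal_manipulation",
    "vulnerable_group_exploitation",
    "workplace_emotion_recognition",
    "biometric_categorisation_sensitive_traits",
}

def flag_prohibited(inventory: list[dict]) -> list[str]:
    """Return names of systems tagged with any prohibited use case."""
    return [
        system["name"]
        for system in inventory
        if PROHIBITED_CATEGORIES & set(system.get("use_cases", []))
    ]

inventory = [
    {"name": "spam-filter", "use_cases": ["content_filtering"]},
    {"name": "hr-sentiment", "use_cases": ["workplace_emotion_recognition"]},
]
print(flag_prohibited(inventory))  # ['hr-sentiment']
```

A hit means "escalate to counsel and plan decommissioning", not "delete automatically".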

Tier 2 — High-Risk AI Systems

Effective August 2026. Subject to the most rigorous requirements.

High-risk systems are defined in Annex III of the Act. Key categories relevant to enterprise:

| Category | Examples |
|---|---|
| Employment and HR | CV screening, performance monitoring, promotion/termination decisions |
| Access to essential services | Credit scoring, insurance risk assessment, benefit eligibility |
| Law enforcement | Predictive policing tools, evidence assessment |
| Critical infrastructure | AI managing power grids, water systems, financial infrastructure |
| Education | Student assessment, admission decisions |
| Migration | Document authenticity assessment, risk profiling |

Most enterprise AI agents that make consequential decisions fall into the high-risk category. This includes autonomous agents handling loan approvals, hiring workflows, insurance claims, and patient care routing.
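For triage purposes, the four tiers can be encoded as a simple decision rule. The domain lists below are a simplified assumption based on the table above — real classification requires legal review against the Annex III text.

```python
# Illustrative tier classifier for triage only; domain lists are assumptions
# mirroring the Annex III categories summarized in the table above.
HIGH_RISK_DOMAINS = {
    "employment", "credit_scoring", "insurance_risk", "benefit_eligibility",
    "law_enforcement", "critical_infrastructure", "education", "migration",
}
LIMITED_RISK_DOMAINS = {"chatbot", "content_generation", "deepfake"}

def classify_tier(domain: str, prohibited: bool = False) -> str:
    """Map a system's primary domain to an EU AI Act risk tier."""
    if prohibited:
        return "unacceptable"
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if domain in LIMITED_RISK_DOMAINS:
        return "limited"
    return "minimal"

print(classify_tier("employment"))   # high
print(classify_tier("spam_filter"))  # minimal
```

Anything classified "high" here should feed directly into the gap assessment in the 90-day plan below.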

Tier 3 — Limited Risk (Transparency Obligations)

Chatbots, emotion-recognition systems (where permitted), AI-generated content, and deepfakes carry transparency obligations. Providers and deployers must:

  • Disclose to users that they are interacting with AI
  • Label AI-generated content as synthetic
  • Implement transparency measures for general-purpose AI models

Tier 4 — Minimal Risk

No mandatory requirements beyond voluntary codes of practice. This covers most AI: spam filters, recommendation engines, AI-assisted document drafting.


High-Risk AI: What You Must Implement

If you deploy high-risk AI systems, the following requirements apply from August 2026:

1. Risk Management System (Article 9)

A documented, continuous risk management process covering:

  • Identification and analysis of known and reasonably foreseeable risks
  • Estimation and evaluation of risks arising from intended use and reasonably foreseeable misuse
  • Adoption of suitable risk mitigation measures

Practical implementation: This is where ISO 42001 provides ready-made conformity evidence. An ISO 42001 AI management system (AIMS) covers much of Article 9, significantly reducing the documentation burden.
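In practice, Article 9 translates into a living risk register. A minimal sketch of one entry is below; the field names and the severity × likelihood scoring are assumptions chosen to mirror the bullet points above, not a prescribed format.

```python
# Minimal sketch of an Article 9-style risk register entry.
# Field names and scoring scheme are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str            # known or reasonably foreseeable risk
    source: str                 # "intended_use" or "foreseeable_misuse"
    severity: int               # 1 (low) .. 5 (critical)
    likelihood: int             # 1 (rare) .. 5 (frequent)
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

    def needs_mitigation(self, threshold: int = 9) -> bool:
        """High-scoring risks without documented mitigations need action."""
        return self.score >= threshold and not self.mitigations
```

The continuous part of Article 9 is the `last_reviewed` field: entries should be revisited on a fixed cadence, not written once at launch.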

2. Data Governance (Article 10)

Training, validation, and testing datasets must:

  • Be subject to appropriate data governance practices
  • Be relevant, sufficiently representative, and, to the best extent possible, free of errors
  • Have documented data provenance and characteristics
  • Be tested for biases that could lead to discriminatory outcomes

Action: Document your RAG knowledge bases and training datasets. Implement data lineage tracking. Conduct demographic bias testing before deployment.
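The Act does not prescribe a specific bias metric. One common heuristic for pre-deployment testing is to compare selection rates across demographic groups (the "four-fifths" rule of thumb from US employment practice). The check below is illustrative only, not a compliance standard.

```python
# One common bias-testing heuristic: disparate impact ratio across groups.
# The 0.8 threshold is a rule of thumb, not an EU AI Act requirement.
def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """outcomes maps group -> list of 0/1 decisions (1 = selected)."""
    return {group: sum(v) / len(v) for group, v in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, list[int]]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

outcomes = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
print(round(disparate_impact_ratio(outcomes), 2))  # 0.33 -- well below 0.8
```

A low ratio is a signal to investigate, not proof of unlawful discrimination; document the investigation either way.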

3. Technical Documentation (Article 11)

Before placing a high-risk AI system on the market, prepare:

  • General description of the system and its intended purpose
  • Design specifications, architecture, and training methodology
  • System validation and testing procedures
  • Risk assessment results and mitigation measures
  • Monitoring and logging mechanisms

This documentation must be maintained and updated throughout the system's lifecycle.

4. Record-Keeping and Logging (Article 12)

High-risk AI systems must enable logging of:

  • All actions and decisions taken by the system
  • Inputs and relevant context at time of decision
  • Reference database used (for systems relying on reference data)
  • Sufficient information to enable post-deployment monitoring

Implementation: Immutable WORM (Write Once Read Many) logs as described in AI Agent Security Best Practices support this requirement.
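The core idea behind tamper-evident decision logging can be shown in a few lines: each record embeds the hash of the previous record, so any retroactive edit breaks the chain. This is a sketch of the chaining technique only; a production system would pair it with actual WORM storage and external anchoring.

```python
# Sketch of a hash-chained decision log: edits to past records break
# verification. Illustrative only -- not a complete WORM implementation.
import hashlib
import json

class DecisionLog:
    def __init__(self) -> None:
        self.records: list[dict] = []

    def append(self, decision: dict) -> None:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        payload = json.dumps(decision, sort_keys=True)
        record_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.records.append(
            {"decision": decision, "prev": prev_hash, "hash": record_hash}
        )

    def verify(self) -> bool:
        """Recompute the chain; any tampered record fails the check."""
        prev = "0" * 64
        for rec in self.records:
            payload = json.dumps(rec["decision"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True
```

Each `decision` dict should capture the inputs and context Article 12 asks for, not just the outcome.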

5. Transparency and User Information (Article 13)

Deployers must provide users with:

  • Clear identification that an AI system is involved in decisions affecting them
  • The system's capabilities and limitations
  • The degree of accuracy and reliability
  • Any foreseeable risks to health, safety, or fundamental rights

For autonomous agents: Users who receive consequential decisions (loan denied, application rejected, insurance claim outcome) must be informed that AI was involved and have a right to request human review.

6. Human Oversight (Article 14)

High-risk AI systems must be designed to allow human oversight. Specifically:

  • Humans must be able to understand the system's capabilities and limitations
  • Humans must be able to monitor the AI's operation and detect anomalies
  • Humans must be able to intervene and override the system's outputs
  • Humans must be able to stop the system via a kill switch

This is why Human-in-the-Loop architecture is not just best practice — it is a legal requirement for high-risk AI in the EU.
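A human-in-the-loop gate can be reduced to two primitives: an escalation threshold that routes high-impact actions to a human queue, and a kill switch that halts everything. The sketch below is an assumption-laden toy (threshold, names, and states are all illustrative), but it captures the Article 14 capabilities listed above.

```python
# Illustrative human-in-the-loop gate: high-impact actions are escalated
# for human approval; a kill switch blocks all actions. Names and the
# threshold value are assumptions for this sketch.
class OversightGate:
    def __init__(self, approval_threshold: float = 0.7) -> None:
        self.approval_threshold = approval_threshold
        self.killed = False
        self.pending: list[dict] = []   # queue awaiting human review

    def kill(self) -> None:
        """Article 14-style stop capability: halt all system output."""
        self.killed = True

    def submit(self, action: dict, impact_score: float) -> str:
        if self.killed:
            return "blocked"
        if impact_score >= self.approval_threshold:
            self.pending.append(action)
            return "escalated"          # a human must approve or override
        return "auto_approved"

gate = OversightGate()
print(gate.submit({"type": "loan_denial"}, 0.9))   # escalated
print(gate.submit({"type": "doc_summary"}, 0.1))   # auto_approved
```

The hard part in practice is calibrating `impact_score` so that consequential decisions reliably cross the threshold.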

7. Accuracy, Robustness, Cybersecurity (Article 15)

High-risk systems must be designed to achieve an appropriate level of:

  • Accuracy: With documented accuracy metrics for the intended purpose
  • Robustness: Performance must not degrade under adversarial conditions or data distribution shifts
  • Cybersecurity: Protection against attacks that could change the system's behavior or exploit data

8. Conformity Assessment (Article 43)

Before deployment, high-risk AI systems require a conformity assessment:

  • For most systems: Self-assessment against the requirements above, with technical documentation
  • For biometric identification and critical infrastructure systems: Third-party assessment by a notified body

Post-assessment: A new conformity assessment is required whenever the system is substantially modified.

9. Registration (Article 71)

High-risk AI systems must be registered in the EU AI database before being placed on the market or put into service.


General-Purpose AI (GPAI) Models

The Act introduces specific requirements for general-purpose AI (GPAI) models — large foundation models that can be used across a wide range of tasks. Relevant to enterprises that:

  • Fine-tune or deploy open-source models (Llama, Mistral, etc.)
  • Build AI systems on top of GPAI models from third-party providers

GPAI requirements include:

  • Technical documentation describing training methodology
  • Information and documentation for downstream providers
  • Copyright compliance policy
  • Summary of training data

Systemic risk models (those with cumulative training compute exceeding 10²⁵ FLOPs, currently GPT-4 class and above) face additional requirements including adversarial testing and cybersecurity incident reporting.
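For a rough self-check against the 10²⁵ FLOP threshold, a widely used rule of thumb estimates training compute as roughly 6 FLOPs per parameter per training token. The figures below are illustrative assumptions, not a measurement method the Act prescribes.

```python
# Rough systemic-risk self-check. The 6 * params * tokens estimate is a
# common rule of thumb, not an official EU AI Act methodology.
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold

def approx_training_flops(params: float, tokens: float) -> float:
    """Estimate cumulative training compute: ~6 FLOPs/parameter/token."""
    return 6 * params * tokens

def is_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOPS

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = approx_training_flops(70e9, 15e12)
print(f"{flops:.1e}", is_systemic_risk(flops))  # 6.3e+24 False
```

If your fine-tuning sits on top of a third-party base model, the base provider's compute governs its classification; your own obligations come from the downstream-provider rules above.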


Penalties at a Glance

| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices (Tier 1) | €35M or 7% of global annual turnover |
| High-risk AI non-compliance | €15M or 3% of global annual turnover |
| Providing incorrect/incomplete information to authorities | €7.5M or 1% of global annual turnover |

For SMEs and startups, fines are capped at the lower of the two amounts (the fixed sum or the percentage of turnover), rather than the higher.
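The "higher of the two, lower for SMEs" structure is easy to get wrong in internal risk models, so here is a minimal sketch. Tier figures follow the table above; the function name and interface are assumptions.

```python
# Sketch of the fine structure: max(fixed, percentage) for most firms,
# min(fixed, percentage) for SMEs. Figures in euros, per the table above.
TIERS = {
    "prohibited": (35_000_000, 0.07),
    "high_risk": (15_000_000, 0.03),
}

def max_fine(violation: str, turnover: float, sme: bool = False) -> float:
    fixed, pct = TIERS[violation]
    percentage = pct * turnover
    return min(fixed, percentage) if sme else max(fixed, percentage)

# Hypothetical firm with €2B global annual turnover:
print(f"{max_fine('prohibited', 2_000_000_000):,.0f}")            # 140,000,000
print(f"{max_fine('prohibited', 2_000_000_000, sme=True):,.0f}")  # 35,000,000
```

Note how the percentage dominates for large firms: at €2B turnover, 7% is four times the €35M fixed cap.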


90-Day Enterprise Compliance Action Plan

Days 1–30: Inventory and Classification

Objective: Know what AI you have and what category it falls into.

  • [ ] Conduct enterprise-wide AI system inventory (including shadow AI, departmental tools, vendor AI in SaaS products)
  • [ ] Classify each system against the EU AI Act risk tiers (Prohibited / High-risk / Limited / Minimal)
  • [ ] Identify any systems in or near the Prohibited category — initiate decommissioning
  • [ ] Prioritize high-risk systems for compliance effort
  • [ ] Appoint an EU AI Act Compliance Lead (typically Chief AI Officer or General Counsel)

Days 31–60: High-Risk Gap Assessment

Objective: For each high-risk system, understand what's missing.

  • [ ] Map each high-risk system against the 9 requirements (Articles 9–15, 43, 71)
  • [ ] Assess data governance gaps — is training/inference data documented and bias-tested?
  • [ ] Review logging capabilities — does every consequential decision generate an immutable audit record?
  • [ ] Review human oversight mechanisms — is there a functional kill switch and override capability?
  • [ ] Identify transparency gaps — are users informed when AI is making decisions affecting them?
  • [ ] Engage legal counsel to confirm vendor contracts address GPAI liability and documentation obligations

Days 61–90: Remediation and Preparation

Objective: Close the critical gaps. Prepare conformity assessment documentation.

  • [ ] Implement immutable logging for all high-risk AI systems not already covered
  • [ ] Draft user-facing transparency disclosures for high-risk AI decisions
  • [ ] Establish or update human oversight workflows (escalation thresholds, approval chains)
  • [ ] Begin technical documentation preparation (required before registration)
  • [ ] Engage accredited Notified Body if third-party conformity assessment is required
  • [ ] Register compliant systems in the EU AI database once operational

How ISO 42001 Accelerates EU AI Act Compliance

Organizations with existing ISO 42001 certification have a significant head start on EU AI Act compliance:

| EU AI Act Requirement | ISO 42001 Coverage |
|---|---|
| Risk management system (Art. 9) | Clause 6.1 + Annex A controls |
| Data governance (Art. 10) | Annex A data controls |
| Technical documentation (Art. 11) | Clause 7.5 documented information |
| Record-keeping (Art. 12) | Clause 9.1 monitoring + records |
| Human oversight (Art. 14) | Annex A system lifecycle controls |
| Conformity assessment evidence (Art. 43) | ISO 42001 certificate + audit records |

ISO 42001 conformance does not automatically satisfy the EU AI Act — the legal requirements go further in specific areas (especially registration and notified body assessment for highest-risk systems). But it provides a documented, auditable management system that substantially reduces the compliance gap.


Conclusion

The EU AI Act is not a future risk — enforcement begins in August 2026, and organizations that have not completed their AI inventory and risk classification by mid-2026 will face a compressed timeline for high-risk system remediation.

The organizations best positioned for compliance are those that treat the Act as an architecture requirement rather than a legal checkbox. Human-in-the-loop design, immutable audit logging, bias testing, and data governance are not burdens — they are the infrastructure of trustworthy AI that performs reliably over time.

Start your AI inventory this week. Classify before you build. Document as you go.



Ready to deploy autonomous AI agents?

Our engineers are available to discuss your specific requirements.

Book a Consultation