What is the EU AI Act?
Quick Answer
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies to any organization, inside or outside the EU, that places AI systems on the EU market or whose AI outputs are used within the EU. The Act classifies AI systems into four risk tiers with graduated requirements, with full enforcement beginning 2 August 2026. Non-compliance fines reach €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.
Who Does the EU AI Act Apply To?
The regulation applies broadly to:
- AI providers: Organizations that develop and place AI systems on the EU market
- AI deployers: Organizations that use AI systems professionally within the EU
- Importers and distributors: Organizations in the AI supply chain serving EU users
- Product manufacturers: Companies embedding AI in regulated products
Key principle: The Act follows the product, not the provider's location. A US-based company deploying an AI-powered hiring tool used by its EU employees is subject to the Act.
The 4 Risk Tiers
Tier 1 — Unacceptable Risk (Prohibited)
In force since 2 February 2025. Prohibited practices include:
- Social scoring by public or private entities for general purposes
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions)
- Subliminal or manipulative techniques, and exploitation of vulnerabilities tied to age, disability, or social or economic situation
- Emotion recognition in workplaces and educational institutions
- Biometric categorization to infer race, religion, or sexual orientation
Tier 2 — High-Risk AI
Effective 2 August 2026. Covers AI systems in employment (CV screening, performance monitoring), financial services (credit scoring, insurance), healthcare, education, and critical infrastructure. Most enterprise AI agents that make consequential decisions fall into this category.
Tier 3 — Limited Risk
Transparency obligations apply: chatbots must disclose that the user is interacting with AI, and AI-generated (synthetic) content must be labeled as such.
Tier 4 — Minimal Risk
No mandatory requirements. Covers most AI: spam filters, recommendation engines, document drafting assistants.
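The four tiers above can be sketched as a simple classification helper. This is illustrative only: the tier names follow the Act, but the example keyword lists are assumptions distilled from the descriptions above, not a substitute for classification against the Act's Annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"   # Tier 1: banned outright
    HIGH = "high"                 # Tier 2: Articles 9-15 apply
    LIMITED = "limited"           # Tier 3: transparency obligations
    MINIMAL = "minimal"           # Tier 4: no mandatory requirements

# Hypothetical use-case labels for illustration; a real assessment
# must follow Annex I/III of the Regulation, not keyword matching.
PROHIBITED_USES = {"social scoring", "workplace emotion recognition"}
HIGH_RISK_USES = {"cv screening", "credit scoring", "exam grading"}
LIMITED_RISK_USES = {"chatbot", "synthetic content generation"}

def classify(use_case: str) -> RiskTier:
    """Map a use-case label to the risk tier it most likely falls in."""
    u = use_case.strip().lower()
    if u in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if u in HIGH_RISK_USES:
        return RiskTier.HIGH
    if u in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

In practice, classification is a legal judgment: the helper is only useful as a first-pass triage over an AI inventory.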
What High-Risk AI Systems Must Implement by August 2026
Organizations deploying high-risk AI systems must implement:
| Requirement | EU AI Act Article |
|---|---|
| Documented risk management system | Article 9 |
| Data governance for training/inference data | Article 10 |
| Technical documentation | Article 11 |
| Automatic event logging (record-keeping) | Article 12 |
| User transparency disclosures | Article 13 |
| Human oversight, including the ability to interrupt the system | Article 14 |
| Accuracy, robustness, and cybersecurity | Article 15 |
| Conformity assessment (self or third-party) | Article 43 |
| Registration in the EU AI database | Articles 49 and 71 |
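Article 12 requires high-risk systems to automatically record events over their lifetime. One common engineering approach to making such records tamper-evident is a hash chain, where each entry commits to the previous one. The schema below (field names, SHA-256 chaining) is an assumption for illustration; the Act does not mandate a specific logging format.

```python
import hashlib
import json
import time

GENESIS_HASH = "0" * 64  # placeholder hash for the first entry

def append_log_entry(log: list, event: dict) -> dict:
    """Append a tamper-evident record: each entry hashes the previous
    entry's hash, so any later modification breaks the chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS_HASH
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log: list) -> bool:
    """Recompute every hash; returns False if any record was altered."""
    prev = GENESIS_HASH
    for rec in log:
        body = {"ts": rec["ts"], "event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

A production system would also need durable storage and retention aligned with the Act's record-keeping periods; this sketch only shows the tamper-evidence idea.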
Key Dates
| Date | Event |
|---|---|
| 1 August 2024 | EU AI Act enters into force |
| 2 February 2025 | Prohibited practices banned (Tier 1) |
| 2 August 2025 | GPAI model requirements effective |
| 2 August 2026 | Full enforcement begins: all high-risk AI requirements apply |
Penalties
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | €35M or 7% of global turnover, whichever is higher |
| High-risk AI non-compliance | €15M or 3% of global turnover, whichever is higher |
| Providing incorrect information | €7.5M or 1% of global turnover, whichever is higher |
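Because each fine is the higher of a fixed cap and a share of global annual turnover, the percentage dominates for large companies. A quick worked example (the turnover figure is hypothetical):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """EU AI Act fines are the *higher* of a fixed cap and a
    percentage of global annual turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Prohibited-practice violation at a company with €2bn global turnover:
# 7% of €2bn = €140M, which exceeds the €35M floor.
max_fine(2_000_000_000, 35_000_000, 0.07)  # → 140000000.0
```

For a company with €100M turnover, 7% is only €7M, so the €35M fixed cap applies instead.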
What Should Enterprises Do Now?
- Audit your AI inventory — catalog all AI systems in use, including vendor AI embedded in SaaS products
- Classify each system against the four risk tiers
- Decommission any systems falling into the Prohibited category
- Gap-assess each high-risk system against Articles 9–15, 43, and 71
- Implement missing controls: logging, human oversight, transparency disclosures
- Prepare technical documentation before August 2026