AI Governance · 10 min read · By Priya Nair

Quick Answer

A comprehensive overview of AI regulation globally — the EU AI Act, US executive orders, China's AI rules, and what enterprises need to do to prepare for compliance.

AI Regulation: Global Landscape and What's Coming

AI regulation has moved from academic debate to enacted law. The EU AI Act, the world's most comprehensive AI regulatory framework, entered into force in 2024, with obligations phasing in from 2025. The US has issued multiple executive orders and sector-specific guidance. China has enacted specific rules for generative AI and recommendation systems. More is coming.

Enterprises that wait for the regulatory environment to "settle" before addressing compliance are taking on significant risk. The time to build compliance capabilities is now.


The EU AI Act: The Global Benchmark

The EU AI Act is the most comprehensive AI regulation enacted to date and is setting the global standard that other jurisdictions are watching closely.

Risk-Based Framework

The Act classifies AI systems into four risk tiers:

Unacceptable risk (Prohibited):

  • Social scoring by governments
  • Real-time biometric identification in public spaces (with narrow exceptions)
  • AI that exploits psychological vulnerabilities
  • Emotion recognition in workplaces and schools
  • AI systems that manipulate behavior subliminally

High risk (Strictly regulated):

  • Employment and worker management (hiring tools, performance monitoring)
  • Credit scoring and financial risk assessment
  • Healthcare AI (diagnosis, treatment recommendation)
  • Critical infrastructure management
  • Law enforcement and border control
  • Education and vocational training assessment

Limited risk (Transparency requirements):

  • AI chatbots (must disclose they're AI)
  • AI-generated content (must be labeled)
  • Emotion recognition systems

Minimal risk (No specific requirements):

  • AI-powered spam filters, recommendation systems, etc.
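The four tiers above lend themselves to a simple internal triage step when inventorying systems. The sketch below is illustrative only, not a legal classification: the use-case names and the mapping are hypothetical examples condensed from the lists above, and a real assessment must follow the Act's actual annexes with legal counsel.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strictly regulated"
    LIMITED = "transparency requirements"
    MINIMAL = "no specific requirements"

# Hypothetical mapping of internal use-case labels to EU AI Act risk
# tiers, condensed from the lists above. Not legal advice.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "hiring_tool": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known use case. Unknown systems default
    to HIGH so they get reviewed rather than waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("credit_scoring").value)  # strictly regulated
print(classify("spam_filter").value)     # no specific requirements
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative choice: in a compliance triage, a false positive costs a review, while a false negative costs a violation.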

High-Risk AI Requirements

For high-risk AI systems, the EU AI Act requires:

  • Risk management system in place throughout the AI lifecycle
  • High-quality training, validation, and testing datasets
  • Technical documentation and logging
  • Transparency and provision of information to users
  • Human oversight measures
  • Accuracy, robustness, and cybersecurity

A conformity assessment is also required before placing a high-risk system on the EU market: self-assessment for most categories, third-party assessment for specific high-risk areas.

Timeline

  • February 2025: Prohibited AI provisions enforced
  • August 2025: General-purpose AI model provisions enforced
  • August 2026: High-risk AI provisions fully enforced
  • 2027-2030: Extended timelines for specific categories

United States: Sector-Specific Approach

The US has taken a sector-specific approach rather than comprehensive AI legislation:

Executive Order on AI (October 2023): Required safety testing for powerful AI models, required federal agencies to assess AI risk, initiated sectoral AI guidelines across 17 federal agencies.

NIST AI Risk Management Framework (2023): Voluntary for the private sector, but it has become the de facto US standard — mandated for federal agency use and referenced by sector regulators.

Sector-specific regulation:

  • Financial services: OCC, FDIC, and Federal Reserve guidance on AI risk in banking; SEC guidance on algorithmic trading; CFPB enforcement of existing fair lending laws against AI-driven credit decisions
  • Healthcare: FDA guidance on AI-enabled medical devices; OCR guidance on AI and HIPAA compliance
  • Employment: EEOC guidance on AI and employment discrimination; NYC Local Law 144 requiring bias audits

State regulations: Over 15 states have enacted or introduced AI legislation, creating a patchwork that enterprises must navigate.


China: Specific Rules for Specific Technologies

China has taken a more targeted approach, regulating specific AI applications:

Generative AI Measures (2023): Registration requirements for public-facing generative AI services, content moderation obligations, data source transparency.

Recommendation Algorithm Regulation: Rules governing AI recommendation systems used by major platforms.

Deep Synthesis Regulation: Rules governing synthetic media (deepfakes), requiring labeling and limiting certain use cases.

China's approach is notable for its speed of implementation and its focus on specific applications rather than comprehensive framework legislation.


Other Key Jurisdictions

UK: Post-Brexit, pursuing a more principles-based, sector-specific approach. No comprehensive AI Act but sector regulators (FCA, ICO, CMA) applying existing rules to AI.

Canada: Proposed Artificial Intelligence and Data Act (AIDA) moving through Parliament. Similar risk-based approach to EU AI Act.

Brazil: Draft AI law following EU AI Act model under consideration.

India: Softer touch initially; monitoring global developments before enacting comprehensive legislation.


What Enterprises Need to Do

Immediate Actions (This Year)

1. Inventory all AI systems: Classify each by EU AI Act risk tier if you operate in the EU. Identify high-risk systems that require compliance attention.

2. Assess high-risk AI systems: For each high-risk system, assess against EU AI Act requirements. Gap analysis will reveal where investment is needed.

3. Implement transparency for chatbots: All AI chatbots interacting with EU consumers must disclose they're AI. Simple but often overlooked.

4. Engage legal and compliance: If you haven't already, engage your legal team and compliance function. AI regulatory compliance is not just a technology problem.
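Steps 1 and 2 above amount to building an inventory and running a gap analysis against the high-risk requirements listed earlier. A minimal sketch of what that inventory record might look like, assuming hypothetical field and requirement names (your legal team defines the real checklist):

```python
from dataclasses import dataclass, field

# Illustrative requirement labels, paraphrasing the high-risk
# obligations listed earlier in this article.
REQUIREMENTS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "transparency",
    "human_oversight",
    "robustness_and_security",
]

@dataclass
class AISystem:
    name: str
    risk_tier: str                               # "high", "limited", "minimal"
    controls: set = field(default_factory=set)   # requirements already evidenced

    def gap_analysis(self) -> list:
        """List requirements not yet evidenced; only high-risk
        systems carry the full obligation set."""
        if self.risk_tier != "high":
            return []
        return [r for r in REQUIREMENTS if r not in self.controls]

inventory = [
    AISystem("resume-screener", "high", {"technical_documentation"}),
    AISystem("support-chatbot", "limited"),
]
for system in inventory:
    print(system.name, "gaps:", system.gap_analysis())
```

Even a spreadsheet-level version of this record per system makes the gap analysis in step 2 concrete: the output is a prioritized list of where compliance investment is needed.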


Medium-Term (12-18 Months)

Documentation: Technical documentation for all high-risk AI systems. This requires understanding your AI systems at a level many organizations currently lack.

Data governance: The EU AI Act's data quality requirements for high-risk AI will require investment in training data governance.

Human oversight mechanisms: Design and implement meaningful human oversight for all high-risk AI systems. The Act sets expectations for what counts as meaningful; a rubber-stamp approval step is not compliant.

Conformity assessment: For high-risk systems, prepare for conformity assessment. For most categories, this is self-assessment — but requires rigorous documentation.


Compliance Risk Assessment

The penalties for EU AI Act violations are substantial — in each case, whichever amount is higher:

  • Prohibited AI violations: Up to 35M EUR or 7% of global annual turnover
  • High-risk AI violations: Up to 15M EUR or 3% of global annual turnover
  • Supplying incorrect information to authorities: Up to 7.5M EUR or 1% of global annual turnover

For large enterprises, the 3-7% of global turnover figures represent enormous potential exposure.
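The exposure arithmetic is worth making explicit: because the fine is capped at the fixed amount or the turnover percentage, whichever is higher, the percentage dominates for any large enterprise. A small sketch of that calculation (figures from the caps above; the turnover inputs are hypothetical):

```python
# Penalty caps from the EU AI Act: (fixed EUR cap, percent of global
# annual turnover), with the higher of the two applying.
CAPS = {
    "prohibited": (35_000_000.0, 7),
    "high_risk": (15_000_000.0, 3),
    "incorrect_information": (7_500_000.0, 1),
}

def max_penalty(violation: str, global_turnover_eur: float) -> float:
    """Maximum exposure for a violation category given global turnover."""
    fixed, percent = CAPS[violation]
    return max(fixed, global_turnover_eur * percent / 100)

# A 10B EUR turnover enterprise: the 7% cap dwarfs the fixed amount.
print(max_penalty("prohibited", 10_000_000_000))  # 700000000.0
# A 100M EUR turnover firm: the fixed cap is the binding one.
print(max_penalty("high_risk", 100_000_000))      # 15000000.0
```

The crossover point is simply the fixed cap divided by the percentage: above 500M EUR turnover, the 7% figure governs prohibited-AI exposure.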


Conclusion

AI regulation is no longer hypothetical — it is enacted and being enforced. Enterprises that build compliance capabilities now will be better positioned than those scrambling to retrofit systems after enforcement begins.

The investment required to comply with the EU AI Act and related regulations is not trivial. But it is far less than the cost of non-compliance penalties, reputational damage, or having to pull AI systems from the market.

