EU AI Act: Regulatory Framework
Overview
The EU AI Act is the world's first comprehensive legal framework on AI, designed to address the risks of AI while positioning Europe to play a leading role globally. Regulation (EU) 2024/1689 lays down harmonized rules on artificial intelligence with the aim of fostering trustworthy AI in Europe.
Risk-Based Approach
The AI Act defines four levels of risk for AI systems:
1. Unacceptable Risk
AI systems considered a clear threat to safety, livelihoods, and rights of people are banned. The AI Act prohibits eight specific practices:
- Harmful AI-based manipulation and deception
- Harmful AI-based exploitation of vulnerabilities
- Social scoring
- Individual criminal offense risk assessment or prediction
- Untargeted scraping of internet or CCTV material to create facial recognition databases
- Emotion recognition in the workplace and educational institutions
- Biometric categorization to deduce certain protected characteristics
- Real-time remote biometric identification for law enforcement in publicly accessible spaces
2. High Risk
AI systems that can pose serious risks to health, safety, or fundamental rights are classified as high-risk. These include:
- AI safety components in critical infrastructures
- AI solutions used in educational institutions
- AI-based safety components of products
- AI tools for employment and worker management
- AI systems for access to essential private and public services
- Remote biometric identification, emotion recognition, and biometric categorization
- AI use cases in law enforcement
- AI use cases in migration, asylum, and border control
- AI solutions used in administration of justice and democratic processes
High-risk AI systems are subject to strict obligations:
- Adequate risk assessment and mitigation systems
- High-quality datasets to minimize discriminatory outcomes
- Logging of activity to ensure traceability
- Detailed documentation
- Clear information to the deployer
- Appropriate human oversight
- High level of robustness, cybersecurity, and accuracy
3. Transparency Risk
AI systems subject to specific transparency obligations include:
- Chatbots (humans should know they’re interacting with a machine)
- Generative AI (content must be identifiable)
- Deep fakes (must be clearly labeled)
- AI-generated text on matters of public interest (must be labeled)
4. Minimal or No Risk
The vast majority of AI systems currently used in the EU fall into this category, including applications such as AI-enabled video games or spam filters. The AI Act does not introduce specific rules for these systems.
General-Purpose AI Models
The AI Act puts in place rules for providers of general-purpose AI models:
- Transparency and copyright-related rules
- Risk assessment and mitigation requirements for models with potential systemic risks
Governance and Implementation
The European AI Office and the authorities of the Member States are responsible for implementing, supervising, and enforcing the AI Act. The governance structure includes:
- AI Board
- Scientific Panel
- Advisory Forum
Implementation Timeline
The AI Act entered into force on August 1, 2024, with a phased implementation:
- Prohibitions and AI literacy obligations: February 2, 2025
- Governance rules and obligations for general-purpose AI models: August 2, 2025
- Full application: August 2, 2026
- Extended transition period for high-risk AI systems embedded into regulated products: August 2, 2027
AI Pact
To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that:
- Supports future implementation
- Engages with stakeholders
- Invites AI providers and deployers to comply with key obligations ahead of time
Source
- European Commission: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai