Understanding the EU AI Act
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is a landmark regulation that establishes the world's first comprehensive legal framework for artificial intelligence. Approved by the European Parliament in March 2024 and in force since 1 August 2024, it aims to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
Key aspects:
Risk-Based Approach
The AI Act categorizes AI systems into four risk levels:
- Unacceptable Risk: Prohibited AI practices (e.g., social scoring, manipulative AI)
- High Risk: AI systems in critical areas (e.g., healthcare, law enforcement, employment) requiring strict compliance
- Limited Risk: AI systems with transparency obligations (e.g., chatbots, deepfakes)
- Minimal Risk: AI systems with no specific obligations (e.g., AI-enabled video games)
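The four-tier structure above can be sketched as a simple lookup. This is an illustrative aid only, not a compliance tool: the tier names and example use cases come from the list above, while the `classify` helper and its matching logic are hypothetical.

```python
# Illustrative sketch of the AI Act's four risk tiers as a lookup table.
# Tier names and examples mirror the list above; classify() is hypothetical.

RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulative AI"},
    "high": {"healthcare", "law enforcement", "employment"},
    "limited": {"chatbot", "deepfake"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for an example use case; default to minimal."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"  # no specific obligations under the Act

print(classify("social scoring"))  # prohibited practice
print(classify("employment"))      # strict compliance required
print(classify("video game"))      # no specific obligations
```

Real classification under the Act depends on detailed legal criteria (Annexes I and III, intended purpose, context of use), not on keyword matching.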
Phased Implementation Timeline
- February 2, 2025: Prohibitions on unacceptable-risk AI practices take effect
- August 2, 2025: Obligations for general-purpose AI models take effect
- August 2, 2026: Most remaining provisions apply, including requirements for high-risk AI systems listed in Annex III
- August 2, 2027: Requirements apply to high-risk AI systems embedded in products covered by existing EU product-safety legislation (Annex I)
- August 2, 2030: Compliance deadline for certain high-risk AI systems already on the market and used by public authorities
Who is affected?
- Providers: Organizations developing AI systems or placing them on the EU market under their own name
- Deployers: Organizations using AI systems in a professional capacity in the EU
- Distributors & Importers: Organizations making AI systems available on the EU market
- Product Manufacturers: Organizations integrating AI systems into their products