Understanding the EU AI Act

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is a landmark regulation that establishes the world's first comprehensive legal framework for artificial intelligence. Adopted in 2024 and in force since August 1, 2024, it aims to ensure AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

Key aspects:

Risk-Based Approach

The AI Act categorizes AI systems into four risk levels:

  • Unacceptable Risk: Prohibited AI practices (e.g., social scoring, manipulative AI)
  • High Risk: AI systems in critical areas (e.g., healthcare, law enforcement, employment) requiring strict compliance
  • Limited Risk: AI systems with transparency obligations (e.g., chatbots, deepfakes)
  • Minimal Risk: AI systems with no specific obligations (e.g., AI-enabled video games)
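For teams maintaining an internal AI-system inventory, the four tiers above are often encoded as an ordered enumeration so that systems can be triaged programmatically. The sketch below is purely illustrative: the `RiskLevel` enum, the `USE_CASE_RISK` table, and the `classify` helper are my own shorthand, not terms or logic defined by the Act, and real classification requires legal analysis of the Act's annexes rather than a lookup table.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """The four risk tiers of the EU AI Act, ordered by severity."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

# Hypothetical mapping from internal use-case labels to risk tiers,
# loosely following the examples given for each tier above.
USE_CASE_RISK = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "recruitment_screening": RiskLevel.HIGH,
    "customer_chatbot": RiskLevel.LIMITED,
    "video_game_npc": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Return the risk tier for a known use case. Unknown systems
    default to HIGH so they get reviewed rather than waved through."""
    return USE_CASE_RISK.get(use_case, RiskLevel.HIGH)

print(classify("customer_chatbot").name)  # LIMITED
print(classify("unlabeled_system").name)  # HIGH
```

Defaulting unknown systems to the stricter tier is a common conservative design choice in compliance tooling: mis-classifying downward is the costly error.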

Phased Implementation Timeline

  • February 2, 2025: Prohibitions on unacceptable-risk AI practices take effect
  • August 2, 2025: Obligations for general-purpose AI models take effect
  • August 2, 2026: Most remaining provisions apply, including requirements for high-risk AI systems
  • August 2, 2027: Compliance required for high-risk AI systems embedded in products covered by existing EU product legislation
  • August 2, 2030: Deadline for certain high-risk AI systems already on the market and used by public authorities

Who is affected?

  • Providers: Organizations developing or placing AI systems on the EU market
  • Deployers: Organizations using AI systems in the EU
  • Distributors & Importers: Organizations making AI systems available in the EU
  • Product Manufacturers: Organizations integrating AI into their products

Prohibited AI Practices

Under the EU AI Act, certain AI practices are entirely banned due to their risk of harm to fundamental rights and safety. These practices are classified as 'unacceptable risk' and may not be placed on the market or used in the EU, subject only to the narrow exceptions the Act itself spells out.

  • Social Scoring: AI systems that score individuals based on social behavior or personal characteristics, leading to detrimental or disproportionate treatment (e.g., in credit or employment decisions unrelated to the scored behavior)
  • Real-time Biometric Identification: Live facial recognition in publicly accessible spaces for law enforcement, except in narrowly defined cases (e.g., targeted searches for missing persons or victims of trafficking, prevention of an imminent terrorist threat)
  • Manipulative AI: AI designed to manipulate behavior in ways that harm individuals (e.g., exploiting vulnerabilities of specific groups like children)
  • Biometric Categorization: Systems that use biometric data to deduce or infer protected characteristics such as race, political opinions, religious beliefs, sex life, or sexual orientation

High-Risk AI Systems

High-risk AI systems are those that could significantly harm people's rights, freedoms, safety, or wellbeing. These systems require comprehensive compliance measures including risk assessments, transparency, human oversight, and extensive documentation.

  • Employment & Worker Management: AI systems used for recruitment, selection, promotion, termination, or performance monitoring
  • Education & Training: AI systems determining admission or course selection in educational institutions
  • Critical Infrastructure: AI systems that could compromise safety-critical operations (energy, water, transportation, utilities)
  • Law Enforcement & Justice: AI systems used for detecting, investigating, or prosecuting crimes; assessing risk; or determining bail/sentencing
  • Border Control & Immigration: Automated systems for visa decisions, border crossing, and asylum determinations

Compliance Requirements for High-Risk Systems

Providers and deployers of high-risk AI systems must, depending on their role:

  • Conduct risk assessments
  • Maintain detailed technical documentation
  • Implement human oversight mechanisms
  • Establish quality assurance processes
  • Provide transparency notices to affected parties
  • Implement mechanisms for reporting and addressing harms
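The obligations listed above lend themselves to per-system tracking in an internal compliance register. Here is a minimal sketch of such a record; the `HighRiskCompliance` class and its field names are my own illustrative shorthand, not terminology from the Act.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """Tracks, for one high-risk AI system, which of the compliance
    measures named above have been completed. Field names are
    illustrative shorthand, not legal terms from the Act."""
    risk_assessment_done: bool = False
    documentation_maintained: bool = False
    human_oversight_in_place: bool = False
    quality_assurance_established: bool = False
    transparency_notice_provided: bool = False
    harm_reporting_mechanism: bool = False

    def outstanding(self) -> list[str]:
        """Return the names of measures not yet completed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a system part-way through its compliance work.
record = HighRiskCompliance(risk_assessment_done=True,
                            human_oversight_in_place=True)
print(record.outstanding())
```

A structure like this makes gap analysis trivial: an empty `outstanding()` list means every tracked measure is in place, and anything else is an explicit to-do list per system.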