The Art of CTO EU AI Act Compliance tool classifies an AI system's risk level under the EU AI Act and generates a compliance checklist tailored to that risk category.
Frequently Asked Questions
What are the EU AI Act risk categories?
The EU AI Act classifies AI systems into four risk tiers. Unacceptable-risk systems are banned outright; this tier includes social scoring and real-time remote biometric identification in public spaces. High-risk systems are subject to strict requirements, including conformity assessments, risk management systems, and human oversight; this tier covers AI used in hiring, credit scoring, healthcare, and critical infrastructure. Limited-risk systems carry transparency obligations, such as disclosing AI-generated content. Minimal-risk systems face no specific requirements. Most enterprise AI applications fall into the high-risk or limited-risk categories.
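The tiered logic above can be sketched as a simple classifier that maps a declared use case to a risk tier and an associated checklist. This is an illustrative sketch only: the keyword sets, function names, and checklist items are hypothetical simplifications, and a real determination requires legal review against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical keyword sets; real classification must follow the Act's annexes.
PROHIBITED_USES = {"social scoring", "real-time biometric identification"}
HIGH_RISK_USES = {"hiring", "credit scoring", "healthcare", "critical infrastructure"}
TRANSPARENCY_USES = {"chatbot", "content generation", "deepfake"}

# Simplified checklist fragments per tier (illustrative, not exhaustive).
CHECKLISTS = {
    RiskTier.UNACCEPTABLE: ["Discontinue or redesign: the use case is prohibited"],
    RiskTier.HIGH: [
        "Conformity assessment",
        "Risk management system",
        "Technical documentation",
        "Human oversight mechanisms",
    ],
    RiskTier.LIMITED: ["Disclose AI-generated content to users"],
    RiskTier.MINIMAL: ["No specific obligations; voluntary codes of conduct"],
}

def classify(use_case: str) -> RiskTier:
    """Map a declared use case to a risk tier (illustrative only)."""
    uc = use_case.lower()
    if any(u in uc for u in PROHIBITED_USES):
        return RiskTier.UNACCEPTABLE
    if any(u in uc for u in HIGH_RISK_USES):
        return RiskTier.HIGH
    if any(u in uc for u in TRANSPARENCY_USES):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

tier = classify("AI-assisted hiring screening")
print(tier.value)           # high
print(CHECKLISTS[tier][0])  # Conformity assessment
```

A production tool would replace the keyword matching with a structured questionnaire keyed to the Act's annexes, but the tier-to-checklist mapping pattern is the same.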
When does the EU AI Act take effect?
The EU AI Act entered into force in August 2024 with a phased implementation timeline. Prohibited AI practices became enforceable in February 2025, general-purpose AI model obligations apply from August 2025, and high-risk AI system requirements take full effect in August 2026, with an extended deadline of August 2027 for high-risk AI embedded in products covered by existing EU product-safety legislation. Companies should begin compliance assessments now, as implementing the required technical documentation, risk management systems, and human oversight mechanisms for high-risk systems typically takes 12 to 18 months.
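The phased timeline lends itself to a simple date check that reports which obligations already apply on a given day. A minimal sketch, assuming the milestone dates stated above (the function name and list structure are illustrative, not part of any official tooling):

```python
from datetime import date

# Key EU AI Act milestones from the phased implementation timeline.
MILESTONES = [
    (date(2025, 2, 2), "Prohibited AI practices enforceable"),
    (date(2025, 8, 2), "General-purpose AI model obligations apply"),
    (date(2026, 8, 2), "High-risk AI system requirements take effect"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the milestone obligations already applicable on a given date."""
    return [label for d, label in MILESTONES if today >= d]

for label in obligations_in_force(date(2025, 9, 1)):
    print(label)
```

On 1 September 2025, for example, the prohibited-practice and general-purpose-AI milestones have passed while the high-risk deadline has not, so the check returns the first two entries.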