EU AI Act Liability Risk Checker 2025

Identify your AI system's risk category under the EU AI Act (Regulation (EU) 2024/1689) and understand your compliance obligations. Select your use case to instantly see whether your system is prohibited, high-risk, limited-risk, or minimal-risk — plus applicable penalties and required documentation. Based on the EU AI Act text as of March 2025. All checks run privately in your browser.

Describe Your AI System


EU AI Act — Risk Categories and Timeline

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI regulation, entering into force on 1 August 2024. It establishes a risk-based framework with four categories: prohibited AI (banned outright), high-risk AI (subject to conformity assessments, documentation, and oversight), limited-risk AI (transparency obligations only), and minimal-risk AI (voluntary codes of conduct). The Act applies to providers placing AI systems on the EU market, deployers using AI in the EU, and providers outside the EU whose systems have outputs used in the EU — making it effectively global in scope, similar to GDPR. Based on the official EU AI Act text (EUR-Lex 32024R1689, March 2025).
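The four-tier framework is what a checker like this one evaluates. The sketch below shows the shape of that logic in TypeScript; the keyword lists are illustrative placeholders, not the Act's actual legal tests, which turn on Article 5, Annexes I and III, and Article 50.

```typescript
// Four risk tiers defined by the EU AI Act, from most to least regulated.
type RiskCategory = "prohibited" | "high-risk" | "limited-risk" | "minimal-risk";

// Illustrative, non-exhaustive keyword lists (a real checker maps structured
// questionnaire answers onto the Act's legal criteria, not free text).
const PROHIBITED = ["social scoring", "subliminal manipulation"];
const HIGH_RISK = ["recruitment", "credit scoring", "educational assessment", "law enforcement"];
const LIMITED_RISK = ["chatbot", "deepfake"];

function classify(useCase: string): RiskCategory {
  const s = useCase.toLowerCase();
  if (PROHIBITED.some(k => s.includes(k))) return "prohibited";   // banned outright
  if (HIGH_RISK.some(k => s.includes(k))) return "high-risk";     // conformity assessment etc.
  if (LIMITED_RISK.some(k => s.includes(k))) return "limited-risk"; // transparency duties
  return "minimal-risk"; // default tier: voluntary codes of conduct only
}
```

A real implementation would also check the exemptions in Article 6(3), under which some Annex III use cases drop out of the high-risk tier.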

Key enforcement dates: prohibited-AI rules and AI literacy obligations applied from 2 February 2025. General-purpose AI (GPAI) model obligations apply from 2 August 2025. Obligations for high-risk AI listed in Annex III (standalone systems) apply from 2 August 2026, when the bulk of the Act becomes applicable. Obligations for high-risk AI embedded in Annex I products (those covered by sector-specific EU legislation) apply from 2 August 2027, completing full application of the Act.
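The staggered timeline above can be encoded as a simple lookup, which is how a tool might answer "which obligations already apply today?". Dates are the "applies from" dates in Article 113; the obligation labels are paraphrased.

```typescript
// Enforcement milestones under Article 113 of the EU AI Act.
interface Milestone {
  date: string;        // ISO date the obligations apply from
  obligations: string; // paraphrased summary
}

const TIMELINE: Milestone[] = [
  { date: "2025-02-02", obligations: "Prohibited-AI rules and AI literacy" },
  { date: "2025-08-02", obligations: "GPAI model obligations" },
  { date: "2026-08-02", obligations: "High-risk obligations for Annex III systems" },
  { date: "2027-08-02", obligations: "High-risk obligations for Annex I products; full application" },
];

// Returns every obligation set already in force on the given ISO date.
// ISO-8601 strings sort lexicographically, so string comparison suffices.
function applicableBy(isoDate: string): string[] {
  return TIMELINE.filter(m => m.date <= isoDate).map(m => m.obligations);
}
```

For example, `applicableBy("2025-09-01")` returns the first two entries: the prohibitions and the GPAI rules, but not yet the high-risk regimes.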

High-Risk AI: What Compliance Requires

High-risk AI systems under the EU AI Act fall into two groups. Annex III covers standalone systems: AI used in recruitment and HR decisions, credit scoring and insurance underwriting, educational assessment, critical infrastructure management, law enforcement risk assessment, and biometric categorisation. Annex I covers AI embedded in products subject to sector-specific EU legislation, such as medical devices (class IIa, IIb, III under the MDR). Providers of high-risk systems must implement a risk management system, use high-quality training data with documented data governance, maintain technical documentation, enable human oversight, achieve and maintain appropriate levels of accuracy, and register the system in the EU AI Act database before market placement. Deployers must ensure operators are trained, conduct fundamental rights impact assessments (required for public bodies and for private entities in certain contexts), and monitor system performance in use.
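The provider duties listed above lend themselves to a pre-market checklist. The sketch below models one; the duty labels are paraphrases and the article numbers are our mapping to the Act, not quotations.

```typescript
// Paraphrased provider obligations for a high-risk AI system, with the
// EU AI Act articles we believe they correspond to.
const PROVIDER_DUTIES = [
  "Risk management system (Art. 9)",
  "Data governance for training data (Art. 10)",
  "Technical documentation (Art. 11)",
  "Human oversight measures (Art. 14)",
  "Accuracy, robustness, cybersecurity (Art. 15)",
  "Registration in the EU database before market placement (Art. 49)",
];

type Checklist = Record<string, boolean>;

// Fresh checklist with every duty unmet.
function newChecklist(): Checklist {
  return Object.fromEntries(
    PROVIDER_DUTIES.map(d => [d, false] as [string, boolean])
  );
}

// Market placement requires every duty to be satisfied, not most of them.
function readyForMarket(c: Checklist): boolean {
  return Object.values(c).every(Boolean);
}
```

The all-or-nothing check mirrors how conformity works under the Act: a system with five of six obligations met is still non-compliant.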

Penalties and Enforcement Mechanism

The EU AI Act penalties are among the highest in global regulation, designed to deter non-compliance. Violations involving prohibited AI practices carry fines up to €35 million or 7% of total worldwide annual turnover, whichever is higher. Non-compliance with high-risk AI obligations and GPAI rules carries fines up to €15 million or 3% of turnover. Supplying incorrect or misleading information to national competent authorities carries fines up to €7.5 million or 1% of turnover. Enforcement is carried out by national market surveillance authorities and, for GPAI models, by the EU AI Office under the European Commission. Member States must designate their national supervisory authorities by 2 August 2025. Last updated: March 2026.
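Each fine tier is "up to a fixed cap or a percentage of worldwide annual turnover, whichever is higher", so maximum exposure grows with company size. A small helper makes this concrete; the tier names and Article 99 paragraph references are our own annotations.

```typescript
// Fine tiers under Article 99 of the EU AI Act: the applicable maximum is
// the higher of a fixed cap (EUR) and a share of worldwide annual turnover.
interface FineTier {
  cap: number; // fixed cap in EUR
  pct: number; // fraction of worldwide annual turnover
}

const TIERS = {
  prohibited:     { cap: 35_000_000, pct: 0.07  }, // Art. 99(3)
  highRisk:       { cap: 15_000_000, pct: 0.03  }, // Art. 99(4)
  incorrectInfo:  { cap: 7_500_000,  pct: 0.01  }, // Art. 99(5)
} satisfies Record<string, FineTier>;

// Maximum exposure in EUR for a firm with the given annual turnover.
function maxFine(tier: keyof typeof TIERS, turnover: number): number {
  const t = TIERS[tier];
  return Math.max(t.cap, t.pct * turnover);
}
```

For a firm with €1 billion turnover, a prohibited-practice violation risks €70 million (7% exceeds the €35 million cap), while a smaller firm with €100 million turnover still faces the full €15 million cap for a high-risk violation, since 3% of its turnover is only €3 million.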