When will businesses have to comply with the EU's AI Act?
Published on 21st May 2024
Businesses need to be ready for prohibitions and a staggered compliance timetable that takes effect from late 2024
The European Union's (EU) new cross-sector regulation covering artificial intelligence (AI) passed the final legislative hurdle on 21 May when it was approved by the Council of the EU. The text is now settled.
As clarity about dates starts to emerge, which obligations do businesses need to be ready for and by when? It is important for businesses to invest time now in mapping out how and where they are using AI, which obligations will apply to which use cases, what actions will be needed and what the deadlines are for compliance.
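By way of a purely illustrative sketch (the field names and risk labels below are our own working shorthand, not defined terms from the Act), a simple register entry for each AI use case might look like this:

```python
from dataclasses import dataclass, field

# Illustrative only: field names and risk labels are our own shorthand,
# not defined terms from the AI Act.
@dataclass
class AIUseCase:
    name: str                  # internal name of the system or use case
    role: str                  # "provider", "deployer", "importer" or "distributor"
    description: str           # what the system does and where it is used
    risk_tier: str             # e.g. "prohibited", "high-risk", "transparency", "minimal"
    obligations: list[str] = field(default_factory=list)  # AI Act duties identified
    actions: list[str] = field(default_factory=list)      # remediation steps planned
    deadline: str = ""         # compliance date under the phased timetable
```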
Who does the AI Act apply to?
The new law will apply directly to businesses in the 27 Member States of the EU, but – crucially for non-EU businesses including those in the UK and USA – it will also apply to any businesses with customers in the EU. In addition, it will be directly applicable, under the European Economic Area (EEA) arrangements, to businesses in Norway, Iceland and Liechtenstein.
But it will also potentially bite on any business operating in the EEA, or wherever the outputs of an AI system are used in the EEA. The latter point is important: essentially, an AI system accessible to users in the EEA may be subject to the AI Act, even if it was not intended for EU use.
The definition of an "AI system" in Article 3 of the AI Act is: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
This definition was intended to align with the Organisation for Economic Co-operation and Development (OECD) definition of AI, but translation into the various EU languages may cause its meaning to extend beyond self-improving machine learning systems; clarity through guidance will be needed.
It is important to note that AI Act compliance is not focused solely on those who build AI systems but may bite on others involved in their operation, distribution or use (i.e. providers, importers, distributors and deployers/users).
Late June/early July 2024: AI Act becomes law
The definitive AI Act text will be published in the Official Journal in the coming weeks and will become binding law 20 days after that. This is currently expected to be late June or early July.
The AI Act has a phased implementation timetable. Exact deadlines will flow from the date of publication, with each set of obligations taking effect at a fixed interval after the Act enters into force.
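As a rough sketch of how those intervals translate into dates (the publication date below is a placeholder, and the month figures reflect the phasing described in this article), the key deadlines could be computed as follows:

```python
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

# Assumption: the publication date below is a placeholder; real deadlines
# flow from the actual Official Journal publication date.
publication = date(2024, 7, 1)
entry_into_force = publication + timedelta(days=20)

milestones = {
    "Prohibitions (Article 5)": entry_into_force + relativedelta(months=6),
    "General-purpose AI regime": entry_into_force + relativedelta(months=12),
    "High-risk regime (Annex III) and transparency obligations":
        entry_into_force + relativedelta(months=24),
    "High-risk regime (Annex I products)": entry_into_force + relativedelta(months=36),
}

for obligation, deadline in milestones.items():
    print(f"{obligation}: from {deadline.isoformat()}")
```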
Late 2024: prohibitions
The prohibitions on specified categories of banned AI will come into force six months after the legislation becomes law – in late 2024.
The prohibitions, set out in Article 5 of the AI Act, cover uses of AI that are considered to pose unacceptable risks to health and safety or fundamental rights. Banned applications include:
- AI systems that use subliminal, manipulative or deceptive techniques intended materially to distort someone's behaviour by impairing their ability to make an informed decision, causing them to take a decision that they would not otherwise have taken, with significant harm resulting.
- AI that exploits a person's vulnerabilities such as age, disability, social or economic circumstances in order to distort their behaviour, causing significant harm.
- Social scoring based on behaviour or personal characteristics that results in detrimental treatment of a person, either in a social context unrelated to the one in which the scoring data was originally gathered, or in a way that is unjustified or disproportionate.
- AI systems that create or expand facial-recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Emotion-inference systems used in the workplace or educational settings (unless for medical or safety reasons).
- Biometric categorisation where information such as someone's face or fingerprint is used to deduce sensitive characteristics such as their race, political views, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (with exceptions for law enforcement).
- AI systems used to predict the likelihood of a person committing a criminal offence, based solely on profiling or an assessment of their personality traits.
- Real-time remote facial recognition systems used in publicly accessible spaces for law enforcement, with exceptions.
There is, as yet, no guidance on the interpretation of these provisions, and much remains unclear. Given the short six-month compliance deadline, however, businesses need to press on with assessing their risk in this area and developing plans to address any areas of potential non-compliance.
Enforcement authorities will be able to impose fines for breach of these provisions under Article 99(3) of the AI Act of up to €35 million or 7% of worldwide group turnover (whichever is higher).
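As a worked illustration of how the cap operates (the turnover figures below are hypothetical), the maximum fine is simply the higher of the fixed amount and the turnover percentage; the same structure applies, with €15 million and 3%, to the regimes discussed later in this article:

```python
def fine_cap(worldwide_turnover_eur: float,
             fixed_cap_eur: float = 35_000_000,
             turnover_pct: float = 0.07) -> float:
    """Maximum fine: the higher of the fixed cap and the percentage of
    worldwide group turnover (Article 99(3) figures by default)."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Hypothetical group with EUR 2bn turnover: 7% is EUR 140m, which exceeds
# the EUR 35m fixed figure, so the cap is EUR 140m.
print(fine_cap(2_000_000_000))                    # 140000000.0
print(fine_cap(100_000_000))                      # 35000000 (fixed cap bites)
# Lower tier, e.g. the general-purpose AI regime (Article 101):
print(fine_cap(2_000_000_000, 15_000_000, 0.03))  # 60000000.0
```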
Summer 2025: general-purpose AI regime
The regime for general-purpose AI essentially covers AI models that perform one core function (for example, generating text, speech, images or video) but which can be integrated into AI systems with a very wide range of functionality.
All providers of general-purpose AI models will be required to meet various transparency obligations (including information about the training data and in respect of copyright) intended to support downstream providers using the model in an AI system to comply with their own AI Act obligations. Note that the general-purpose AI models regime will apply in addition to the core risk-based tiers of AI Act regulation.
There is an additional layer of obligations for general-purpose AI models considered to have "systemic risk" and which have been designated as such by the European Commission. This category is expected to comprise the largest, most state-of-the-art AI models. They will be subject to a further tier of obligations including model evaluation and adversarial testing, assessment and mitigation of systemic risk, monitoring and reporting on serious incidents, adequate cybersecurity, and monitoring and reporting on energy consumption.
Failure to comply with the general-purpose AI provisions will be subject (under Article 101 of the AI Act) to penalties of up to €15 million or 3% of worldwide group turnover (whichever is higher).
Summer 2026: high-risk AI regime
High-risk AI comprises two categories: AI that is a safety component in, or is itself, a product subject to EU product safety regulations (listed in Annex I); and AI that is specifically classified as high risk (listed in Annex III).
The high-risk framework will come into effect in summer 2026 in relation to Annex III systems, which include:
- Permitted remote biometric identification systems.
- AI used for biometric categorisation based on sensitive or protected attributes or characteristics; that is, systems that use sensitive or protected data, not systems that infer that data (which are prohibited).
- Emotion recognition systems (outside the workplace or educational settings, where these systems are prohibited).
- AI systems used as safety components in critical physical and digital infrastructure.
- AI systems for educational or vocational contexts, including determining access to training institutions, learning evaluation systems, educational needs appraisal systems and systems used to monitor behaviour during exams.
- Workplace AI systems for recruitment, applications appraisal, awarding and terminating employment contracts, work task allocation and performance appraisal.
- AI used for creditworthiness assessment or credit scoring (excluding systems used to detect financial fraud).
- Systems used to assess risk and set prices for life and health insurance.
- Systems for the despatch or prioritisation of emergency services.
- Various public sector applications in relation to benefits eligibility assessment, law enforcement, immigration, asylum, judicial and democratic processes.
There is, however, an important carve-out for AI systems that fall within the Annex III categories but do "not pose a significant risk of harm to the health, safety or fundamental rights of natural persons". This will be a matter for self-assessment against specified criteria, and the assessment must be documented.
Onerous compliance obligations are imposed on both providers and deployers of qualifying high-risk AI systems, requiring a continuous, iterative risk-management system across the full lifecycle of the AI system. Compliance requirements encompass technical documentation, record-keeping, transparency, human oversight, accuracy, robustness and cybersecurity.
There are extensive obligations on the governance and curation of data used to train, validate and test high-risk AI. Such data sets must be relevant, sufficiently representative and, to the best extent possible, reasonably free of errors and complete given the intended purpose. The data should have "appropriate statistical properties" and must reflect, to an appropriate extent, the "specific geographical, contextual, behavioural or functional setting" within which the AI system will be used.
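The Act does not prescribe specific technical checks, but a minimal sketch of the kind of data-governance screening a provider might run and document over a training set (the column names here are assumptions) could look like this:

```python
import pandas as pd

def dataset_quality_report(df: pd.DataFrame, setting_col: str) -> dict:
    """Illustrative screening only; the AI Act does not mandate these exact
    checks. Summarises completeness, duplicates and the distribution across
    a column capturing the intended geographical or contextual setting."""
    return {
        "missing_share": df.isna().mean().to_dict(),   # completeness per column
        "duplicate_rows": int(df.duplicated().sum()),  # possible collection errors
        "setting_distribution":                        # representativeness of the
            df[setting_col].value_counts(normalize=True).to_dict(),
    }
```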
Failure to comply with the high-risk regime is subject (under Article 99(4)) to penalties of up to €15 million or 3% of worldwide group turnover (whichever is higher).
Summer 2026: low-risk transparency obligations
Some AI systems which fall outside the high-risk regime will nevertheless be subject to obligations, mainly relating to transparency. The prime concern is that users must be made aware that they are interacting with an AI system or its outputs, if it is not obvious from the circumstances and context.
These provisions will apply to AI systems such as chatbots, emotion recognition systems and biometric categorisation systems. They will also apply to AI systems generating synthetic audio, image, video or text content, including deep fakes (with exceptions for artistic and creative uses), and to AI-generated text published to inform the public on matters of public interest.
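How that disclosure is surfaced is a design choice for the deployer. As a minimal sketch (the wording and function here are our own, not drawn from the Act), a chatbot might simply prefix its reply where the AI nature of the interaction is not otherwise obvious:

```python
DISCLOSURE = "You are interacting with an AI system."

def disclosed_reply(model_answer: str, first_turn: bool) -> str:
    # Prepend the notice on the first turn so the user is informed up front.
    return f"{DISCLOSURE}\n\n{model_answer}" if first_turn else model_answer
```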
Failure to comply with the transparency regime is subject (under Article 99(4)) to penalties of up to €15 million or 3% of worldwide group turnover (whichever is higher).
Summer 2027: high-risk systems subject to product safety regulation
For AI which is integrated into products subject to the product safety regulations listed in Annex I, the high-risk regime will come into force after three years, in summer 2027. The regime, and the penalties for non-compliance, are essentially the same as for the Annex III high-risk systems.
What else should businesses be considering?
The AI Act is not a one-stop-shop for legal compliance and risk management around the development and deployment of AI: there are many other areas of law and regulation that may be in play. We offer an overview of the risks that a business should consider.
If you would like to discuss any of the issues raised in this article, please do not hesitate to contact the authors or your usual Osborne Clarke contact.