EU AI Act becomes law on 1 August: what are the compliance deadlines for businesses?
Published on 17 July 2024
The phased compliance timetable is now set, with the first deadline in just six months' time
The AI Act was published in the Official Journal of the EU on 12 July 2024, a couple of months later than expected, and will become law on 1 August 2024. Businesses need to start preparing now to ensure compliance at the right time for the different types of AI categorised in the AI Act (see our earlier Insight, which also sets out the potential consequences of failing to comply). The staggered deadlines, which all flow from 1 August 2024 and come into force over the course of three years, are set out below.
Six months: 2 February 2025 – prohibited practices
The ban on certain prohibited AI practices, as set out in Article 5 of the AI Act, will come into effect on 2 February 2025.
The Article 5 prohibitions cover uses of AI that are considered to materially distort human behaviour, potentially leading to significant harms to health, financial interests or fundamental rights. They include, among other things, AI systems that are manipulative or deceptive, emotion-inference systems, and systems that exploit a person's vulnerabilities.
Under Article 96, the European Commission is obliged to publish guidelines on the interpretation of the Article 5 prohibitions. At present, there is still much that is unclear and there is no deadline for publication of the guidance. However, given the short deadline for compliance, businesses need to start preparing now.
One year: 2 August 2025 – general-purpose AI
The regime for general-purpose AI comes into effect on 2 August 2025. It covers AI models that can perform a wide range of distinct tasks, such as generating text, audio, images or video, and can be integrated into AI systems that have an even wider functionality.
Two years: 2 August 2026 – high-risk AI in Annex III and transparency obligations
There are two categories of high-risk AI: AI that comes within the EU product safety regime (see below); and AI that is specifically classified as high risk (as listed in Annex III). The regime for the latter category of high-risk AI comes into effect on 2 August 2026.
Annex III systems include, among other things, workplace AI systems, AI used for safety in critical infrastructure, and AI used for biometric categorisation based on sensitive data.
Businesses using certain AI that is not high risk (such as chatbots and systems generating synthetic outputs, including deep fakes) are subject to transparency obligations and must also comply by 2 August 2026. Essentially, businesses must ensure that users know when they are interacting with AI.
Three years: 2 August 2027 – high-risk AI in Annex I
AI that is integrated into products that are subject to the product safety regulations listed in Annex I will also be categorised as high-risk AI. The regime for this category of high-risk AI comes into effect on 2 August 2027.
There is no guidance as yet on either of the high-risk AI regimes, although the Commission is obliged to publish guidelines on high-risk AI serious incident reporting by 2 August 2025 and on determining whether an AI system is high risk by 2 February 2026.
Other considerations
Businesses should remember that the AI Act interacts with other areas of law and regulation, such as consumer, intellectual property, data protection, cyber security, environmental and other laws. As such, the risks from a compliance point of view cannot all be dealt with by complying with the AI Act alone. Our overview of what risks businesses need to consider when using AI explains more.
If you would like to discuss any of the issues raised in this article, please do not hesitate to contact the authors or your usual Osborne Clarke contact.