Artificial Intelligence | UK Regulatory Outlook January 2025
Published on 13 January 2025
UK's AI Safety bill | AI and intellectual property law consultation deadline | Generative AI and data protection | EU AI Act
UK's AI Safety bill
A key event this year should be the arrival of the UK's first artificial intelligence (AI) legislation, signalling a shift in approach from the Labour government towards AI regulation. It contrasts with that of the previous government, which did not intend to legislate specifically on AI, preferring to rely on various existing regulations and regulators, such as the Information Commissioner's Office (ICO) and the Competition and Markets Authority (CMA).
The new government has indicated that its focus will be on AI safety, in particular the development of the most powerful cutting edge AI models (often known as "frontier models") including a requirement for developers to share these AI models for testing before they are publicly released. The anticipated AI bill is expected to put existing voluntary commitments signed by some of the leading AI companies onto a statutory footing, as well as related measures such as moving the UK's existing AI Safety Institute (which would conduct model testing) onto an arm's length footing from the government.
Timing is unclear: in autumn 2024, government sources indicated an intention to consult on the matter, creating expectations that it would be treated as a priority. The consultation has yet to materialise, but is expected fairly early in 2025.
The government hopes that the new law will "reduce regulatory uncertainty for AI developers, strengthen public trust and boost business confidence." It is intended to tie in with other government measures on AI, including proposed changes to copyright law (see item below) and the AI Opportunities Action Plan.
The much anticipated AI Opportunities Action Plan, led by Matt Clifford, was released on 13 January 2025. The report outlines 50 recommendations for the government built upon core principles that advocate for the government to support innovators, invest in becoming a leading AI customer, attract global talent to establish companies in the UK, and leverage the UK's strengths and emerging catalytic areas. The report highlights the crucial importance of data in underpinning AI development. It urges the government to act quickly to announce how it plans to regulate frontier AI models, and to urgently reform UK intellectual property law on text and data mining.
The government has said that it endorses the plan and will "take forward" its proposals. The plan was commissioned by the government to identify how AI "can drive economic growth and deliver better outcomes for people across the country". See this Regulatory Outlook for background.
It is notable that the government is proposing to confine its AI-specific legislation to affect only what it anticipates will be a handful of leading AI companies, those "developing the most powerful AI models of tomorrow". This differs markedly from the position of the EU, whose AI Act (see below) affects a wider range of businesses. Developers and deployers of AI systems will watch developments keenly, but for the moment, it appears likely that the EU, having attained first mover advantage with its more comprehensive AI Act, will continue to call the shots on AI regulation for many organisations.
AI and intellectual property law
In December 2024, the UK government published a consultation paper on changes and clarifications to copyright law, aimed at making the UK more attractive to the tech companies developing AI models, while balancing this with support for the creative industries on whose content many AI systems are trained.
The main proposal is an "opt-out" text and data mining (TDM) copyright exception allowing use of content for training AI systems unless the owner has expressly opted out of allowing that use. This new exception would apply only where the user already had lawful access to the content, such as via a paid-for subscription service, or where it had been made freely available online.
The government has said that it will introduce the TDM exception only if accompanied by transparency measures, probably in the form of obligations on AI developers to disclose details of:
- the copyright content and datasets used in training the AI systems
- web crawlers used to obtain the training data, and
- measures taken to ensure compliance with opt-out expressions, together with supporting measures such as record-keeping requirements.
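An opt-out regime of this kind presupposes machine-readable signals that web crawlers can honour. The Robots Exclusion Protocol (robots.txt) is one existing mechanism of this sort. As a purely illustrative sketch (the crawler name "ExampleAIBot" and the robots.txt content below are hypothetical, and the consultation does not prescribe any particular technical mechanism), Python's standard library can evaluate such directives:

```python
from urllib import robotparser

# Hypothetical robots.txt in which a publisher opts an AI training
# crawler ("ExampleAIBot" is an invented name) out of its content
# while leaving ordinary crawlers unaffected.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

def may_crawl(agent: str, url: str) -> bool:
    """Return True if the robots.txt directives permit `agent` to fetch `url`."""
    parser = robotparser.RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(agent, url)

# The opted-out training crawler is refused; a generic crawler is allowed.
print(may_crawl("ExampleAIBot", "https://example.com/articles/1"))  # False
print(may_crawl("SearchBot", "https://example.com/articles/1"))     # True
```

In practice, record-keeping obligations of the kind proposed would go further than this check, requiring developers to log which opt-out signals were encountered and how they were acted on.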
Another key proposal is the removal of copyright protection for computer-generated works. Other changes are floated, such as obligations to label so-called deepfakes (or "digital replicas") and other AI-generated or AI-manipulated content.
The consultation is open until 25 February 2025, and it will be interesting to see which of the government's proposals are ultimately taken forward and how quickly any resulting changes to copyright law will be implemented. For more information see our Insight.
The interplay between AI and IP will also be considered by the English courts when the well-known Getty Images v Stability AI case comes to trial in June/July 2025. This case relates to the alleged use of Getty's image library by Stability AI to train its Stable Diffusion text-to-image model and concerns alleged infringement of copyright, database right and trade marks.
The patentability of AI inventions will be on the UK Supreme Court's agenda this year when it hears an appeal in the Emotional Perception patent case. The Court of Appeal held that inventions involving artificial neural networks should be treated in the same way as any other computer-implemented inventions and, therefore, must make a technical contribution in order to be patentable. Will the Supreme Court take the same approach? This is an eagerly anticipated decision and should bring clarity to the issue.
See this Insight for more on what 2025 holds for the interplay of AI and intellectual property.
Generative AI and data protection
The ICO has set out its analysis, views and current expectations on how specific areas of data protection law apply to generative AI systems, in a response to its five-part consultation series published in December 2024. Pending updated formal guidance, businesses can now review the ICO's position, which clarifies its regulatory expectations on generative AI. However, the ICO also highlights that businesses still need to consider its existing core guidance on AI and data protection.
Once the Data (Use and Access) Bill is finalised and its changes to data protection law take effect (see Data section), the ICO intends to update its guidance to take account of the advent of generative AI. The ICO will also tailor its final position to align with its upcoming joint statement on foundation models with the CMA. See also this Regulatory Outlook.
Businesses based in the EU, or that are otherwise subject to the EU General Data Protection Regulation, will also need to take account of the European Data Protection Board's (EDPB's) recently issued opinion on the processing of personal data in the context of AI models, which looks at issues including:
- when an AI model will be considered anonymous
- legitimate interest as a legal basis for the training or use of AI models, and
- whether the data protection regime catches the deployment (as opposed to the development) of AI models trained with unlawfully processed personal data.
See Data section for more.
EU AI Act
The EU AI Act entered into force on 1 August 2024, and its provisions will be applicable progressively over the next few years. Things to look out for in 2025 include:
- Prohibitions on certain categories of AI practices, which will be banned outright from 2 February 2025. These provisions relate to a list of AI applications which are considered to pose such a significant risk to health and safety or fundamental rights that they should be banned. The general provisions of the AI Act, dealing with scope and definitions, will also apply from this date.
- Provisions on general-purpose AI models, which cover foundation models (including those underpinning many generative AI systems), will apply from 2 August 2025, along with the notification, governance, penalties and confidentiality provisions.
- Publication of the general-purpose AI Code of Practice, which is currently in draft and expected to be finalised by May 2025. See this Regulatory Outlook for details of the first draft of the code, and this Regulatory Outlook for an outline of the drafting process.
- Guidance from the EU AI Office, which is expected to publish compliance information on some aspects of the AI Act in 2025. This should include its views on the scope of prohibited AI practices, and the definition of "AI system", following the consultation on these questions which took place before Christmas 2024.
See our Insight for more on the compliance deadlines.
The EU AI Act is relevant to a range of players in the AI ecosystem, covering not only organisations that develop AI models and systems, but also those using them or providing them to others. The Act applies not only to AI creators and users based in the EU, but has extra-territorial reach, so that in some cases it may catch entities based outside the EU, such as those based in the UK that provide AI systems to users in the EU.
Organisations that breach the prohibition provisions risk significant fines, of up to the higher of seven per cent of worldwide turnover or €35 million. Breaching the general-purpose AI provisions could lead to fines of up to the higher of three per cent of worldwide turnover or €15 million. It remains to be seen whether the regulators will take a relatively gentle approach to enforcement initially, or will look to make an early show of taking action, particularly in respect of more egregious or high-profile instances of non-compliance.
Businesses intending to develop or deploy AI systems that do not already have AI Act compliance programmes underway should consider establishing them, and those with existing programmes should keep a close eye out this year for the guidance expected from relevant regulatory authorities.