Regulatory Outlook

Artificial Intelligence | UK Regulatory Outlook February 2025

Published on 27th Feb 2025

UK updates: Data Bill to include AI | AI Safety Institute to focus on national security | First international AI safety report | Government publishes AI cyber security code of practice. EU updates: First provisions of AI Act in effect | First AI Act guidelines published | AI Act draft code of practice delayed | AI Liability Directive withdrawn


UK updates 

Data (Use and Access) Bill amended to include AI provisions 

The Data (Use and Access) Bill is progressing swiftly. It has completed its passage through the House of Lords, had its first and second readings in the House of Commons, and is now at committee stage. A new version of the bill was published following the Lords stages.

Various amendments were made in the Lords, some of which have direct relevance to the use of copyright works for artificial intelligence (AI) development. The new clauses 135 to 139 cover the use of "web crawlers" for content scraping, especially for AI purposes. These include provisions requiring, for example, compliance with UK copyright law (even if web scraping occurs abroad), and disclosure of the web crawlers used and of the sources of data used to train any AI models.

The AI-related amendments in particular have sparked much debate. The government has not yet indicated whether it intends to attempt to reverse any of these changes. 

For more on the changes made to the bill by the Lords and its expected progress through the Commons, see the data law section.

AI Safety Institute gets rebrand with new focus on national security 

The Department for Science, Innovation and Technology (DSIT) has announced that the AI Safety Institute (AISI) will be renamed the AI Security Institute (conveniently not requiring a new acronym).  

The science secretary, Peter Kyle, said the AISI will have a "renewed focus" on national security and protecting citizens from crime and "will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life".

He said that the new name better reflects AISI's efforts to address "serious AI risks with security implications", such as how the technology might be used to develop chemical and biological weapons, carry out cyber-attacks, or commit fraud and child sexual abuse.

AISI will not focus on bias or freedom of speech, he added, "but on advancing our understanding of the most serious risks posed by the technology to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops". 

The renamed institute will partner with bodies across government to achieve these security aims, including the Defence Science and Technology Laboratory (the Ministry of Defence's science and technology organisation) and the Home Office.

First international AI safety report 

In preparation for the French AI Action Summit, the UK government has announced the publication of the first international report on AI safety, produced with input from 100 world-leading experts led by AI luminary Yoshua Bengio. The report analyses emerging AI risks with the intention that it will guide policymakers' decision-making by helping them understand general-purpose AI capabilities, risks and possible mitigations.

The government says that the report "sets a new standard for scientific rigour in assessing AI safety" and intends it to become the "global handbook" for AI safety. 

UK government publishes AI cyber security code of practice 

See Cyber section.

Treasury committee launches inquiry into AI in financial services 

See Fintech, digital assets, payments and consumer credit section.

EU updates

First provisions of the EU AI Act come into effect 

The first provisions of the EU AI Act came into effect on 2 February 2025, including the bans on prohibited AI practices and the general AI literacy obligations.

The following applications of AI are now banned (under article 5): 

  • Harmful AI-based manipulation and deception
  • Harmful AI-based exploitation of vulnerabilities
  • Social scoring
  • Individual criminal offence risk assessment or prediction
  • Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
  • Emotion recognition in workplaces and education institutions
  • Biometric categorisation to deduce certain protected characteristics 

  • Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces

The AI literacy obligations (in article 4) mean that providers and deployers of AI systems must take measures to ensure, so far as possible, a sufficient level of AI literacy for staff and other people dealing with the operation and use of AI systems on behalf of the provider or deployer.

AI literacy is defined as: "skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause". 

The measures taken should take account of the relevant persons' technical knowledge, experience, education, and training, as well as the context in which the AI systems are to be used and the persons on whom they are to be used.

See this AI resource from the European Commission. 

See also this Insight for details of the AI Act requirements, this Insight for the implementation timetable, and this Insight for details of the AI literacy requirements.

First EU AI Act guidelines are published 

The European Commission has published its long-awaited guidelines on the definition of "AI system" and on prohibited categories of AI.

The guidelines on the definition of AI system do little beyond parsing the definition, straightforwardly applying the recital and giving predictable examples. The key is the ability of a system to act autonomously and infer how to generate output based on inputs. Section 5.2 gives examples of types of computation which will be out of scope. 

The definition is widely drafted, so it will be difficult for genuine AI systems – even basic ones – to avoid being regulated. However, the guidelines make clear that the AI Act is not intended to capture the application of standard mathematical, statistical and computing techniques that have been around for years.

Article 5(1)(a) prohibits the use of subliminal, manipulative and deceptive techniques, which the guidelines in general interpret widely.

For instance, subliminal techniques include drawing attention to specific eye-catching content to prevent someone noticing another part of the content. They also include techniques which target persuasive messages based on an individual's data. 

Article 5(1)(b) prohibits the use of AI systems that exploit vulnerabilities. It could cover systems that exploit the reduced cognitive capacity of some elderly people by targeting them with services they do not need, such as unnecessary insurance policies.

The "significant harm" element can be met where the harm is to someone other than the person whose vulnerabilities have been exploited. For example, harms to society could also be relevant, such as increased healthcare costs and reduced productivity. 

Article 5(1)(c) prohibits certain uses of AI evaluation and classification based on social behaviours or personal characteristics. This is known as social scoring. Some types of marketing and service provision could be caught; for example, if individuals are unfairly offered worse deals based on profiles drawn from a wide range of data points, some of which should not be relevant to the unfavourable treatment received.

EU AI Act's draft code of practice delayed 

The code of practice on general-purpose AI, made under the EU AI Act, is currently in draft and due to be finalised by the end of April, ahead of the AI Act's provisions on general-purpose AI coming into force in August 2025. While compliance with the code is, in theory, not mandatory, adherence to it would put companies in a strong position to demonstrate compliance with the AI Act.

The third draft was supposed to be published on 17 February, accompanied by a survey to allow participants to give feedback. However, this interim deadline was extended at the last minute, reportedly following a request for more time from some of the independent AI experts drafting the code. The third draft is expected in February or March.  

Some stakeholders involved in the process have expressed concerns about the direction of travel of the drafting, saying that it goes beyond what the EU AI Act mandates in respect of copyright law and third-party testing of AI models and that some of the requirements are technically unfeasible. The US government has also expressed concerns about what it considers to be "excessive" regulation of AI by the EU. 

Proposed EU AI Liability Directive to be withdrawn 

The European Commission's work programme for 2025 shows the AI Liability Directive on the list of proposals to be withdrawn. This is subject to approval by the European Parliament and the Council. The move has provoked some criticism, and it has been reported that the EU's trade commissioner, Maroš Šefčovič, indicated during a press conference that the proposal might survive after all: he said that inclusion on the withdrawal list is an "invitation" to legislators to tell the Commission if they want to retain it.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
