Artificial intelligence | UK Regulatory Outlook March 2025
Published on 26th March 2025
UK AI (Regulation) Bill reintroduced to Parliament | UK Government outlaws AI tools designed to generate child sexual abuse material | UK ICO consultation response on AI and copyright | UK DRCF case studies based on its AI and Digital Hub advice | Third draft of EU AI Act code of practice | EU Commission indication on general-purpose AI guidance | California's AI safety bill re-emerges

UK updates
Artificial Intelligence (Regulation) Bill reintroduced to Parliament
The Artificial Intelligence (Regulation) Bill was reintroduced to Parliament on 4 March 2025 by Lord Holmes. The bill was dropped last year after the general election was announced (see this Insight for background). The bill includes proposals to:
- Create an AI authority, which would ensure that regulators consider and align their approaches on AI, identify gaps in AI regulatory responsibilities, coordinate review of relevant laws, and so on.
- Establish regulatory sandboxes to allow businesses to test AI innovations with real consumers.
- Require businesses developing, deploying or using AI to designate an AI officer.
- Mandate organisations involved in training AI to report to the AI authority on third-party data and intellectual property (IP) used in that training, including providing assurances that the data is used with consent and otherwise in compliance with IP laws.
- Require organisations supplying a product or service involving AI to provide health warnings, labelling, and consent options to customers.
- Mandate independent audits of AI systems.
As a private members' bill, it is not backed by the government, and such bills rarely become law. They can, however, highlight important issues and spark debate, thereby influencing future government-backed legislation.
Government to outlaw AI tools designed to generate child sexual abuse material
Via the new Crime and Policing Bill, introduced on 25 February 2025, the government aims to criminalise equipment and systems (many of which will be AI-based) that are made or adapted to create child sexual abuse (CSA) material. Legislation in this area is currently piecemeal:
- The government has a manifesto commitment to ban the creation of sexually explicit deepfakes; however, this legislation has not yet materialised.
- It is already an offence under the Sexual Offences Act 2003 (SOA) to share, or threaten to share, an intimate photograph or film, including AI-generated images.
- The House of Lords has added a new clause to the draft Data (Use and Access) Bill amending the SOA to make it an offence to create or solicit the creation of a purported intimate image of an adult without consent.
- This change to the Data (Use and Access) Bill relates only to adults because similar acts in relation to children are already offences (for example, under the Protection of Children Act 1978).
Clause 36 of the new bill proposes to close a gap by making it an offence to "make, adapt, possess, supply or offer to supply" a "CSA image-generator". A "CSA image-generator" would include anything (including services, software and electronic information) used to create, or facilitate the creation of, CSA photographs, pseudo-photographs or other images. Adapting anything so that it becomes a CSA image-generator would also be an offence.
Although the scope of clause 36, as currently drafted, appears fairly narrow (since the image-generator needs to be specific to CSA material), providers of image-generation software will likely be considering the detailed provisions carefully, and watching for any changes as the bill proceeds through the parliamentary process.
See more on the bill in the Digital regulation section.
The bill has now entered the committee stage in the House of Commons, where it will be scrutinised line by line. The Public Bill Committee has launched a call for evidence, giving those with relevant expertise and experience, or a special interest, a chance to submit views on the bill for the committee's consideration. Written evidence should be submitted as soon as possible, and no later than 5pm on 13 May 2025.
ICO responds to consultation on copyright and AI
The Information Commissioner's Office (ICO) has published a response to the government's consultation on copyright and AI (see our Insight for background). While the ICO does not regulate UK copyright law, some copyrighted material will contain personal data, and the ICO believes that the government should consider the interaction between data protection and any modified future copyright regime, to avoid inadvertently promoting practices which would be lawful under copyright law but may be unlawful under data protection laws.
The ICO highlights that if the government proceeds with its proposals to extend the exemption for text and data mining (TDM) and/or to introduce an "opt-out" option (both mooted as possible ways to facilitate the use of copyright material for AI training), it must clarify that this will not in and of itself provide a lawful basis for any personal data processing. Since a substantial amount of the material involved in TDM could be personal data, each case must be assessed individually.
Entities "opting in" (or not "opting out") their data for such processing should be aware of their potential data protection obligations, for example when their content contains personal data for which they are controllers.
The ICO also emphasises the importance of transparency, so that data subjects are aware of how their data may be accessed by web crawlers for use in AI training. This is in keeping with the ICO's views on data use for generative AI (and with the European Data Protection Board's position). The ICO further reminds organisations that data protection compliance requirements should form part of licensing agreements for AI training data where personal data is involved, and that those who rely on or put in place such licences should ensure those requirements are met.
The ICO also recommends gathering views from organisations hosting personal data that is mined or scraped, such as websites or social media platforms, as it is crucial to understand how they monitor entities scraping their sites for data, and whether they disclose this information in their privacy policies.
DRCF showcases first case studies based on its AI and Digital Hub advice
The Digital Regulation Cooperation Forum (DRCF) has published the first in a series of case studies based on informal advice provided by its AI and Digital Hub. The hub allows businesses to get advice from all four DRCF member regulators via a single query. Case studies relate to:
- Data protection and consumer law issues for a business helping SMEs to deploy AI.
- Managing the impact of third-party software defects on operational resilience.
- Use of AI to automate reviews of advertisements for financial promotions.
- Navigating data privacy, online safety and consumer law in AI-enabled health discussion forums.
- Perimeter guidance and data considerations for an AI-enabled financial services compliance tool.
The DRCF comprises the ICO, the Competition and Markets Authority, Ofcom and the Financial Conduct Authority. Though the case studies are, of course, fact-specific, they provide a useful additional indication of the regulators' approaches to these and similar issues.
EU updates
EU AI Act: third draft of the general-purpose AI code of practice published
The third draft of the general-purpose AI code of practice has been published, some three weeks later than intended, reportedly following concerns raised by some AI developers. It incorporates changes based on the feedback received on the second draft, published in December 2024. Once finalised, the (voluntary) code will be a tool for general-purpose AI model providers to demonstrate compliance with the EU AI Act.
The draft code is based on a list of high-level commitments and provides detailed measures to implement each commitment. It includes two commitments, on transparency and copyright, for all providers of general-purpose AI (GPAI) models, and a further 16 commitments, on safety and security, that apply only to providers of GPAI models classified as posing systemic risk.
The new draft includes a form for documenting details of GPAI models in order to facilitate compliance with transparency requirements. The section on documenting copyright transparency information is described as "simplified and clearer"; however, some critics have complained that provisions which would have benefited copyright owners have been watered down. Alongside the draft code, the EU AI Office is working on a template form for developers to use to provide the public summary of the data used to train their general-purpose models, as required under the AI Act.
Stakeholders are invited to provide written feedback on the third draft by 30 March 2025, and to take part in further discussions and workshops, following which the final round of drafting will take place. According to the latest timeline, the final code should be ready "from" May 2025, having previously been slated for the end of April.
EU AI Act: EU Commission indicates the direction of travel on guidance for general-purpose AI
At the same time as publishing the third draft of the code (see preceding item), the Commission stated that the EU AI Office will publish guidance for general-purpose AI, including on:
- Definitions of "general-purpose AI model", of "placing on the market", and of "provider".
- Clarification of responsibilities along the value chain, such as to what extent rules apply to downstream actors modifying or fine-tuning a GPAI model.
- The exemption for models provided under free and open-source licences.
- The effects of the AI Act on models placed on the market before August 2025.
No publication dates are given. Businesses developing, or looking to source, AI systems will hope that the guidance comes soon, to help them understand the scope of their obligations.
International updates
Re-emergence of California's AI safety bill
The California lawmaker behind the AI bill SB 1047, Scott Wiener, has introduced a new bill (SB 53) after the original was vetoed by California's governor, Gavin Newsom, last year (see this Regulatory Outlook).
The original bill contained extensive provisions addressing AI safety issues, including a requirement for safety testing of AI models before deployment, and attracted criticism from some technology firms. It has now partially re-emerged in a watered-down version, retaining two aspects of the original bill to which the governor had not expressed opposition.
The new bill focuses on whistleblower protections, including making it illegal for companies to retaliate against employees who alert the authorities, or an internal company reporting officer, to concerns about AI developments relating to "critical risks". It would apply to developers that have trained one or more foundation models using computational power costing at least $100 million.
Critical risks would include AI developments resulting in the death of, or serious injury to, more than 100 people, or in more than $1 billion in damage, where caused by:
- The creation and release of a chemical, biological, radiological or nuclear weapon.
- A cyberattack.
- The AI carrying out actions which would be criminal if done by a human, or evading control by its developer/user.
The bill joins a raft of other draft AI-specific legislation in California (and other states), covering ground as diverse as algorithmic decision-making, non-consensual deepfake pornography, watermarking of AI-generated content, transparency obligations for AI chatbots, and disclosure of training data sources. How many of them make it onto the statute books, and with what changes, remains to be seen.