Artificial intelligence | UK Regulatory Outlook May 2024
Published on 31 May 2024
UK regulators publish their strategic approaches to AI | Launch of the AI and Digital Hub | DRCF's views on fairness in AI
UK updates
UK regulators publish their strategic approaches to AI
As requested by the government in its response to the AI white paper consultation, various UK regulators submitted their strategic approaches to AI to the Department for Science, Innovation and Technology (DSIT) at the end of April, and also published them:
- Bank of England and Prudential Regulation Authority
- Competition and Markets Authority (CMA)
- Equality and Human Rights Commission (EHRC)
- Financial Conduct Authority (FCA)
- Health and Safety Executive (HSE)
- Information Commissioner's Office (ICO)
- Legal Services Board (LSB)
- Medicines and Healthcare products Regulatory Agency (MHRA)
- Office for Nuclear Regulation (ONR)
- Office for Standards in Education, Children's Services and Skills (Ofsted)
- Office of Communications (Ofcom)
- Office of Gas and Electricity Markets (Ofgem)
- Office of Qualifications and Examinations Regulation (Ofqual)
Unsurprisingly, there is wide variation in how engaged the different regulators already are with AI. Some, particularly the four lead regulators for the Digital Regulators Cooperation Forum (DRCF), have already invested considerable time and resource in building their understanding of how this technology interfaces with their respective areas of jurisdiction. Others (such as Ofgem) are less advanced, with work just beginning. We understand that these reports will feed into a "gap analysis" of UK regulation and the challenges of AI.
We have reviewed the MHRA's report in detail (see our Insight). We also held a webinar exploring the financial services regulators' strategic approach to AI – you can catch up with the recording here.
See Data Law section for the details of the ICO's response.
Launch of the AI and Digital Hub
DSIT has announced the launch of the AI and Digital Hub run by the DRCF. This multi-regulator sandbox was promised in the UK government's AI white paper response in February 2024.
The hub is an online portal that will offer free informal advice to businesses on how regulation (across the remits of the four DRCF members) applies to their business proposals. Queries can be submitted online and a combined response will be given from the relevant regulators. The intention is that this will enable businesses to check for compliance and bring products to market more quickly. It is a one-year pilot.
Queries must concern products, services or processes that are:
- innovative – a new or adapted way of conducting an activity;
- focused on AI and/or digital;
- beneficial to consumers, businesses and/or the UK economy;
- within the remit of at least two DRCF regulators.
DRCF's views on fairness in AI
The DRCF has published its views on the principle of fairness – one of the five "high level" principles outlined in the UK government's AI white paper. This follows the government's initial guidance for regulators where it set out its expectations for the existing regulators to interpret and apply the principles within their respective regulatory remits. The EHRC has contributed to the discussion.
The main challenge to fairness identified by the DRCF is algorithmic bias in the adoption of AI. Regulators can struggle to identify whether algorithmic decision-making has been biased because of the indirect nature of bias, the complexity of the models and the connections between the different data points involved.
Different regulators have different powers in relation to fairness in AI. For example:
- the ICO has issued guidance on fairness as this is an important data protection principle;
- the FCA has various regulatory requirements concerning fairness that may apply when firms use AI in providing financial services;
- the CMA addresses fairness from the angle of consumer vulnerability and requiring effective competition, both of which were recently reflected in its foundation model review;
- however, Ofcom does not have direct powers to regulate fairness in AI.
This is a good example of how the approach of using existing UK regulators' powers to regulate AI creates an uneven enforcement patchwork.
ICO publishes fourth call for evidence on generative AI and data protection
The ICO has published its fourth consultation on generative AI and data protection. See this Regulatory Outlook for an overview of the previous ICO consultations on generative AI.
This call for evidence is focused on the ways organisations deploying generative AI provide individuals with an opportunity to exercise their rights to:
- be informed about whether their personal data is being processed;
- access a copy of their personal data;
- have information about them deleted, where this applies; and
- restrict or cease the use of their information, where this applies.
This call for evidence closes on 10 June 2024 and the responses can be provided using this form.
Automated Vehicles Act receives Royal Assent
The Automated Vehicles Act received Royal Assent on 20 May 2024. See this Regulatory Outlook for more details.
AI legislation on the horizon?
The Artificial Intelligence (Regulation) Bill has lapsed with the dissolution of Parliament ahead of the general election on 4 July 2024 (although it was very unlikely to become law). During the last debate in the House of Lords, a Labour peer commented that "A Labour government would urgently introduce binding regulation and establish a new regulatory innovation office for AI". We are monitoring for more information about the Labour Party's plans in this respect in the coming weeks. Our working assumption is that the Conservative Party would continue the current government's approach to regulating AI, if it were re-elected.
EU updates
AI Act timings
The corrected final text of the AI Act was adopted by the European Parliament in late April and on 21 May was also adopted by the Council of the EU. The final text is available here.
The final legislative step is publication of the Act in the Official Journal of the EU, which is expected in late May or early June, with the Act entering into force 20 days later, in late June or early July.
Our Insight explains the various deadlines for compliance, starting with the prohibitions on certain types of AI which will come into effect at the end of this year.
International updates
OECD updates AI Principles
The Organisation for Economic Co-operation and Development (OECD) has updated its AI Principles to deal more specifically with issues including privacy, intellectual property rights, AI safety and information integrity. Key changes include the following:
- if AI systems risk causing undue harm or show undesired behaviour, there should be robust mechanisms and safeguards to override, repair, and/or decommission them safely;
- mechanisms should be in place to strengthen information integrity while respecting freedom of expression;
- AI risks and accountability should be addressed throughout the AI system lifecycle by a responsible business approach, including co-operating with suppliers of AI knowledge and AI resources, AI system users, and other stakeholders;
- information about AI systems needed for transparency and responsible disclosure should be clear;
- environmental sustainability should be part of the responsible stewardship of AI; and
- jurisdictions should work together to promote interoperable governance and policy environments for AI.
AI Seoul Summit
Following the UK's Bletchley Park AI Safety summit in November 2023 (see this Regulatory Outlook), the second summit in the series took place in South Korea on 21 and 22 May 2024.
On the first day, South Korea was joined by representatives from France, Germany, Italy, Canada, the US, Australia, Japan, Singapore, the EU and the UK. The "Seoul Declaration for safe, innovative and inclusive AI" was agreed, alongside a statement committing to international collaboration on AI safety science. In addition, leading AI companies agreed to voluntary safety commitments in relation to frontier AI focused on responsible development and deployment of these AI systems.
On the second day, a wider group of 28 countries (including China) and the EU discussed AI safety but also looked at how to boost inclusivity and innovation (and published this statement). The group also considered how trustworthy and sustainable AI can boost productivity.
Ahead of the summit, the UK published its interim "International Scientific Report on the safety of advanced AI". The report seeks to contribute to the debate around AI safety rather than making recommendations, and is limited in scope to the current state of understanding of general-purpose AI and its risks. Its conclusions are essentially that much remains unknown and opinions vary widely, meaning that the future of general-purpose AI is very uncertain, but nothing is inevitable. The final report is planned to be published before the next AI summit in France.
And finally …
We have launched a series of Insights exploring the implications of the EU AI Act for life sciences and healthcare businesses. Over the coming months, the series will cover AI supply chains, product logistics, research and development, SMEs, compliance monitoring, liability, and more. Here are the Insights we have published so far:
- New EU AI Act is poised to shape the future for life sciences in Europe
- New AI legislation's reach extends into European healthcare
- High-risk AI systems in life sciences face strict regulatory scrutiny from new EU rules
- A new CE marking for European healthcare: when and why?
- Low-risk AI bears high stakes for digital health in new EU regulation