
What is the latest on international, EU and UK initiatives to regulate artificial intelligence?

Published on 29th Nov 2023

After a flurry of autumn activity, we review the current position for AI regulation as 2024 approaches


Artificial intelligence (AI) regulation is a "quick, quick, slow" process. We have been discussing it, literally, for years, but policymakers continue to debate how best to go about it and lawmakers continue to wrestle over new laws. What progress has been made since our last round-up in June?

International initiatives

The G7 Hiroshima Process

On 30 October, as part of the Hiroshima AI Process, the Group of Seven (G7) nations (Canada, France, Germany, Italy, Japan, the UK and the USA, together with the European Union) finalised their 11 International Guiding Principles to govern AI. A voluntary code of conduct was also published. Both are stated to be "living documents" to be adapted over time.

The agreed 11 Guiding Principles underpin the code and, in turn, build on the Organisation for Economic Co-operation and Development (OECD) AI principles. In summary, they deal with aspects of risk management, transparency, good governance, research priorities and standardisation. The code of conduct adds detail and granularity to each of the principles about what is expected of developers. The G7 hopes that organisations developing advanced AI systems will sign up to it.

The code of conduct is the outcome of what were originally bilateral discussions between the EU and US through the EU-US Trade and Technology Council about a voluntary code of conduct. The European Commission has welcomed the code.

The UK AI Safety Summit

The UK government's international AI Safety Summit took place on 1 and 2 November. Its futuristic focus was the safety of "frontier AI" models, being "highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models".

Outcomes of note included:

  • Discussion of safety testing involving both governments and major AI developers. The UK government has published a paper on emerging processes for frontier AI safety, and many of the participants have published their AI safety policies.
  • Commissioning a "State of the Science" report on the capabilities and risks of frontier AI, to be published ahead of the next AI Safety Summit (in Korea in Spring 2024).
  • Announcement of the Bletchley Declaration, signed by the 28 countries and the EU in attendance, confirming their commitment to building a shared, scientific and evidence-based understanding of AI safety risks; developing risk-based policies to address those risks, along with evaluation metrics and safety testing; building public sector capability and scientific research; and encouraging collaboration and transparency by private actors.
  • Launch of the AI Safety Institute (growing out of the UK Frontier AI Taskforce), to focus on the most advanced current AI capabilities to ensure that future developments in AI do not catch the world off guard. Separately, the US announced its own US AI Safety Institute.

EU developments

AI Act crunch time

The three EU institutions responsible for new legislation – the Commission, the Council of the EU and the European Parliament – are in the final stages of agreeing the terms of the AI Act. However, significant problems have emerged in relation to regulating "foundation models". Foundation models are an advanced form of AI that has emerged since the original AI Act draft of April 2021. They are typically trained on vast, unlabelled, unstructured datasets and can perform tasks with many potential applications. They can therefore be used as a building block for AI systems with many different end-uses.

France, Germany and Italy are reported to oppose regulating foundation models at all, citing EU tech sovereignty concerns: they fear that regulation would stifle innovation and create barriers to building EU capability in a field currently led by US and Chinese businesses. The European Parliament, on the other hand, considers some regulation of foundation models to be essential. The Commission has proposed a compromise focused on transparency, technical documentation and information about training data for foundation models. The compromise text seeks to address Parliament's concern that downstream developers building a high-risk AI application incorporating a foundation model would otherwise struggle to meet their own AI Act obligations because of a lack of transparency around the underlying foundation model. Meanwhile, discussions continue between Member States.

Other outstanding areas still being negotiated include the final list of prohibited AI, the final list of high-risk AI, and overall governance of the AI Act to ensure coherence and consistency of interpretation and enforcement across the Member States.

Next June's European Parliament elections are putting significant time pressure on this process. There are effectively only a few weeks left to break the current deadlock so that the final drafting and the various legislative formalities can be completed before the current parliamentary term ends.

AI liability directive adrift?

Meanwhile, there has been minimal progress on the proposed EU legislation to facilitate private litigation in relation to harm caused by AI.

It now appears very unlikely that the AI liability directive will be in place before the end of the current EU parliamentary term. The newly elected parliament will appoint a new Commission, one of whose early tasks will be to decide which unfinished legislation to withdraw and which to continue.

UK developments

White paper response by the end of the year?

Following on from the consultation on its white paper on AI, published in July, the UK government is expected to publish its response by the end of this year. Some regulators have been forging ahead with policy development around AI (such as the Competition and Markets Authority's interim paper on foundation models), while others are believed to be soft-pedalling, pending clarity around government policy.

It does not appear that there will be any legislation to reinforce the high-level principles proposed in the white paper. Despite calls for a statutory duty on regulators to have regard to those principles, no such measure was included in the King's Speech earlier this month, which set out the government's legislative agenda.

A private member's bill on AI

Meanwhile, on 22 November, the Artificial Intelligence (Regulation) Bill was introduced into the UK Parliament. This is a private member's bill, so it does not have government backing and is unlikely to be given much parliamentary time. It is not expected to become law, but it is an interesting addition to the debate about how best to regulate AI.

The bill does not contradict the essential approach in the UK white paper of letting existing regulators lead, but proposes adding a layer of oversight through a new body called the AI Authority. The authority would ensure consistency between regulators, check for regulatory gaps and effectiveness, and monitor the horizon for signs that a change is needed. It would also appoint authorised independent AI auditors. Any business that "develops, deploys or uses AI" would be required to appoint an AI responsible officer and would be put under overarching obligations in relation to areas such as transparency, bias and discrimination, authorised use of third-party IP and data, and authorised third-party audits of their AI.

The AI Authority would be subject to a statutory duty to have regard to various principles that include, but are wider than, the white paper's high-level principles. The bill does not seek to impose such a duty on the wider body of UK regulators.

New AI regulatory framework for autonomous vehicles

In contrast both to the government's position that there will be no new UK AI law in the short to medium term and to the private member's bill, the government has introduced a new bill into the UK Parliament to provide a regulatory framework for AI-powered autonomous vehicles.

The Automated Vehicles Bill will create a definition of "self-driving" and a rigorous safety and authorisation regime for self-driving vehicles. The organisation responsible for how an automated vehicle drives will need to be authorised, and companies responsible for the operation of "no user in charge" (NUiC) vehicles will need to be licensed. The bill will remove liability from users of a NUiC vehicle for the driving-related aspects of its use, but not for non-driving aspects such as insurance, roadworthiness and use of seatbelts.

A UK election on the horizon

The UK will have a general election by the end of January 2025 at the latest, with most commentators now expecting it to take place in October 2024. If the current governing party were to be replaced, the likely replacement would be the Labour Party.

No formal policy has been announced by the Labour Party on AI legislation, although the shadow AI minister, Matt Rodda, has stated that it is a strategic priority. The party is reported in the UK press to be focused on rapidly introducing binding requirements on businesses developing powerful AI. These would include obligations to report before training models over a certain capability threshold and to carry out safety tests backed by independent oversight. At an event held at Osborne Clarke's offices on 15 November 2023, Rodda indicated that the Labour Party would look to provide "greater regulatory certainty" around AI governance, if elected.

Osborne Clarke comment

The international political will to "do something about AI" came to a head as October turned into November, with the G7 Hiroshima Process initiatives and a new US presidential executive order on AI being announced a few days ahead of the UK AI Safety Summit. However, the international initiatives are voluntary in nature, without enforcement mechanisms or sanctions.

The AI Act remains one of the primary examples around the world of new, AI-specific binding legislation, and is another instance of the EU seeking to set the regulatory gold standard. As such, it seems likely that the current negotiating stalemate will be overcome. However, commentators have expressed concerns that the intense time pressure may lead to significant compromises (some might say horse-trading) in order to save the legislation as a whole. Last-minute compromises could reduce legal certainty and clarity, and those involved in the negotiations fear that the result may be a regulatory framework that is not as well designed as it might have been with less time pressure.

Of course, AI is not unregulated while we await the AI Act. Enforcement activities (including sanctions) regarding AI issues will continue to be carried out by regulators including EU data protection authorities (as has occurred in Italy) and, increasingly, also by consumer protection authorities.

In the UK, there is an interesting juxtaposition between the government's "no new law" approach, the private member's bill's proposal of a slim layer of horizontal regulatory oversight and overarching obligations for businesses, and the Automated Vehicles Bill. The latter is an example of "slow-baked", tailored legislation, drawing on a detailed, four-year review of the required reform of existing rules by the Law Commissions of England and Wales, and of Scotland. It is, moreover, an example of regulation that should give investors the confidence to grow this new area of technology. The bill will need to be passed before the end of the current parliamentary session, the length of which is likely to be driven by the timing of the next UK general election.

If you would like to discuss any of these issues, do not hesitate to contact the authors or your usual Osborne Clarke contact.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
