Regulatory Outlook

Artificial intelligence | UK Regulatory Outlook September 2024

Published on 25th Sep 2024

UK House of Lords Select Committee inquiry into scaling up AI and creative tech | AI Bill introduced into House of Lords | ICO's fifth call for evidence on generative AI


UK updates

UK House of Lords Select Committee inquiry into scaling up AI and creative tech

The UK House of Lords Communications and Digital Committee has launched an inquiry looking at the challenges startups face when seeking to scale up in AI and creative technology.

According to the committee, various initiatives have tried to boost the UK's scale-up potential in recent years, but there are still significant barriers preventing success. Instead of innovative businesses seeking to scale up in the UK, they are often focused on selling to foreign investors or moving abroad, and, as a result, the UK risks losing its competitive edge in strategic economic sectors.

The inquiry will therefore look at what actions are needed from government and industry over the next five years to maximise the economic potential of the sectors in question – technology in the creative industries and AI – and to secure competitive advantage. 

The deadline for giving evidence is 16 October 2024.

AI Bill introduced into House of Lords

Liberal Democrat peer Lord Clement-Jones has introduced a private members' bill entitled "Public Authority Algorithmic and Automated Decision-Making Systems Bill" to help regulate the use of algorithms and automation in public sector decision-making processes. See our Insight for an overview.

ICO's fifth call for evidence on generative AI

The UK Information Commissioner's Office (ICO) conducted its fifth (and final) call for evidence in relation to generative AI, which closed on 18 September 2024. See this Regulatory Outlook for an overview of the previous ICO consultations on generative AI.

This consultation focused on allocating controllership across the generative AI supply chain. It addressed the recommendation for the ICO to update its guidance on the allocation of accountability in relation to AI as a Service (AIaaS), to clarify when an organisation is a controller, joint controller or processor for processing activities relating to AIaaS.

The ICO has provided a summary of its current analysis on the topic as part of the call for evidence. While the consultation provided some indicative scenarios of processing activities, the ICO noted that its list of scenarios was non-exhaustive. The regulator sought evidence on additional processing activities and actors not included in the call, alongside the relevant allocation of accountability roles. Interestingly, the ICO indicates in its analysis that many organisations assume their relationship will be one of controller and processor, whereas in the ICO's view joint controllership will often be more appropriate.

Government AI Action Plan

The new science secretary, Peter Kyle, has commissioned an AI Opportunities Action Plan to identify how AI "can drive economic growth and deliver better outcomes for people across the country".

The Action Plan will consider how the UK can: create a globally competitive AI sector; adopt AI to enhance growth and productivity; use the technology to improve people's interactions with government; and strengthen the enablers of AI adoption, such as data, infrastructure, public procurement and regulatory reform. Matt Clifford (chair of ARIA, the UK's Advanced Research and Invention Agency, and organiser of the previous government's Bletchley Park AI Safety Summit) will lead development of the Action Plan.

UK AISI plans to build safety case 'sketches' for more advanced models than exist today

The UK AI Safety Institute (AISI) has announced its plans to conduct a series of collaborations and research projects to build safety case "sketches" for more advanced models than exist today, as part of its evaluations of frontier AI models. According to the UK government, a "safety case" is a "structured argument" which is supported by evidence and provides a case that a system is safe for a particular application in a specific environment.

AISI says that because the understanding of AI safety is still at an early stage, it is not possible to build full safety cases assessing the risks posed by models significantly more advanced than those available today, such as risks from loss of control and autonomy. AISI therefore proposes to build an understanding of what such safety cases might look like in the future by developing "sketches" that detail the arguments and evidence it expects for specific safety methods. AISI has already started research collaborations on safety case sketches with a variety of organisations.

ICO responds to Meta's announcement on training AI with user data

Following engagement with, and feedback from, the ICO, Meta has announced that it will begin training its generative AI models using content shared publicly by adults in the UK on its social media platforms, Facebook and Instagram. Meta emphasised that only public information will be used for this purpose (not, for example, users' private messages), and no information from accounts of users aged under 18. Users will be able to object to their data being processed for AI training at any time, using an objection form which the company says is now simpler, more prominent and easier to find.

The ICO has responded to the company’s announcement, emphasising the importance of informing users about the ways their personal data is used when training generative AI, and of having safeguards in place before processing the data for this training. Safeguards include providing users with a clear and simple way to object to the processing.

EU updates

AI Act: call for expression of interest to participate in drafting of code of practice and consultation on trustworthy general-purpose AI models

The European AI Office received strong expressions of interest from stakeholders wishing to participate in drawing up the first General-Purpose AI Code of Practice under the EU AI Act. The call for expressions of interest closed on 25 August 2024.

The code will set out further details on how providers of general-purpose AI models, and of general-purpose AI models with systemic risk, can ensure compliance with the EU AI Act (the rules applicable to these providers take effect 12 months after the entry into force of the EU AI Act, that is, on 2 August 2025). Providers will be able to "rely on" the code to demonstrate compliance with the EU AI Act.

The code will be prepared in an iterative drafting process which is briefly explained in a useful diagram included in the call for expressions of interest. The first stage comprised the call for expression of interest and a multi-stakeholder consultation. Through the multi-stakeholder consultation (which closed on 18 September 2024), the AI Office aimed to collect views and inputs on the topics covered by the code. The consultation's outcomes will form the basis for the initial draft of the code.

The call for expressions of interest will allow the AI Office to identify the stakeholders who will become part of a Code of Practice Plenary. The first plenary is scheduled for 30 September 2024.

The final version of the code should be presented by April 2025. After it is published, the AI Office and the AI Board will assess its adequacy. It will then be for the EU Commission to decide whether to approve and implement the code across the EU.

EU Commission publishes report on competition in generative AI and virtual worlds

The EU Commission has published a policy brief on competition in generative AI and virtual worlds. The report takes into account the responses to the Commission's consultation which closed in March 2024 (see this Regulatory Outlook), market investigations and collaboration with other competition authorities, including in the UK.

The report acknowledges the positive impact these technologies are set to have on many sectors, including manufacturing, retail, finance, education, energy and healthcare. However, it considers that their unprecedentedly rapid growth is also likely to pose various challenges, including for competition policy and enforcement. Among other things, the report explores competition dynamics, tendencies and potential concerns in generative AI-related markets, competition enforcement in this area, and the role that implementation of the Digital Markets Act will play.

EMA publishes guiding principles on use of LLMs in medicines regulation

The European Medicines Agency (EMA) has set out guiding principles for both users and organisations to ensure that general-purpose large language models (LLMs) are used safely and ethically in regulatory science and for medicines regulatory activities. A one-page infographic summary is also available.

User principles include:

  • taking appropriate measures to ensure safe input of data – for example, drafting prompts carefully and checking that no sensitive information or IP-protected content is input;
  • applying critical thinking and cross-checking to outputs – for example, it suggests redrafting output to "prevent the risk of copyright violation or plagiarism";
  • continuously learning how to use LLMs effectively; and
  • knowing who to raise concerns with and report issues to, including reporting severely biased or erroneous outputs.

The organisational principles include:

  • establishing governance that helps users use LLMs safely and responsibly – for example, setting policies around specific use cases and risk monitoring;
  • helping users maximise the value of LLMs through training; and
  • encouraging collaboration and the sharing of experiences.

While there is nothing particularly unexpected in the principles, companies active in these areas, and those who look to supply AI systems to them, should pay heed.

Mario Draghi's report on the future of EU competitiveness spotlights AI

The EU Commission tasked Mario Draghi with preparing a report setting out a vision for EU competitiveness. The wide-ranging report covers many areas, one area of focus being the importance of digital technology, including AI. According to Mr Draghi:

  • The EU must strive both to lead on the development of AI, and to integrate it into existing industries, in sectors such as robotics, defence, pharma, healthcare, transport and energy.
  • Major investment will be needed to meet the huge and increasing costs of training new foundational AI models and to provide the enormous computing power required.
  • Complexity, plus overlap and inconsistency between the EU AI Act and the EU GDPR, creates higher burdens for EU creators of cutting-edge AI and so undermines investment in EU AI.

Mr Draghi's report starkly highlights the potential of AI to transform economies, and should spur further investments in take-up of AI by Europe-based companies.

US updates

Californian legislators have had a busy summer putting AI laws onto the statute books, including:

  • Assembly Bill (AB) 2655, which obliges large online platforms to remove or label digitally created or modified deepfake content relating to elections, and allows election candidates, elected officials and certain others to seek injunctions against a platform that fails to comply.
  • AB 2355, which requires any AI-generated or altered political election advertisement to be labelled so that users are made aware of its deepfake nature.
  • AB 2839 aims to curb manipulated content that could harm a candidate's reputation or public confidence in an election’s outcome, with the exception of parody and satire. Under the legislation, candidates, election committees or elections officials (among others) could seek a court order to get deepfakes taken down.
  • AB 2602 requires entertainment studios to obtain appropriate permission from an actor before using AI to replicate their voice or image.
  • AB 1836, which prohibits companies from creating digital replicas of dead performers unless they have obtained relevant consent from the deceased performer's estate. Again, the law allows for injunctions to tackle non-compliance.

Beyond these newly enacted Californian AI laws, other bills await the governor's decision, including the most controversial of all: the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (usually known as SB 1047). This bill would require developers of large models to assess whether those models are "reasonably capable of causing or materially enabling a critical harm". Where risks are identified, developers would be obliged to put in place reasonable safeguards against them. There is as yet no firm indication of whether Governor Gavin Newsom will sign this contentious bill into law; he has until 30 September 2024 to decide.

International updates

Council of Europe Convention on AI, Human Rights, Democracy and the Rule of Law signed by EU, UK, US

The UK, EU and US became the first signatories to the Council of Europe's Framework Convention on AI and Human Rights, Democracy and the Rule of Law on 5 September 2024. See this Regulatory Outlook for details. The Convention has been drafted to align with the EU AI Act.

The EU Commission has signed the treaty on behalf of the EU and says that the Convention will be implemented in the EU by means of the EU AI Act. The next steps include the Commission's proposal for a Council decision to conclude the Convention and the consent of the European Parliament. According to the UK government, "once the treaty is ratified and brought into effect in the UK, existing laws and measures will be enhanced."


View the full Regulatory Outlook


Regulatory law affects all businesses.

Osborne Clarke’s updated Regulatory Outlook provides you with high level summaries of important forthcoming regulatory developments to help in-house lawyers, compliance professionals and directors navigate the fast-moving business compliance landscape in the UK.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
