Artificial intelligence | UK Regulatory Outlook October 2024
Published on 30 October 2024
UK government launches the Regulatory Innovation Office | EU Commission – first plenary session and workshop on a Code of Practice for general-purpose AI | California will not enact controversial AI bill SB 1047
UK updates
UK government launches the Regulatory Innovation Office
The government has announced the launch of the new Regulatory Innovation Office (RIO), as promised in its manifesto. The RIO will initially support four technology areas, including:
- AI and digital technology in healthcare, to improve NHS efficiency and patient care; and
- connected and autonomous technology including autonomous vehicles such as delivery drones.
See more in our Insight on the topic.
FCA launches AI Lab
The Financial Conduct Authority (FCA) has launched an AI Lab, designed to help firms overcome challenges in building and implementing AI solutions. It will provide AI-related insights, discussions and case studies, helping the FCA understand AI risks and opportunities.
UK government's Industrial Strategy recognises the importance of AI
In its recent Industrial Strategy Green Paper, the government identified "Digital and Technologies", which covers AI, as one of the eight growth sectors it intends to focus on.
The strategy highlights the AI Opportunities Action Plan, led by Matt Clifford (see this Regulatory Outlook), that "will propose an ambitious approach to grow the AI sector and drive responsible adoption across the economy."
The government is consulting on the strategy, with responses invited by 24 November 2024.
EU updates
EU Commission – first plenary session and workshop on a Code of Practice for general-purpose AI
On 30 September 2024, the European Artificial Intelligence Office (AI Office) kicked off the process for the creation of the first Code of Practice for general-purpose AI (GPAI) models under the EU AI Act by convening the first plenary session.
The online session was attended by close to 1000 participants, including those from general-purpose AI model providers, downstream providers, industry, civil society, academia and independent experts. At the session, the AI Office presented a preliminary overview of the results of its multi-stakeholder consultation on the code of practice. See this Regulatory Outlook for background and details of the drafting process.
The first workshop on the code of practice took place on 23 October. Attendees, including GPAI model providers, discussed the content of the code with its chairs and vice-chairs. The first workshop covered:
- systemic risk assessment, technical mitigation and governance; and
- transparency and copyright-related rules.
The first draft code is expected in November 2024.
Study on AI liability directive proposes to transform it into a software liability regulation
The EU Parliament's Think Tank has published a complementary impact assessment on the Commission's proposal for a directive on adapting non-contractual civil liability rules to AI (the AI liability directive). The proposed directive only reached an early stage in the legislative process and was not enacted before the EU Parliament elections in June 2024. Whether or not to continue this legislation will be a decision for the new Commission.
The report outlines some recommendations, among other things, to:
- align the concepts and definitions in the proposal with those of the EU AI Act;
- classify generative AI systems under the new "high-impact" category since the current framework of the AI liability directive "does not adequately cover general-purpose AI systems";
- expand the scope of the AI liability directive "into a more comprehensive software liability instrument" (preferably a regulation) which covers not only AI, but all other types of software; and
- reconsider the implementation of a strict liability framework, particularly for high-risk AI systems.
US updates
California will not enact controversial AI bill SB 1047
California's governor, Gavin Newsom, has vetoed the state's controversial "Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act" (SB 1047), breaking his recent run of signing draft AI bills into state law (see this Regulatory Outlook for background). In a letter, the governor gave several reasons for not signing the act:
- It could give the public a false sense of security by focusing only on the most expensive and large-scale AI models, while ignoring smaller or more specialised AI models which could create dangers.
- It would not take into account deployment contexts, so would apply strict standards to basic functions, even if they were not being deployed into risky environments.
- The area is fast-moving, with "strategies and solutions for addressing the risk of catastrophic harm" described as "rapidly evolving". The governor noted that further legislation may be needed in future, but that it must be "based on empirical evidence and science".
At the same time, Mr Newsom has announced a range of related AI safety initiatives:
- Appointing experts on generative AI to help develop guardrails, focusing on developing an "empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks".
- Engaging academics to convene workers' representatives and the private sector to "explore approaches to use GenAI technology in the workplace".
- Expanding the state's work assessing potential threats from the use of generative AI to California's critical infrastructure, such as energy and water, "including those that could lead to mass casualty events".
US FTC takes action against five companies for allegedly using AI to deceive consumers
The US Federal Trade Commission (FTC) has taken action against five companies for using AI in illegal ways which harmed consumers, as part of its new law enforcement sweep, Operation AI Comply. The actions taken are against:
- DoNotPay, a company which claimed that its AI service was "the world's first robot lawyer" and could generate legal documents and replace lawyers. The FTC alleges that the company could not substantiate these claims and did not test whether its AI chatbot's output was equal to the level of a human lawyer.
- Ascend Ecom, an online business opportunity scheme that allegedly claimed its AI tools could help consumers earn thousands of dollars monthly through online stores. The FTC claims the scheme defrauded consumers of at least $25 million.
- Ecommerce Empire Builders, a business opportunity scheme that allegedly falsely claimed to help consumers build an "AI-powered Ecommerce Empire" and make millions by participating in training programmes or buying online storefronts. The FTC says the company could not substantiate these claims.
- Rytr, which marketed and sold an AI "writing assistant" offering various services, one of which generated consumer reviews. The FTC claims these AI-generated reviews often deceived potential consumers.
- FBA Machine, a business opportunity scheme that allegedly falsely promised consumers guaranteed income through AI-powered software for online storefronts. The FTC claims the promised earnings rarely materialised.