Artificial Intelligence | UK Regulatory Outlook November 2024
Published on 27 November 2024
ICO publishes recommendations on use of AI tools in recruitment | EU AI Act: first draft of the General-Purpose AI Code of Practice published
UK Updates
ICO publishes recommendations on use of AI tools in recruitment
The Information Commissioner's Office (ICO) has published recommendations developed following audit engagements with developers and providers of AI sourcing, screening and selection tools used in recruitment. The recommendations apply to both AI providers and recruiters, and include case studies and examples of good practice.
The key recommendations cover:
- Fairness of processing.
- Transparency and explainability.
- Data minimisation and purpose limitation.
- Data protection impact assessments.
- Drawing the line between controller/joint controller and processor roles.
- Explicit written processing instructions.
- Appropriate lawful basis.
The ICO has also published key questions to ask before procuring an AI tool for recruitment.
UK government creates AI risk resource for businesses
The government has launched a new AI assurance tool, AI Management Essentials (AIME), still in draft, to help businesses assess and mitigate AI risks. The idea is to provide a one-stop shop for information on the actions businesses can take to identify and mitigate the potential risks and harms posed by AI.
Specifically, the platform aims to collect together guidance and practical resources for businesses to use to carry out impact assessments and evaluations of new AI technologies, and review the data underpinning machine learning algorithms to check for bias.
AIME is subject to public consultation, which is open until 29 January 2025.
UK government experiments with AI chatbot to help people set up small businesses
The government is working on its own AI-powered chatbot, GOV.UK Chat, to allow people to obtain automated, personalised advice on the business rules, tax and support applicable to them. The chatbot is linked from 30 of GOV.UK's business pages. The project is currently at the trial stage and limited to 15,000 business users who will be able to ask it questions. The government will use the trial's results to shape further development of the chatbot, with the next step being potential larger-scale testing.
UK government releases Responsible Innovation tool
The Model for Responsible Innovation is described as a practical tool intended to help teams innovate responsibly with AI and data. It can be used by teams in the public sector, and by private sector teams developing AI intended for a public sector purpose or with a "significant societal footprint". Under the model, experts from the Department for Science, Innovation and Technology (DSIT) run red-teaming workshops with participants, mapping the systems under development against DSIT's model to identify AI risks and priorities.
Ofcom explains how the Online Safety Act 2023 will apply to generative AI and chatbots
See Digital Regulation section.
New inquiry into links between algorithms, generative AI and the spread of harmful content online
See Digital Regulation section.
ASA updates on its regulation of AI use in advertising
See Advertising and marketing section.
EU updates
EU AI Act: first draft of the General-Purpose AI Code of Practice published
This draft Code of Practice, prepared by the panel of independent experts, has now been published. The Commission's publication notes stress that this is very much a first draft, and that drafting will be an iterative process, with three further rounds of input and drafting envisaged. See this Regulatory Outlook for details of the drafting process.
Even at this early stage, the draft contains a fair amount of detail, including on:
- Content of policies for complying with copyright laws (including details of steps taken to ensure that training does not involve copyright infringement).
- The type of information required to be provided in acceptable use policies.
- The types of detailed information to be included to comply with documentation and transparency requirements, in particular about model creation, training, testing and risks.
- Factors considered as likely to mean that an AI system is one which creates a systemic risk.
- Safety and security frameworks.
- Continuous system evaluation, risk assessment, and risk mitigation.
The Commission also published a Q&A explaining the approach taken in the draft code. Interestingly, it emphasised the AI Act's recognition that the compliance burden should be lighter for small and medium-sized enterprises (SMEs), with simplified ways of complying that are not so costly as to discourage SMEs from using general-purpose AI models.
The plenary participants (the 1,000 or so companies and organisations that signed up) have until 28 November to submit comments.
EU AI Act: consultation on prohibited AI and AI System definition
The EU Commission's AI Office has published a consultation on prohibited AI practices and the definition of "AI system", inviting participants to provide practical examples of AI systems that may fall within the various categories, and to explain which areas they need guidance on and why. Responses will feed into the process of drafting guidelines, which are due to be published in early 2025.
The consultation is open for responses until 11 December 2024 and early submissions are "strongly encouraged".
EU AI Act: EU Commission's brief on the state of play of the Act's harmonised standards
The European Commission's Joint Research Centre has published a policy brief on the state of play of the harmonised standards for the EU AI Act. The brief sets out the key characteristics expected of the upcoming AI standards, which are currently being drafted by the European standardisation organisations, led by CEN-CENELEC.
The EU AI Act sets out the essential requirements that high-risk AI systems must meet to guarantee their safety. The technical standards will define concrete approaches that can be adopted to meet these requirements in practice and support AI providers with compliance. When finalised and published in the Official Journal of the EU, the harmonised standards will provide a presumption of conformity with the relevant legal obligations for AI systems developed in accordance with them. The provisions for high-risk systems under the AI Act will start to apply from August 2026 (see our Insight).