What does the UK's white paper on AI propose and will it work?
Published on 31st Mar 2023
Businesses have until 21 June to respond to the UK government's consultation on proposals for regulating AI
The UK's long-awaited white paper on artificial intelligence (AI), published on 29 March, has indicated that the government will issue a non-statutory definition of AI for regulatory purposes and a set of high-level overarching principles.
The white paper, "A pro-innovation approach to AI regulation", explains the government's proposal that existing regulators will work the principles into the application of their existing powers and provide the detail. No new laws are planned to regulate the technology, as was signalled in the government's policy statement of July 2022.
A central support function within government will monitor and evaluate the emerging regulatory landscape around AI, identifying any gaps in regulatory coverage and supporting coordination and collaboration between the regulators.
Defining AI
As flagged in the July policy statement, rather than craft a detailed definition of AI (as was done for the National Security and Investment Act 2021), the government will identify AI tools subject to regulation by their key characteristics: adaptivity and autonomy.
The definition makes no reference to particular types or classifications of AI (such as the prohibited and "high risk" categories of the EU's draft AI Act), but does refer to systems that are trained and that infer patterns across data. The clear inference is that the focus is on machine learning and deep learning systems and the outcomes that these systems can produce. The white paper anticipates that regulators will expand on this loose definition with a more detailed interpretation in their respective areas.
Principles for trustworthy AI
Five overarching principles to shape regulation are put forward for consultation:
- Safety, security and robustness. AI systems should be safe to use, robust and resilient in operation and secure in terms of cybersecurity. Technical standards may be needed, and regulators may wish to require regular testing. They may decide to issue guidance on how to achieve this objective.
- Appropriate transparency and explainability. Appropriate information needs to be provided to users about the AI system that they are using. Where possible, the explanation should cover how the AI system works and how it takes decisions. The white paper notes that this principle should also ensure that regulators can obtain sufficient information about an AI system to perform their functions. However, it notes that explainability can be very difficult to achieve from a technical perspective and acknowledges that transparency and explainability are not absolute requirements but should be applied proportionately to the risks in play. Again, technical standards may be useful in this area.
- Fairness. Since fairness is an objective in many areas of law and regulation, the government anticipates that joint guidance from regulators on this principle would be useful. Where an AI-driven decision could have a significant impact on someone (such as insurance, credit-scoring or in recruitment), it should be justifiable.
- Accountability and governance. There should be clear lines of accountability for an AI system and robust governance. The white paper recognises the complexity of some AI supply chains and notes that impact assessments and algorithmic audits can support the identification and mitigation of risks. It mentions elsewhere that the risks flowing from a decision not to adopt AI should also be included in an AI risk assessment. Once again, technical standards could provide benchmarks to shape best practice. The white paper places material emphasis on this principle as a way to ensure that businesses demonstrate that they are acting responsibly in their deployment of AI systems.
- Contestability and redress. It should be possible to contest a harmful decision or outcome generated by AI. Regulators should create clarity around their existing mechanisms for contestability and redress. The white paper does not anticipate creating any new legal rights or new options for redress.
By taking this principles-based approach, the government intends that the regulatory framework will be agile and proportionate. Initially, the principles will be issued on a non-statutory basis. The government is also consulting on whether, in due course, a statutory duty should be placed on regulators to have regard to them.
A complex regulatory patchwork
There are many regulators in the UK covering a wide variety of areas. Some oversee and police sector-specific frameworks, such as the Financial Conduct Authority, the nascent Digital Markets Unit within the Competition and Markets Authority (CMA), the Medicines and Healthcare products Regulatory Agency, media regulator Ofcom and the utilities regulators. Others deal with specific areas of law on a cross-sector basis, such as the CMA itself, the Information Commissioner's Office, the Advertising Standards Authority and the Equality and Human Rights Commission.
Simply understanding the patchwork of regulatory powers that could be used to address AI is a significant task, requiring an evaluation for each regulator of:
- the perimeter of its jurisdiction;
- the extent to which it is able to impose obligations on those within its remit without further legislation;
- whether it can act ex ante, to shape conduct before anything has gone wrong, or only ex post, reacting to harm or an infringement;
- the extent of enforcement tools, corrective measures and sanctions at its disposal; and
- the form of any redress that it can offer to injured parties.
The white paper nods to various regimes but does not provide an overview of this complicated and uneven patchwork of regulatory powers. There is no indication that this landscape has been "mapped" to any degree of detail.
It points out, moreover, that foundation models (such as sophisticated chatbots, translation engines or image generators) are even more complex to map to the existing regulatory system because their core function can be put to numerous different applications across numerous sectors.
Central monitoring
Creating cohesion and coherence across this complicated landscape will require a great deal of effort. Many regulators are accustomed to overlaps in jurisdiction and the need to collaborate and align policy and action. But AI is a particularly complicated field because it is pervasive across all aspects of the economy, generating risks from the slight (writing a poem or powering a photo filter) to the grave (disinformation, malware, deepfakes and so on). Moreover, as the white paper acknowledges, there are very different levels of skill, scale, resourcing and organisational capability across the regulators that might be called on to take action in relation to AI.
Central support from government is envisaged in areas including: monitoring and evaluating the new regime; identifying barriers (such as gaps in regulatory remits or insufficient powers and capabilities) and inconsistencies; horizon-scanning for emerging AI risks (including monitoring foundation models); developing sandboxes; promoting education and awareness among businesses and consumers around AI regulation and risk; and ensuring interoperability and alignment with international regimes.
However, there is no indication yet of which body will take this role. It appears that it will sit within government initially but may move to an independent body in due course – perhaps an AI-specific UK regulator is not entirely off the cards in the longer term.
AI liability
The white paper does not seek to take action in relation to the allocation of liability for AI. It references the complexity of the AI supply chain and acknowledges that current legal frameworks, combined with implementation of the high-level AI principles, may end up allocating legal responsibility for an AI tool in a way that is not fair or effective. For the time being, the plan is simply to monitor developments. Legislation is not ruled out in the longer term if the allocation of liability requires correction in any particular area.
As regards the ability of an injured party to sue for damages (or other remedies) for harm caused by an AI system, the white paper is silent. In the EU, in contrast, the proposed AI liability directive aims to facilitate the exercise of private rights of action. The directive would partially reverse the burden of proof in relation to causation in order to address the complexity of such litigation.
Wait and see
The white paper proposes an outcomes-based approach completely different from the EU's AI Act (a complex "top down" piece of regulation, still in the legislative process after nearly two years). There are no outright bans proposed for the UK, no detailed compliance requirements on data, documentation, testing and certification, no new regulator and no new powers to impose fines or take action. Instead, the UK is planning a "bottom up" approach, devolved to regulators.
The risk with this approach is that a problematic AI system will need to fall within a regulator's jurisdiction before anything can be done, and the regulator will then need the right enforcement powers to take decisive and effective action against the harm and to offer redress to the harmed party. As regards sanctions and redress in particular, some regulators have hefty powers to impose deterrent sanctions and to correct harm, but many do not. As noted, this uneven landscape does not appear to have been mapped: it seems that the government is simply going to wait and see what happens.
What do businesses need to do?
Fundamentally, the lack of any detail at this stage, the risk of overlapping regulatory jurisdictions and the need for coordination together mean that there is little clarity on what is expected. Standards and AI risk assessments are both likely to play an important part in this field, as are governance frameworks.
For the time being, businesses developing and using AI need to make sure that they understand existing and developing guidance on AI from regulators relevant to their business (whether sectoral regulators or issue-specific regulators such as the Information Commissioner's Office or Competition and Markets Authority). Those intending to supply AI, or products, services or systems that incorporate AI, into the EU may choose to prioritise their preparation for compliance with the EU's AI Act, where the requirements, enforcement powers and potential sanctions are much clearer (even if not yet finalised).
There is nothing in the white paper to suggest the EU's ambition of setting the global gold standard for AI regulation is under threat. However, both regimes will need to be monitored as the post-Brexit divergence in the UK approach is significant.
The consultation on the white paper closes on 21 June 2023.
If you would like support in responding to the white paper or would like to discuss any of the points raised above in more detail, please do not hesitate to contact the authors or your usual Osborne Clarke contact.