Debate ranges wide on legal and regulatory questions for artificial intelligence
Published on 9th Apr 2021
With AI-specific legislation expected from the EU in the next month, what are the issues that are currently of most concern for businesses?
The power of artificial intelligence is such that its applications stretch far and wide across all commercial sectors, and a conversation about AI with almost any business will generate a fascinating discussion. Our recent client roundtable considering legal issues around the deployment of artificial intelligence (AI) was no exception. The roundtable was held to mark the publication of the second edition of Osborne Clarke partner John Buyers' book: "Artificial Intelligence – The Practical Legal Issues".
Ethical and societal issues
While AI is currently less powerful than some imagine, in other ways it is much more influential than may have been anticipated. One issue of ethical and societal concern is the extent to which AI is contributing to a gradual erosion of the value of truth. Fake news, deep fakes and recommendation-driven media bubbles are broad social challenges which AI can enable or compound – or help to address.
The importance of data
Discussions around AI inevitably turn to data as its raw material. Businesses wishing to exploit AI need to develop their "data consciousness", managing their own data effectively and ensuring that it is protected where possible by intellectual property rights and contractual provisions.
That said, the value of data is often realised through sharing it. Optimising a business's approach to data involves balancing the value generated by keeping it confidential – trade secrets or information which drives a competitive advantage are clear examples – against the value released by sharing it in a considered and carefully structured manner. Problems often arise where businesses start to exploit their data only to discover that the contractual frameworks around it are too restrictive and did not lay down a flexible basis for future use.
Discrimination and bias
The risks of poor data generating poor outcomes from an AI system are well known.
Biased data, in particular, can be a significant issue. Where bias skews outputs in a way which disadvantages people sharing a legally protected characteristic – such as race, gender or religion – it can amount to illegal discrimination. Care is needed in the employment context in particular. The "black box" nature of much AI decision-making means that there may be little or no human insight into why a particular decision was generated, which can create difficulties when an employer seeks to evidence that an apparently discriminatory decision is in fact objectively justified.
There is huge growth in the use of AI as a recruitment tool, whether in the form of robot interviewers, sentiment analysis of an interviewee's facial expressions, or the filtering of applications to identify the best candidates. AI is also driving work allocation systems, particularly for gig platforms, and undertaking performance appraisals. Clearly, biased outcomes in the human resources context are a significant concern and legal risk.
In addition, the fact that these tools will typically be processing personal data means that the General Data Protection Regulation and its UK equivalent will be engaged.
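To make the kind of skewed outcome described above more concrete, the short sketch below applies the well-known "four-fifths rule" of thumb to the outputs of a hypothetical CV-screening tool. It is purely illustrative: the group labels, figures and 0.8 threshold are assumptions for the example, not a statement of the legal test for discrimination under UK or EU law.

```python
# Illustrative only: a simple "four-fifths rule" check on the outputs of a
# hypothetical CV-screening tool. Groups, numbers and the 0.8 threshold are
# assumptions for this sketch, not the legal standard.

from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, shortlisted_bool) tuples."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in decisions:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flag(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest group's selection rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, shortlisted?)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 + \
           [("B", True)] * 20 + [("B", False)] * 80

print(selection_rates(outcomes))   # {'A': 0.4, 'B': 0.2}
print(four_fifths_flag(outcomes))  # {'A': False, 'B': True} -> group B flagged
```

A flag of this kind only surfaces a disparity; whether that disparity is objectively justified is a separate legal question of the sort discussed above.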
EU AI regulation expected
The European Commission is expected to publish proposed new legislation to regulate AI within the next few weeks, following on from its white paper of February 2020. The legislation is expected to take a "top down" approach and may outlaw or restrict some forms of AI, such as live facial recognition systems in public spaces.
This legislation will not, of course, apply in the UK, but whether the UK will decide to follow a similar path or take a more "mid-Atlantic" approach remains an open question. Many UK businesses trading into the EU will, in any case, need to take account of the EU framework.
Update: since writing this article, the proposed regulation has been published – read more here.
The UK position
The UK's approach to regulating AI to date has been to let market forces and general legal principles shape the landscape, unless and until it is clear that specific regulation is warranted. Because AI is a general-purpose technology, there is no lack of existing law which can bite on it and its applications. However, this is resulting in a somewhat piecemeal approach, with different regulators dealing with different issues and a risk of creating a patchwork of rules without a common vision.
Some regulators are investing considerable effort in understanding AI and other advanced technologies. The UK Competition and Markets Authority, for example, has recently published a research paper, "Algorithms: How they can reduce competition and harm consumers", which includes consideration of AI, along with a (now closed) call for evidence.
The liability challenge
The liability issues for AI, particularly where a harmful output was not anticipated by its developer or operator, are well rehearsed.
For businesses which provide tools and platform services to facilitate the development of AI by third parties, an additional layer of complexity arises. To what extent could the service provider be liable if the resulting third-party-developed AI causes harm? The chain of responsibility can be complex and there is not yet much clarity about how these questions might be resolved.
Certainty about liability is highly desirable. It crystallises risk and can drive more responsible behaviour from suppliers who know they will be on the financial hook for problems. The courts will, of course, deliver clarity over time as issues and uncertainties are litigated, but this is undoubtedly an area where properly considered legislation would save businesses time and money. The Commission's proposed EU regime for AI may help to provide clarity in this regard.
If you would like to discuss any of the issues raised in more detail, please do not hesitate to contact the authors or your usual Osborne Clarke contact.
This Insight reflects discussions at a client roundtable held to mark the publication of the second edition of John Buyers' book: "Artificial Intelligence – The Practical Legal Issues" (available here).