Artificial Intelligence | How much attention should you pay to legal risks and data governance?

Published on 18th May 2020

A sharp focus on the ethics of AI is generating impactful debates on how to regulate it, who should be liable for it, and how much care must be taken over the data fed into it. But how much attention is paid to legal risk by businesses that are actually using AI?

We asked the audience at our recent AI fundamentals webinar whether their businesses were currently using AI tools. Some 56% reported that they were, around 29% were not, and 15% were unsure. This was probably an unfair question, since anyone with a smartphone in their pocket is using AI. But there is no doubt that the use of AI by businesses is becoming more widespread. The current imperative to rethink our ways of working and to reduce physical interaction and proximity in the workplace is drawing attention to automation, and boosting the commercial incentives to invest in digital transformation.

Further insight into the deployment of AI tools is provided by the recent report "AI Adoption in the Enterprise 2020" from technology learning company O'Reilly, based on a survey conducted in December 2019. It covers three particularly interesting issues:

  • what AI is being used for;
  • the types of risks that are checked for; and
  • the extent to which data governance is being built into AI systems.

Where is AI being used?

The most common uses for AI were behind the scenes, in relation to research and development, and in IT systems. Front of house, almost 30% were using AI for customer services. The report does not elaborate on what it includes in the various categories, but "customer services" is likely to include chat bots – a "Can I help?" pop-up is now a familiar feature on many websites.

Marketing, advertising and PR was another popular area, with a little over 20% of respondents using AI for these functions. Individual customer profiles driving personalised offers or recommendations are a strong use case in this context. Data about a person's past activities can be combined with their current location, to anticipate what they might respond well to, or might be interested in purchasing next.
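
By way of illustration, the sketch below (in Python) shows how this kind of personalisation can work mechanically: a toy scoring function that blends past-purchase affinity with current location to rank offers. The categories, weights and data are invented for the example, not drawn from the report.

    # Illustrative only: rank offers by combining purchase-history affinity
    # with location relevance. All names, weights and data are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Offer:
        name: str
        category: str
        store_city: str

    def score_offer(offer, purchase_counts, current_city):
        # How often has this customer bought in the offer's category?
        affinity = purchase_counts.get(offer.category, 0)
        # Boost offers that can be redeemed near the customer's current location.
        locality = 1.0 if offer.store_city == current_city else 0.2
        return affinity * locality

    history = {"running_shoes": 5, "coffee": 2}   # past activity
    offers = [
        Offer("10% off trail shoes", "running_shoes", "Bristol"),
        Offer("Free espresso", "coffee", "London"),
    ]
    best = max(offers, key=lambda o: score_offer(o, history, current_city="Bristol"))
    print(best.name)  # -> 10% off trail shoes

Real recommendation engines learn such weights from data rather than hard-coding them, but the principle – combining behavioural history with contextual signals – is the same.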

Management of operations, facilities and fleets completes the top five reported uses. AI, combined with Internet of Things sensors and data collection, can be used to optimise asset management. Wear and tear in machinery can be monitored and compared with past patterns of deterioration to schedule preventative replacement of parts. The ordering of tasks can be automated and optimised. Unusual dips in output can be spotted by comparing actual figures with what was predicted.
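
As a simple sketch of the "unusual dips" idea, the snippet below compares each new output reading with the historical distribution and flags large deviations. The data and the three-standard-deviation threshold are invented for illustration; production systems would use richer models of normal behaviour.

    # Minimal anomaly check: flag readings that deviate sharply from history.
    # Data and threshold are hypothetical.
    import statistics

    def is_anomalous(history, reading, z_threshold=3.0):
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        z = (reading - mean) / stdev  # how many standard deviations from normal?
        return abs(z) > z_threshold

    hourly_output = [101.2, 99.8, 100.5, 100.1, 99.6, 100.9, 100.3, 99.9]
    print(is_anomalous(hourly_output, 100.4))  # False: normal variation
    print(is_anomalous(hourly_output, 82.0))   # True: an unusual dip worth investigating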

Interestingly, the lowest reported use of AI was in relation to legal tasks. AI-based tools for document review, for example, have been available for some time and continue to grow in scope and sophistication. However, they are not always cost-effective to deploy, and remain an option rather than a default.

Risk management and compliance by design?

Although specific regulation for AI is still a developing area, well-established law is often clearly in play. Nevertheless, the survey found a strikingly low level of concern about issues that clearly generate legal risk.

The risk of bias (typically derived from bias in the training data) leading to unfair, and potentially illegal, discriminatory outcomes is well known; yet fewer than half of respondents to the O'Reilly survey checked for it. Privacy is another highly developed area of law, yet barely 40% of respondents tested their systems for privacy risks. This is despite the heavy fines in the EU for non-compliance with data rules, and the expectation of adherence to the overarching principle of "privacy by design". Even security vulnerabilities were checked for by only around a third of respondents.
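
To give a flavour of what checking for bias can involve, the sketch below implements one common test – comparing the rate of favourable outcomes across groups, sometimes called demographic parity. The group labels, data and 0.8 threshold (the so-called "four-fifths rule") are illustrative assumptions, not a statement of any legal standard.

    # Illustrative demographic-parity check: compare favourable-outcome rates
    # across groups. Groups, data and threshold are hypothetical.
    def selection_rates(outcomes):
        """outcomes: (group, decision) pairs, where decision 1 = favourable."""
        totals, favourable = {}, {}
        for group, decision in outcomes:
            totals[group] = totals.get(group, 0) + 1
            favourable[group] = favourable.get(group, 0) + decision
        return {g: favourable[g] / totals[g] for g in totals}

    results = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
               ("B", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = selection_rates(results)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)  # {'A': 0.75, 'B': 0.25}
    print("potential disparity" if ratio < 0.8 else "within threshold")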

The most commonly checked-for risk was unexpected outcomes or predictions. Again, there is a legal angle here: even legitimate "edge case" outcomes can create uncertainty over where liability falls if they were not foreseeable.
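
A minimal version of such a check might simply flag any prediction that falls outside the range observed during validation and route it for human review. The bounds and values below are invented for illustration.

    # Hypothetical guard against unexpected predictions: anything outside the
    # range seen during validation is flagged rather than acted on.
    def unexpected(prediction, validated_min, validated_max):
        return not (validated_min <= prediction <= validated_max)

    # Suppose validation predictions ranged from 10.0 to 250.0.
    for p in [42.0, 980.0]:
        if unexpected(p, 10.0, 250.0):
            print(f"prediction {p}: outside validated range - route for human review")
        else:
            print(f"prediction {p}: accepted")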

Data governance

The report also discusses the extent to which businesses have formal data governance policies to back up their AI projects, observing that data governance is typically treated as "an additive rather than an essential ingredient". A quarter of respondents expected to have a data governance policy in place by 2021, and a further third within three years.

Data governance is a wider concept than data privacy compliance. It matters because AI systems depend on data, and their outputs are shaped by it. The report makes the point that data governance (including provenance, lineage and consistent definitions) feeds into transparency around how an AI tool functions and what might have caused particular outputs – and so into the accountability and safety of the system. It goes on to note that businesses often don't appreciate the importance of good, well-managed data resources for successful AI tools until they have direct experience of the issues that can arise and of the extent to which, as the report puts it, "data quality can create AI woes".
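
As a sketch of what recording provenance and lineage can look like in practice, the snippet below fingerprints a training dataset and logs where it came from and how it was processed, so that a model's outputs can later be traced back to the data that shaped them. The field names, file contents and sources are illustrative; real governance tooling captures far more.

    # Illustrative provenance record for a training dataset. Field names,
    # file contents and sources are hypothetical.
    import hashlib, json
    from datetime import datetime, timezone

    def provenance_record(path, source, transformations):
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()  # fingerprint the exact data used
        return {
            "dataset": path,
            "sha256": digest,
            "source": source,                  # where the data came from
            "lineage": transformations,        # processing steps, in order
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }

    # Create a tiny stand-in dataset so the example is self-contained.
    with open("training.csv", "w") as f:
        f.write("age,outcome\n34,1\n29,0\n")

    record = provenance_record("training.csv", source="CRM export, April 2020",
                               transformations=["deduplicated", "emails anonymised"])
    print(json.dumps(record, indent=2))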

Osborne Clarke comment

Where emerging technology doesn't fit neatly within existing legal frameworks, this can be seen as a risk or an opportunity. The problem with taking a relaxed approach is that retrofitting compliance can be costly or even impossible.

Building regulatory considerations into the design of a tool often proves prudent and cheaper in the longer term. Where legal obligations are clear – for example, in relation to data or discrimination – they can be taken into account. Where there is uncertainty, well informed legal advice based on an understanding of current policy-makers' thinking can help to find a commercially acceptable level of risk.

The level of attention given to these risks may change when the legal position around liability for AI tools becomes clearer. As we have previously discussed, the European Commission plans new legislation intended to map out where liability falls, in order to ensure that those harmed as a consequence of an AI tool have effective remedies which are not overly onerous or expensive to obtain.

We can ensure that legal risk management and regulatory compliance are worked into your data strategy and governance, and designed into your AI tools. If you’d like to discuss these issues further, please speak to one of the authors or your usual Osborne Clarke contact.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
