Regulating unreality | The legal implications of "deepfakes"

Published on 15th Aug 2019

Deepfakes represent a major challenge in tackling fake news and upholding trust in the truth. But they also present unique compliance risks for businesses. How can existing legal and practical tools be used to combat deepfakes, and what changes may be needed to regulate unreality?

What are deepfakes?

Deepfakes are videos or audio clips which have been manipulated to make it appear that someone is doing or saying something which they did not in fact do or say. An AI tool is trained on images of a target person's face and can then superimpose that face onto footage of someone else. The technique first emerged mainly in relation to pornography, combining a celebrity's face with someone else's body. It has since spread into mainstream areas, including fake content featuring political figures. Tools to generate deepfakes are readily available, and programming expertise is no longer needed to create them.

Although this is not yet a widespread problem in the business context, the societal and legal challenges that deepfakes and synthetic reality pose are clear and not readily resolved.

Recently, we were delighted to welcome clients to hear Lilian Edwards (Professor of Law, Innovation and Society at Newcastle University) speak about artificial intelligence-generated deepfakes and "synthetic reality" at an event hosted by the Alan Turing Institute and the Barbican. We have summarised below some of the key themes from the evening's discussions.

Deepfakes in the business context

The best-known examples of deepfakes tend to feature well-known faces, but the same techniques could be used to synthesise corroborating evidence for false allegations against private individuals, to sabotage professional reputations, or to generate fake authorisations or instructions. This could be a concern in the business environment if, for example, employees face allegations based on apparently strong but fabricated evidence of misconduct, or if people are deceived into believing that they have been given an authorisation or instruction by senior management.

Tackling the problem with existing law

The existing law offers some tools for controlling or outlawing this material – for example, the original material might be protected by copyright, the use of personal information might mean that GDPR rights are in play, the synthetic content might be defamatory, or the requirements on online platforms to take down illegal content might be engaged. The proposed new legislation to create a duty of care in the UK around "online harms" may also apply, and the UK government is looking at the issue of "fake news" more generally.

On the other hand, it is not always clear how these rules – or the exceptions to them – apply in a specific case, particularly as regards platforms using algorithms to detect inappropriate content. Moreover, enforcing such rights generally requires an individual or organisation to take action itself, and can only deal with specific instances rather than addressing the problem as a whole.

Not all deepfakes are a problem

An outright ban on the tools used to create deepfakes would be unworkable, as it would stop legitimate uses of these techniques – and would come into conflict with freedom of expression norms. Some deepfakes are created as political satire, while the entertainment industry has long created synthetic content as special effects for film and television. Filters are often used on social media to synthesise or alter images – whether to add cat features to a face or to whittle a waistline.

One possibility for legislation (particularly for any criminal prosecution) could be to require proof of malicious intent or an intention to deceive, or to limit the offence to content which is "inherently humiliating". However, such concepts would be difficult, if not impossible, to build into technical or automated solutions for spotting and taking down deepfake content.

Deepfakes and legal fictions

Of course, false evidence is not a new phenomenon in the law, nor is evidence whose veracity is uncertain. Legal fictions and rebuttable presumptions can be used in these situations: X is taken to be true unless evidence is presented to show that it is not. Other legal techniques include having a hierarchy of sources of evidence, or a requirement for corroboration. There is scope to use these techniques when faced with "evidence" which might in fact have been synthesised, or which represents an altered version of the true content.

Can we use technology to tackle this?

Technical solutions have been proposed to deal with faked content, including an indelible digital "watermark" of some form on the original material, or digital fingerprinting of videos. Such an approach would probably result in an ongoing technological arms race, with one side working to strengthen protections and the other to undermine them. Another approach would be to use digital forensics to probe the provenance of content – this is likely to become ever more important, although it too could descend into a race to stay ahead of the other side.
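To make the fingerprinting idea concrete, the minimal Python sketch below registers a cryptographic hash of an authenticated original so that circulating copies can be checked against it. This is an illustrative simplification of ours, not a description of any deployed system: the function names and registry structure are assumptions, and real fingerprinting services typically use perceptual hashes that survive re-encoding, which a cryptographic hash does not.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return a SHA-256 digest of a media file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical registry mapping fingerprints of authenticated originals
# to a description of their source.
registry: dict[str, str] = {}

def register_original(path: Path, source: str) -> None:
    """Record the fingerprint of an authenticated original."""
    registry[fingerprint(path)] = source

def check_copy(path: Path) -> str:
    """Report whether a circulating file matches a registered original."""
    return registry.get(fingerprint(path), "no registered original matches")
```

The limitation is instructive: a cryptographic hash changes if a single byte changes, so even an innocent re-encode breaks the match – which is precisely why the more robust perceptual techniques become the focus of the arms race described above.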

Another solution could be to keep an authenticated "true" version of everything – although this would carry an alarming risk of facilitating pervasive surveillance structures, not to mention potential disputes about what should be badged and locked in as the true version. The "balance of harms" approach is significant here: the deepfakes problem is not considered to be sufficiently widespread yet to justify making inroads into privacy and other important rights.

Using good practice to make it harder to be duped

In some cases – particularly in a business context – fake content could be countered with well-designed compliance and authorisation procedures. For example, public recordings of CEOs have been used to replicate their speech patterns, and money has been stolen via phone instructions, apparently from the CEO, directing the finance team to transfer funds. Dual authorisation protocols – and a compliance culture that does not tolerate them being overridden – might have prevented these thefts.
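As a minimal sketch of what such a control might look like in software – assuming a simple payments workflow, with the threshold, names and data structures purely illustrative – a dual authorisation rule means that no single instruction, however convincing, can release funds:

```python
from dataclasses import dataclass, field

DUAL_AUTH_THRESHOLD = 10_000  # hypothetical policy limit

@dataclass
class PaymentRequest:
    amount: int
    payee: str
    approvals: set[str] = field(default_factory=set)

def approve(request: PaymentRequest, approver_id: str) -> None:
    """Record an approval; using a set means repeat approvals
    by the same person do not count twice."""
    request.approvals.add(approver_id)

def can_execute(request: PaymentRequest) -> bool:
    """Above the threshold, require two distinct approvers, so a single
    (possibly spoofed) instruction can never release funds on its own."""
    required = 2 if request.amount >= DUAL_AUTH_THRESHOLD else 1
    return len(request.approvals) >= required

# Example: a convincing "CEO" phone call persuades one person, but the
# transfer still cannot proceed until an independent approver signs off.
req = PaymentRequest(amount=250_000, payee="Example Supplier Ltd")
approve(req, "finance.manager")
assert not can_execute(req)
approve(req, "second.signatory")
assert can_execute(req)
```

The design point is that the control is structural: it does not need to detect that the voice on the phone is fake, it simply removes the single point of failure that the fake exploits.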

It is worth keeping in mind that the GDPR requires organisations to have "appropriate technical and organisational measures" to safeguard personal data, including to prevent unauthorised access and changes. Businesses may soon need to show that such measures are designed to be robust in the face of sophisticated fakery.

Wider societal challenges – and solutions?

One of the knock-on effects of the increasing prevalence of false material is that it becomes easier to dismiss genuine content – "plausible deniability". It is a concern for society as a whole if nothing can be trusted, everything is questioned, and it becomes easy to claim that true material is suspect. It was suggested that this could lead people to turn back to paid-for quality journalism – although there is also a risk that personalisation and the "media bubble" could result in people paying a premium for "their" truth, with the attendant risks of polarisation and skewed understanding.

The wider issue of societal norms is important here. As with much advanced technology, there is not yet a settled consensus in society about the rights and wrongs of using these digital tools. As the UK government recently noted in its White Paper on "Regulation for the Fourth Industrial Revolution", advanced technology often requires broader dialogue and engagement than the usual public consultation processes. It may be that part of the effort to tackle the problem of deepfakes and the risks of a "post-truth" culture should be to seek to build a consensus that they are wrong.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
