
Debate continues over the pros and cons of regulating artificial intelligence

Published on 27th Jul 2021

What are the issues of most concern for businesses in the EU Commission's recently published AI Act proposals?


Osborne Clarke's second roundtable on artificial intelligence (AI) explored the reactions of a range of businesses to the European Commission's proposals for regulating AI (summarised here). Our virtual gathering included representatives from the UK, Netherlands and USA, stretching across the automotive, energy, education, professional services and tech sectors. As with our first AI roundtable, the discussion ranged far and wide.

Simple approach v complex reality

A notable difficulty with the Commission's draft regulation on AI (as proposed, its "AI Act") is that it assumes that an end-to-end "provider" of an AI system can be identified and fixed with liability. The AI Act defines the "provider" as the person or organisation that developed the system or had it developed.

However, the AI industry is anything but simple. While some AI is developed from scratch by a single organisation, it is very common for modular development tools (so-called MLOps) to be used, offering ready-made components for particular functions that can be combined into a whole. Many of the major cloud service providers offer suites of AI tools or AI components as part of their "as a Service" offerings. Components are also available on an open-source basis. These tools and building blocks can be stand-alone items, or might be acquired alongside development consultancy services. In short, it cannot be assumed that there is a single developer, or even that any single person has developed the AI on an end-to-end basis.
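By way of illustration (a minimal, hypothetical sketch assuming the open-source Hugging Face transformers library, rather than any particular vendor's stack), even a very simple deployed system can combine a pre-trained model built and published by third parties with in-house integration code:

```python
# Illustrative only: a business assembling an AI system from third-party parts.
# Assumes the open-source Hugging Face `transformers` library is installed;
# the underlying model was trained and published by other organisations.
from transformers import pipeline

# Component 1: a pre-trained model developed by a third party.
classifier = pipeline("sentiment-analysis")

# Component 2: hypothetical in-house business logic layered on top.
def triage_customer_message(message: str) -> str:
    result = classifier(message)[0]
    # Route strongly negative messages to a human agent; automate the rest.
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        return "escalate-to-human"
    return "auto-respond"

print(triage_customer_message("Your product broke after one day."))
```

Here the model was trained by one party, published by another and integrated by a third; it is not obvious which of them the draft regulation's single-"provider" model would fix with responsibility.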

This is an important point because the draft legislation places responsibility for regulatory compliance on the shoulders of "the provider". There is a risk that the oversimplified approach of the draft legislation will cause duplication of compliance, rather than an effective and proportionate distribution of responsibility through the AI supply chain. Essentially, if the regulatory framework does not map to the underlying infrastructure of the sector, then the overlaid services will not be able to achieve compliance easily.

Bifurcation of compliance and liability

The complexities of compliance have not been helped by the continuing uncertainty over liability for AI – meaning, more specifically, machine learning and deep learning systems. The software for such systems is not hand-coded line by line. Instead, algorithmic models configure themselves, continuously self-adjusting by reference to the data that they parse. This means that the developers of such systems do not necessarily know why input A has generated output B. In liability terms, this gives rise to the so-called "black box" problem. Legal liability typically depends on there being a foreseeable chain of causation between the fault and the harm caused – in simple terms, an identifiable link which enables liability to be traced back to a particular actor. The black-box nature of much deep-learning AI can frustrate the establishment of these chains of causation and, in turn, the identification of a culpable party.
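To make the point concrete (an illustrative sketch assuming the scikit-learn library and toy data, not a depiction of any specific system), even a small trained model encodes its behaviour in learned numerical weights rather than in rules a developer wrote:

```python
# Illustrative only: a tiny neural network trained on data rather than hand-coded.
# Assumes scikit-learn; the fitted weights, not any explicit rule written by a
# developer, determine why input A produces output B.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

prediction = model.predict(X[:1])
# The "reason" for this prediction is distributed across thousands of learned
# weights; there is no single line of code to point to as the cause.
print(prediction, sum(w.size for w in model.coefs_))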

The Commission was initially considering addressing the complexities of black-box AI in tandem with its development of the AI regulation (as we discussed here). However, the two projects are no longer aligned and any liability reforms are still awaited (and were not included in the Commission's recently published proposals for updating the product safety regime, discussed here).

In the meantime, AI suppliers, developers and providers have to use contractual frameworks to mitigate uncertainty around liability.

Regulation or not?

Jurisdictions such as the USA and (so far) the UK are not following the EU's approach of top-down, cross-sector regulation of AI. The EU considers that regulation is needed to bolster public trust in AI, which will in turn foster growth. The USA, by contrast, is focusing on boosting the sector through funding and by making more public sector and government-held data available for training AI. The UK's National AI Strategy is expected to be published in the next few months. Legislation or regulation of specific issues or specific sectors may follow, but the strategy is not expected to include cross-sector horizontal regulation.

Why is the EU taking such a different approach?

The EU cannot boast a global leadership position in tech markets for hardware, software or infrastructure (although it is trying to reduce European reliance on overseas providers through its "EU tech sovereignty" policy). However, it can take the global lead in regulating technology. Just as the General Data Protection Regulation set the gold standard for privacy, so the Commission is seeking a similar role in other areas of digital regulation. The AI (and other) proposals can be seen in that context.

Overarching EU-wide legislation does at least offer simplicity of compliance. In the US, by comparison, regulation can be issued at federal, state or district level. Compliance with a varied patchwork of local legislation can be extremely time-consuming and burdensome – and can hamper building scale.

Unrealistic data governance standards?

Under the proposed regulation, "high risk" AI systems must satisfy detailed requirements in six areas of compliance covering data governance and various aspects of transparency, safety and control. The proposed obligations around data transparency stand out for their ambition. For example, training, validation and testing data fed into an AI system "shall be relevant, representative, free of errors and complete". This is a very high bar. Moreover, the wording is absolute – there is no tolerance or margin for error in meeting the requirements.

The requirements are also arguably unrealistic. As discussed above, training AI is an iterative, continuous process. As such, it may never be possible to confirm that the data set is complete. "Free of errors" is similarly a very difficult standard to meet for a vast dataset. Many datasets have been accumulated over a number of years, perhaps using an automated collection process or techniques such as data scraping, or gathered together from various sources. Checking the accuracy of every single data point would not only risk being disproportionately burdensome, but might not even be possible if there is no clear lineage back to the source of the data.
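As a hypothetical illustration (the checks below are assumptions made for the sketch, assuming the pandas library, and are not drawn from the draft regulation), automated validation of a dataset can flag structural problems, but it cannot establish that the data is "free of errors" or "complete":

```python
# Illustrative only: basic automated checks on a training dataset using pandas.
# The dataset here is a toy stand-in; in practice it would be loaded from storage.
import pandas as pd

df = pd.DataFrame({
    "age": [34, 29, None, 41, 29],          # one missing value
    "income": [52000, 48000, 61000, 52000, 48000],
    "label": [1, 0, 1, 1, 0],
})

report = {
    "rows": len(df),
    "missing_values": int(df.isna().sum().sum()),
    "duplicate_rows": int(df.duplicated().sum()),
}
print(report)  # {'rows': 5, 'missing_values': 1, 'duplicate_rows': 1}

# Such checks surface structural issues, but cannot verify that each value is
# factually accurate ("free of errors"), that the sample reflects the population
# the system will serve ("representative"), or that no records are missing
# entirely ("complete").
```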

Regulatory intrusion v secrecy

The draft regulation requires providers of high-risk AI to grant the regulator access to their training, validation and testing datasets, to the AI system's algorithmic model, and to any of the extensive documentation required to be drawn up under the regulation. These are not investigative measures triggered where an infringement or non-compliance is suspected; access must be granted simply so that the regulator can verify compliance.

The Commission's proposals would amount to a notably intrusive regime – particularly given that other major jurisdictions are not currently planning to regulate AI in a similar way at all. Some have commented that the degree of regulatory intrusion may even deter non-EU businesses from putting their products onto the EU market, if there are alternative, less rigorously regulated global markets to expand into.

In a similar vein, intellectual property protection for data and datasets is currently limited. In addition, copyright protection for AI models is difficult as the dynamic, continuously evolving nature of machine learning systems does not fit easily with the core concepts of copyright protection. Keeping data and the algorithmic model confidential may be the most effective way to protect and preserve the value of such systems, but compliance with the draft regulation could compromise that approach. Again, there is a misalignment with the realities of the AI industry.

What next?

We do not expect the draft AI regulation to pass through the legislative process without material debate and changes to its provisions. Significant lobbying of the EU institutions is underway, seeking to influence the final shape of this legislation. Such exchanges will no doubt include deeper education of the legislative bodies on how the regulation could be designed to correspond more effectively to the complexities of both the technology and the industry. In particular, the Commission needs to strike a proportionate balance between creating conditions in which high-risk AI tools can be trusted by the public, and avoiding a regime that is so onerous and intrusive that AI providers have no incentive to enter European markets.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
