
How to implement contract safeguards ahead of upcoming European AI laws

Published on 3rd Jan 2024

The anticipated AI Act is in draft but already shaping contracts concerning the development and licensing of AI systems 


It is now a certainty: the European law on artificial intelligence, the AI Act, is coming. Following the announcement of an agreement in the trilogue negotiations on 8 December 2023, the AI Act is likely to be passed before the end of the legislative period and the European elections in June 2024. Although the exact content of the AI Act is not yet known, the drafts of the European Commission and the Council, as well as information about the agreement shared by parliamentary representatives, are available – and there is already a clear need for action.

The EU legislator has set itself the goal of introducing new regulatory standards for the use of AI systems by comprehensively weighing up the opportunities and risks of applying AI in various fields. In addition to the more general AI Act, a specialised AI Liability Directive is to be adopted. These two legislative projects will fundamentally change the legal framework for manufacturers, retailers and users of AI systems. The expected AI Act is already having an impact on IT contract law.

The general framework: the AI Act

The AI Act sets the basic legal framework for operating AI systems in different economic sectors and under different conditions of use, and will be the world's first regulatory approach with an explicit focus on AI. To achieve this, the EU is pursuing a risk-based approach. While some AI systems will be prohibited outright because they threaten security and fundamental rights, the permitted AI systems will be categorised according to the risk potential of their specific use. As a result, AI applications categorised as so-called high-risk systems – under Article 6 et seq. of the draft AI Act – are subject to the most comprehensive provisions, including far-reaching compliance and risk-management obligations.

Title III, chapter 2 of the draft AI Act lists requirements for high-risk AI systems that must be met by their manufacturers and developers. Accordingly, high-risk AI systems must be designed from the outset so that a risk management system is in place at the start of their development, with specific measures defined for identifying, assessing, and eliminating or reducing risks. In addition, the systems must be designed so that they can be adequately supervised by a natural person, operated transparently, and their operation documented. There are also strict requirements concerning the training data used and the design of the systems, which must be robust, accurate and (cyber)secure.

These requirements concern the high-risk AI system itself. Only Title III, chapter 3 of the draft specifies the extent to which the provider, as the last link in the value chain before the user of the AI, is responsible for meeting them. Where the provider is not itself the manufacturer of the AI, it seems advisable to agree contractually on unlimited liability of the AI supplier, to be prepared for the event that the AI does not fulfil the requirements specified in Title III, chapter 2 of the draft legislation.

Such contractual provisions could follow the model of the unlimited liability for breaches of "data protection laws" that is customary in the industry. This could go hand in hand with contractually agreed indemnification clauses at the expense of the AI supplier in the event of violations of the above-mentioned requirements for the AI itself. If the provider offers an AI system that an external party has developed and that falls under the definition of a high-risk AI system, the fulfilment of the above-mentioned legal requirements for the high-risk AI system in question should already be included in the product description and thus become a main contractual obligation. Without a contractual obligation on the manufacturer, the provider that commissions the AI solution could already be accused of a breach of Article 16 of the draft AI Act ("Providers of high-risk AI systems must ensure that their high-risk AI systems fulfil the requirements in Chapter 2 of this Title.").

However, the provider of a high-risk system remains the addressee of the obligations under Title III, chapter 3 of the draft AI Act. The provider cannot exempt itself from them by contractually shifting the obligations to the manufacturer or supplier, meaning that the risk of enforcement remains with the provider. This is one more reason to ensure that the provider only has to bear the controllable risk of a breach of the draft AI Act. Take, for example, the obligation under Article 20 of the draft AI Act to retain the logs that the AI automatically creates: a provider obviously cannot fulfil this obligation if the AI system is not designed to meet the record-keeping obligations under Article 12 of the draft AI Act. Although the provider remains externally responsible here as well, internally it regularly has no influence on whether these requirements can be met and should, therefore, take contractual precautions.

This can be further illustrated by the following example: a company expands the software solution it sells to doctors and hospitals by integrating an AI system to support diagnostic processes. The company has this system developed and trained by an AI start-up. The development agreement with the AI developer should include provisions on conformity with the AI Act as well as provisions on who is responsible for compliance with the requirements for medical devices. Classification as a medical device within the meaning of Regulation (EU) 2017/745 on medical devices is the basis for categorisation as a high-risk system in accordance with Article 6 paragraph 1 lit. a of the draft AI Act.

The agreement with the AI developer should also explicitly state that, in the internal relationship, the AI developer is solely responsible for compliance with the obligations under Article 6 et seq. of the draft AI Act regarding high-risk systems. On the one hand, this documents that the provider ensures that the AI solution complies with Title III, chapter 2; on the other hand, it ensures that the provider does not carry any financial risk for (consequential) breaches of duty beyond its control.

In addition, the users of AI may also have obligations under the draft legislation. It may, therefore, be a sensible precaution to include this liability structure in the contractual agreements with the user of the AI system. In relation to the end user, the provider is not responsible for the components to be provided by the manufacturer; however, it must be carefully examined to what extent such a risk stratification is even possible. This applies in particular if the end users are consumers and not, as in this example, entrepreneurs.

It is probably even more important to specify clearly and transparently towards end users that any breaches of the obligations imposed on users of a high-risk system under Article 29 of the draft AI Act will not be at the expense of the provider or company.

When things get serious: the AI Liability Directive and the updated Product Liability Directive

Other legislative proposals that will need to be considered when drafting contracts in connection with the licensing and distribution of AI systems are the proposed AI Liability Directive and the amendment to the Product Liability Directive. According to a representative survey conducted in 2020, uncertainty about liability issues in connection with AI systems is one of the three most important external obstacles to the operational use of AI systems in European companies. (We have already summarised everything you need to know about the new (AI) liability rules in detail here in German.)

The current proposal for a directive contains two key elements. Firstly, injured persons shall not be required to prove the causal link between the fault of the defendant (a failure to comply with duties of care when developing or using the AI system) and the damaging output produced by the AI system. Secondly, in the event of damage caused by high-risk AI systems within the meaning of the AI Act, access to evidence in the possession of companies or AI providers is to be facilitated. Provided that the claim can be assessed as "plausible", a claimant will be granted a right to information about the functioning of the high-risk AI system, in order to enable the affected party to substantiate the claim.

However, the claimant must first prove that they have made every reasonable effort to obtain the requested information from the defendant company. If the AI provider does not fulfil this obligation to provide information, which arises from the plausibility of the claim, the burden of proof shifts to the AI provider, which must then prove that the damage did not originate within its area of responsibility.

These special provisions, which apply in the event of a claim for damages, should already be considered when developing or commissioning AI solutions. A company that integrates an externally developed AI solution into its (software) products is subject to these provisions on the facilitation of evidence and information obligations in relation to its end customers. The draft AI Act, which stipulates that the AI must be able to create documentation and logs, already helps the provider here. However, particularly in the case of a faulty high-risk AI that does not fulfil the requirements of the draft AI Act, there is no guarantee that the provider will be able to meet the information requirements of either the proposed Product Liability Directive or the AI Liability Directive independently and without the assistance of the AI developer.

Here, for example, agreements should be made to the effect that the AI developer provides support in responding to all claims and, in case of doubt, is liable where and to the extent that a non-fulfilment caused by the developer results in a reversal of the burden of proof to the detriment of the provider. Provisions should, therefore, be included as to what information the provider can at least demand from the developer in a specific case and what information cannot technically be provided at all due to possible "black box" situations. As some of this information may contain business secrets of the company developing the AI, resistance from AI developers in contract negotiations is to be expected.

In such cases, the client company will likely insist on guarantees (meaning strict liability obligations) from the developer for support and assistance in responding to queries in accordance with the AI Liability Directive. AI companies, in turn, are likely to be interested in exemptions from their duty to provide information and cooperation that are formulated as specifically as possible. However, it is in the common interest of both parties to find a compromise that creates legal certainty and enables them to counter claims for damages from potential claimants as effectively and sustainably as possible.

Where the money lies... rights to input and output

To date, EU legislative proposals have not included any provisions on the rights to the input and output of AI solutions. Current intellectual property law, in particular copyright law, also does not protect input and output comprehensively and in line with the interests of the parties. However, this is precisely where the value, especially the commercial value, of many AI solutions lies. For this reason, it is essential to include contractual provisions on the rights to the results generated by the AI in contracts for the licensing or development of AI solutions. A number of questions need to be considered: Who is responsible for the completeness and accuracy of the output? Who is responsible – in the relationship between the contracting parties – for the non-discriminatory nature of the output? Who may further utilise the output and how?

Equally relevant are the rights to use the data with which the AI is trained. The contractor that develops an AI solution for a customer company will regularly have a strong interest in being able to use the (anonymised) data to train other AI solutions that it develops (as standard software). For the customer company, this raises the questions of whether the (personal) data and business secrets that it contributes to the development are sufficiently protected and whether it receives a sufficient financial share of the further use and commercialisation of this data.

In addition to the provisions on input and output, particular emphasis should also be placed on confidentiality clauses that explicitly cover the components that are not protected by copyright, such as the AI model, as well as the progress and results of a training phase.

Osborne Clarke comment 

By taking the relevant legislative measures into account, numerous obligations that arise along the value chain can already be identified and allocated appropriately among the responsible stakeholders. Aligning contracts with the obligations set out in the draft regulations is strongly recommended. However, it remains to be seen how the final legal texts of the AI regulations will ultimately be worded and to what extent it will be possible to effectively deviate from the risk allocation provided for by the legislation.

Amendments to contracts for long-term projects may become necessary; this can already be provided for now by including opening clauses. From the customer's perspective, it is also advisable to include in the contract a general guarantee of compliance with the relevant legal provisions (possibly listing these provisions). In addition, companies that integrate AI solutions into their products should identify copyright and confidentiality issues, address them in the contract accordingly, and, if necessary, combine them with contractual penalties and indemnification obligations in the event of violations.
 


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
