AI, Data and Transformative Technologies

AI literacy under the AI-Act – what companies need to consider now

Published on 5th Dec 2024

The EU Regulation (EU) 2024/1689 on artificial intelligence (the AI-Act) came into force on 1 August 2024. Following an implementation phase, most of its provisions will apply from August 2026, although Chapters I and II will already apply from 2 February 2025. Besides the ban on the prohibited practices listed in Art. 5 of the AI-Act, this early application also covers the AI literacy obligation in Art. 4.

For companies that work with AI systems, this means, among other things, that they must take measures now to train their staff in the use of AI systems. It is therefore essential to start preparing for the upcoming changes.


What obligations do companies have under Art. 4 of the AI-Act?

Art. 4 of the AI-Act stipulates that providers and deployers of AI systems must take measures to ensure, to their best extent, that their staff have a sufficient level of AI literacy; the same applies to other persons dealing with the operation and use of AI systems on the company's behalf. Account should be taken of their technical knowledge, experience, education and training, the context in which the AI systems are to be used, and the persons or groups of persons on whom the AI systems are to be used.

According to Art. 3 No. 56 of the AI-Act, the term "AI literacy" includes "skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause".

The addressees of Art. 4 of the AI-Act are both providers and deployers of AI systems. Providers can be, among others, companies as legal persons that develop or have developed an AI system or an AI model and place it on the market or put it into service under their own name or trademark (cf. Art. 3 No. 3 of the AI-Act). Deployers include legal persons who use an AI system under their own authority (cf. Art. 3 No. 4 of the AI-Act).

Art. 4 of the AI-Act does not restrict the obligation to certain AI systems; essentially, only purely personal, non-professional use by natural persons is excluded from the scope of the AI-Act (Art. 2(10) of the AI-Act). In practice, the provision therefore applies to all AI systems developed or used by companies.

Art. 4 of the AI-Act does not specify which measures companies should take to ensure AI literacy. The choice of suitable measures is therefore left to the companies themselves and will in many cases depend on the sector in which the company operates and the roles of the staff and other persons concerned. Even if the provision leaves open "how" AI literacy is to be acquired, the "whether" is clearly answered: the obligation on the addressed companies to ensure AI literacy in a suitable manner and to the best of their ability applies as a matter of law from the beginning of February 2025.

Sanctions still unclear

Violations of the requirements of the AI-Act are subject to penalties under Art. 99(1) of the Act, the details of which are to be laid down by the Member States and notified to the EU Commission. It can be assumed, however, that this will happen by the time the provisions become applicable.

How are the requirements of Art. 4 of the AI-Act to be implemented in practice?

The open wording of Art. 4 of the AI-Act means that companies must decide for themselves which aspects of AI literacy are relevant for their staff and how to ensure these are covered. Requirements will differ depending on the area of activity and sector as well as on company- and job-specific needs. The diversity of the AI systems in use adds further variation.

However, practice shows that a number of topics are almost always associated with the use of AI systems, especially language-based models (so-called large language models, or LLMs). These topics are therefore an essential part of the AI literacy required by law. This applies, for example, to a basic understanding and awareness of

  • how AI systems work and the resulting opportunities and risks;
  • the limits of the permitted, required or at least tolerated use of AI systems in the company;
  • the handling of confidential information (business secrets) during input (prompting) as well as during the output and subsequent utilisation of the AI-generated results;
  • the handling of personal data (data protection) for input, output and utilisation;
  • third-party rights (copyrights, trademark and design rights, patents and utility models, etc.) when using AI systems;
  • protectability of the AI-generated output (copyrightability, patentability, etc.);
  • system-related weaknesses of AI systems resulting from the training data used (e.g. bias) or from the way the systems function (e.g. hallucinations).

If it is not just a question of using an existing AI system, but also of training it, there are further aspects to consider. Within companies, the introduction and operation of AI systems may also give rise to labour law issues.

What needs to be done?

In the longer term, providers and deployers of AI systems should be able to rely on voluntary codes of conduct to implement AI literacy measures. Recital 20 of the AI-Act envisages that a "European Artificial Intelligence Board" should support the EU Commission in promoting AI literacy tools as well as public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. To this end, voluntary codes of conduct are to be developed to advance the AI literacy of persons dealing with the development, operation and use of AI.

Until this happens, however, companies need to take action themselves. Specialist training courses, workshops and presentations on the topic are well suited for this. To publicise the requirements for AI literacy within the company and make them binding, it is also advisable to draw up an acceptable use policy. This should contain binding rules for staff on which AI systems may be used in the company, for which purposes, and what is permitted or prohibited when using them. Such a policy can cover not only legal but also ethical aspects, or set out best practices for everyday use.

As always in the regulatory sphere, companies should also document the measures they have taken to ensure the AI literacy of their staff. In this way, they can demonstrate, if necessary, that they have fulfilled their obligation under Art. 4 of the AI-Act.

AI expertise of staff as an advantage for companies

Regardless of the legal obligations, it should be clear that AI expertise among employees is an advantage for any company: only in this way can companies seize the opportunities presented by the spread of artificial intelligence, and only with competent staff will they be able to manage the practical and legal risks involved.

In short, the AI literacy required by Art. 4 of the AI-Act helps to minimise risks at an early stage, to promote innovation and to improve efficiency, regardless of any legal sanctions.

Interested in hearing more from Osborne Clarke?

Register now for more insights, news and events from across Osborne Clarke

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
