Artificial intelligence

German supervisory authorities issue guidance for GDPR-compliant use of AI

Published on 15th Dec 2023

Hamburg and Baden-Württemberg data protection authorities have drawn up practice-oriented guides for organisations.


Prior to the political agreement reached earlier this month in the trilogue negotiations on the EU's AI Act, the supervisory authorities (SAs) for data protection in Hamburg and Baden-Württemberg each published guidance in November relating to the General Data Protection Regulation (GDPR) and artificial intelligence (AI). Similar guidance had previously been published by the French data protection agency CNIL and the UK's Information Commissioner's Office (ICO).

Both the Hamburg and the Baden-Württemberg guidance documents provide a more practice-oriented view of AI than the rather programmatic Hambach Declaration on AI, which the German Data Protection Conference published in 2019, or the Declaration on Ethics and Data Protection in AI adopted by the 40th International Conference of Data Protection and Privacy Commissioners in 2018.

Hamburg checklist

The Hamburg SA's checklist for the use of LLM-based chatbots (Checklist) covers what is currently the best-known AI use case: large language model (LLM) chatbots such as ChatGPT, Google Bard or Microsoft's AI-enhanced search engine Bing.

As this form of AI has risen to prominence over the last year, driven by ChatGPT, its use in everyday business contexts has become an increasingly pressing data protection issue for companies that want to leverage the technology for their own purposes. The Hamburg SA's Checklist aims to give practical guidance on how to use LLM chatbots in the workplace in a GDPR-compliant manner.

Policies and procedures

The Checklist relies heavily on creating compliance through policies and procedures that should cover:

  • Use of LLMs in the workplace only through company-owned accounts.
  • Additional authentication requirements.
  • Rules on when, under which conditions and for what purposes employees may use the LLM (in particular, no automated final decisions).
  • No input of personally identifiable data if the AI provider processes the data for its own purposes, in particular for AI training.
  • No creation of output containing personal data.
  • Disabling the storage of the dialogue history and opting out of AI training.
  • Requirements to check the output for accuracy (hallucinations) and to prevent discrimination.
  • Training for employees on how to use AI.
  • Protection of copyright and trade secrets.

Additional measures

In addition, the data protection officer should be involved at an early stage, a data protection impact assessment may need to be carried out and the works council may also need to be involved. Finally, any organisation using LLM tools should closely monitor legal developments around the use of AI. While this relates in particular to the EU's now finalised – but not yet officially adopted – AI Act, the Checklist also points to “precedent-setting procedures with the aim of finding out whether the LLMs on the market are lawful in general”.

Baden-Württemberg's paper

The Baden-Württemberg SA's discussion paper on the legal bases for the use of AI (Paper) aims to help controllers familiarise themselves with the legal bases under the GDPR for the use of AI systems in general. Unlike the Hamburg SA's Checklist, it is not limited to the application of LLM tools but covers AI as a whole. Although labelled as only a "discussion paper", the Paper is the first substantial guidance by a German SA that focuses on the legal bases for the use of AI under the GDPR in a practical context. Similarly detailed guidance, albeit with a different focus, was provided in 2019 by the German Data Protection Conference in its position paper (available only in German) on recommended technical and organisational measures for the development and operation of AI systems.

An ongoing discussion

The Baden-Württemberg SA wants the Paper to be understood as a “living document” that maps the state of an ongoing discussion – an approach similar to that of the US National Institute of Standards and Technology in its Artificial Intelligence Risk Management Framework. To this end, the Paper provides a link after each chapter for participation in an online discussion.

The legal bases

The Paper stresses the importance of understanding the AI life cycle: each phase of the life cycle should be understood as a separate processing phase.

As a first step, a controller must determine each processing phase of an AI system and must understand the detailed processing activities of each phase. For each phase, a legal basis under the GDPR (articles 6 and 9) must be identified.

The Paper identifies the following processing phases:

  • collection of training data for the development of the AI;
  • processing of training data for the development of the AI;
  • making the AI available for use;
  • use of the AI; and
  • use of the output generated by the AI.

For each of these phases, one or more organisations may qualify as controllers; for example, the provider of the AI system and/or the organisation using it.

The Paper discusses the pros and cons of the various legal bases under article 6 of the GDPR. It emphasises that the legal basis in article 6(1)(f) – that is, legitimate interest – is only available to non-public entities, and concludes that legitimate interest may be the most suitable basis in the AI context but requires a case-by-case assessment and, therefore, carries a high level of uncertainty for organisations.

Where consent is the legal basis, in particular if sensitive data is involved, it must be ensured that consent is freely given, especially in view of potential lock-in effects, nudging and deceptive design patterns.

Over the life cycle of an AI system, regular personal data may turn into sensitive personal data, requiring a legal basis under article 9 of the GDPR. In conjunction with the first sentence of section 27 of the German Federal Data Protection Act, article 9(2)(j) of the GDPR may justify the processing of sensitive data for the purposes of structuring and using data for machine learning with AI, provided it can be demonstrated that the processing serves, and is necessary for, scientific research.

Osborne Clarke comment

The Paper provides an overview of the potentially applicable legal bases under the GDPR in the context of AI. However, it is limited to issues around the legal bases and explicitly does not address any other privacy law issues, such as transparency (articles 13 and 14 of the GDPR), data subject rights (article 15 et seq. of the GDPR) or data protection impact assessments (article 35 of the GDPR). To what extent organisations may actually be able to rely on legitimate interest in the context of AI – the legal basis the SA deems most suitable – is not analysed in any detail in the Paper. The same essentially applies to all legal bases discussed in the Paper: it mainly contains rather abstract descriptions of requirements, with no specific guidance on common use cases.

Although the Hamburg Checklist also lacks detail, it can be used as a starting point for policies and procedures around the use of LLMs in the workplace. However, it will be challenging to fulfil the SA's expectation of opting out of AI training by the provider when using the publicly available versions of the most common LLMs. Furthermore, given that certain steps in the Checklist may be onerous, companies may want to consider a risk-based approach and deviate from some of those requirements. In any case, the Checklist clearly emphasises the obligation to implement suitable measures and conduct various assessments before rolling out an LLM. Unfortunately, it does not provide any insight into more specific privacy law issues around AI, such as practical compliance with data subject rights, as it is based on the general idea of avoiding any processing of personal data through LLMs.

Both the Hamburg Checklist and the Baden-Württemberg Paper are more practice-oriented than the very abstract Hambach Declaration on AI of 2019, but they still do not provide the type of guidance organisations are hoping for on AI and the GDPR. Guidance from France's CNIL and the UK's ICO may still be more helpful, even for German organisations. German organisations in particular may also find the German Data Protection Conference's position paper on recommended technical and organisational measures for the development and operation of AI systems helpful.

With the AI Act now finalised, it remains to be seen how the German SAs' guidance may need to be adjusted. Given that the Baden-Württemberg SA has stressed that its Paper is a living document, we would expect the next version also to cover the data protection implications of the AI Act.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.

