What should employers in Belgium consider when using AI in the workplace?
Published on 19th Apr 2024
Artificial intelligence has enormous potential, from recruitment to termination, but the regulatory landscape is evolving fast
The impact of AI spans many sectors, with notable implications for the employment relationship.
It is already helping companies in crucial decision-making processes, particularly in recruitment and HR functions like performance evaluations. This use holds the potential to significantly reduce managerial workload and, theoretically, enable companies to base HR decisions on unbiased data and criteria, leading to fairer outcomes.
Generative AI goes even further than basic AI systems. It is a type of artificial intelligence technology that enables machines to create novel content, encompassing text, code, voice, images, videos and processes. Well-known examples include ChatGPT and Google Bard. The technology is called "generative" because it can generate new, realistic content from its training data.
As (generative) AI becomes increasingly ingrained in employment decision-making, concerns regarding algorithmic bias, liability, privacy and more have come to the forefront. Legal issues may arise in relation to either input or output data. Input data is the data used to train the AI system, as well as the prompts or other user inputs. Output data is the content the AI system produces in response.
This article delves into the use of AI in the workplace, from recruitment through to termination, exploring its potential, its risks, strategies to mitigate those risks, and future legal developments.
Use by employers
Tasks ranging from screening job applicants to making decisions regarding hiring, retention, promotion and even termination now heavily rely on automated tools. While the prospect of leveraging technology to streamline decision-making is appealing, it brings risks and often requires employers to take compliance measures when incorporating AI into their HR operations.
- Use in the recruitment process
Increasingly, employers are turning to AI tools for recruitment. One of the primary impacts of AI on Belgium’s recruitment market is its potential to enhance efficiency in various recruitment processes. Traditional recruitment often involves manual sorting of resumes, screening candidates and scheduling interviews, which can be time-consuming. AI-powered tools can automate these tasks, allowing recruiters to focus more on strategic decision-making and building relationships with candidates.
In Belgium, unlike in certain other EU countries, workers already benefit from strong protection concerning the implementation of new technologies by the company. Specifically, two collective bargaining agreements offer control and safeguards for workers.
Collective Bargaining Agreement n° 9 obliges the employer to provide information to the works council regarding the personnel policy rules applied and any projects and measures that could modify one or more elements of the personnel policy. This may include, for example, communicating information about a new generative AI tool used for recruitment or selection purposes.
Furthermore, Collective Bargaining Agreement n° 39 requires the employer to inform and consult the worker representatives whenever the company is considering implementing "a new technology with significant collective consequences regarding employment, organization of work, or working conditions".
As a result, employers in Belgium have distinct obligations regarding the introduction of new technology in the workplace, ensuring workers are notified and, in some cases, their representatives are consulted.
Nonetheless, once these technologies are legally integrated into the workplace, additional risks may emerge, relating to either input or output data.
Data protection and privacy
One of the primary legal challenges when adopting AI in the recruitment phase revolves around data privacy and data protection. This risk pertains to the input data. AI systems often rely on vast amounts of data to generate insights, predictions, or creative content. Employers must ensure compliance with data protection regulations, such as the General Data Protection Regulation (GDPR) in the European Union.
Under the GDPR, employers must identify an appropriate legal basis before collecting and processing candidates' personal data. Additionally, transparency in data processing practices is crucial. Employers must clearly communicate the purpose, nature and scope of AI-driven data processing to candidates, ensuring that their rights are respected.
Also, in accordance with the data minimisation principle, the employer should restrict the collection of personal information to only what is directly relevant and necessary to achieve a specified purpose. Defining that purpose merely as "training the AI system", however, is unlikely to be sufficiently specific.
Bias and discrimination
Biased training data leads to biased output data. AI models are trained on historical data; if that data contains biases, the AI system may perpetuate and even exacerbate them.
These biases are of particular concern if they facilitate potential discrimination in the recruitment process. Employers must be vigilant in addressing bias in AI algorithms to ensure fair and equitable treatment of all candidates. From a legal perspective, employers may face discrimination claims if AI-driven decisions disproportionately affect certain groups.
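To make the mechanism concrete, the following is a minimal, purely illustrative sketch (not any specific vendor's system; all names and numbers are invented): a simple classifier is trained on synthetic "historical hiring" data in which one group was systematically favoured, and it learns to reproduce that preference even though group membership was never an intended selection criterion.

```python
# Illustrative only: synthetic data showing how a model trained on biased
# historical hiring decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Candidate features: a skill score (what *should* drive the decision)
# and a group label (a protected characteristic, e.g. gender).
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B

# Biased historical decisions: past recruiters favoured group A,
# so group-B candidates needed a noticeably higher skill score to be hired.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train on the historical outcomes, including the group label.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score two equally skilled candidates who differ only by group.
candidates = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(candidates)[:, 1]
print(f"Group A candidate: {probs[0]:.2f} hire probability")
print(f"Group B candidate: {probs[1]:.2f} hire probability")
# The model assigns the group-B candidate a markedly lower probability,
# because it has learned the historical bias as if it were a valid signal.
```

Note that simply removing the group label rarely solves the problem: proxies correlated with it (such as postcode, career gaps or school attended) can allow a model to reconstruct the same pattern.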
- Worker surveillance and performance management
In Belgium, many trade unions have been pushing for years for protections against some of the more intrusive ways that AI tools track and manage workers. As previously mentioned, the national CBA n° 39 provides some protection. Notwithstanding this, various risks may emerge when the implemented tools are used unreasonably.
Source of additional stress
Despite the numerous benefits AI tools may bring to workers' working conditions, continuous monitoring through AI tools can introduce additional stress for staff.
Employers should recognise issues such as growing concerns about job security, which prompt workers to work longer hours for fear of missing employer-set targets, and demotivation caused by a constant sense of being under surveillance.
Additionally, the importance of positive social interactions in the workplace, which enable workers to cope with difficult or uncertain situations, thus potentially reducing stress, anxiety and burnout, should not be underestimated.
Hallucinations leading to discrimination
The continuous monitoring, facilitated by AI-based tools, also carries the risk of erroneous outcomes leading to discrimination, unjustified pay reductions or terminations. Additionally, employers should be aware of the so-called black-box effect, which refers to AI systems with internal workings that are invisible to the user of the system. While the employer can input data and see the output data, they may be unable to examine the system’s code or the logic that generated the output. Consequently, the employee (and even the employer) may have little to no explanation for the decisions made by the AI tool.
In a case brought before the Bologna court by a local union (Court of Bologna, 31 December 2020, R.G. n° 2949/2019), a delivery platform used an AI system that assessed the reliability of couriers based on their behaviour and then assigned more tasks to couriers with higher reliability scores. The court found that this treatment did not consider the reasons behind the cancellation of a scheduled shift, which could include illness, emergencies or the exercise of a legitimate right to collective action. Consequently, the court concluded that the inability to account for such reasons unfairly penalised couriers with legitimate reasons for not working and thus constituted indirect discrimination.
Employers should therefore exercise caution when using generative AI tools to monitor and manage worker performance.
Unreasonable dismissal
Also, when used to measure worker performance, AI systems can produce reports that enable employers not only to reward good performance but also to make decisions regarding the termination of the employment relationship, based on, for example, analysis of workers' activities in emails, documents and other dashboards.
This raises questions about an employer's decision to terminate an employment contract based on AI tools. If termination is based solely on underperformance determined by AI outcomes, it may amount to a manifestly unreasonable dismissal prohibited by Collective Bargaining Agreement n° 109.
Use by staff
Alongside employers using AI tools, workers may also incorporate AI tools into their work tasks. When generative AI is used within the boundaries of its capabilities, it can improve workers' performance. It also enables staff to outsource tasks to AI to free up time, increasing their productivity and allowing them to devote more of their time, energy and cognitive resources to work that drives the company's business forward.
However, despite the enhanced productivity, employers face tangible risks (relating to both input and output data) and potential liability stemming from the misuse of the technology by their workers or from its unintended consequences.
- Liability and accountability
When AI systems make decisions that impact workers, customers, or other stakeholders, questions of liability arise. If an AI system makes a decision that leads to harm, who is responsible?
Employers should consider the legal implications of AI-driven decision-making and establish accountability frameworks. Currently, a person who suffers damage caused by an AI tool must prove a wrongful action or omission. However, the specific characteristics of AI, including complexity, autonomy and opacity, make it difficult and expensive for victims to identify the liable person and prove the requirements for a successful liability claim.
However, an upcoming legislative change may increase the likelihood of employers being held liable. In 2022, the European Commission released a proposal for an AI Liability Directive. The directive would create a rebuttable "presumption of causality" to ease the burden of proof for victims seeking to establish damage caused by an AI system. It identifies the employer who uses or operates an AI system as the correct defendant if someone suffers damage due to a fault in that system.
- IP issues
Generative AI systems have the capability to create original content, thus raising questions about intellectual property rights.
When considering the use of generative AI tools in the company, it is essential to determine whether it will be used by staff for internal purposes only or whether the company will also apply it to relations with its customers. AI-generated content that is used internally is likely to be significantly less risky than content that is used externally.
Input risks may arise when workers feed IP-protected material into a generative AI system, which may retain the data and even use it to build outputs, resulting in IP leakage.
Output risks may arise where the worker does not own or have the necessary permission to use the content created by the generative AI tool. In those circumstances, if the worker uses this new content externally, third parties with relevant IP rights in the content and data could claim that its extraction, reproduction and use infringes those rights.
As a result, employers must clarify ownership and rights associated with AI-generated work. Legal frameworks need to be established to define whether the employer, the generative AI developer, or the worker owns the intellectual property rights to the output generated by the generative AI system.
This requires careful drafting of contracts and policies to explicitly address ownership issues related to AI-generated content. The input and output risks must be assessed in light of the applicable contractual framework for licensing the generative AI tools, as these offer varying degrees of protection and warranties.
- Confidentiality and trade secrets
By using public generative AI tools, workers may inadvertently expose employer data, whether by analysing company information or composing emails using colleagues' names. Once captured, the information input into those applications sometimes cannot be deleted by the worker and may then be used by the application, or even reviewed by the provider behind it.
Unlike patents or copyrights, trade secrets rely on confidentiality and are not formally registered. If a worker inputs a company's trade secret into a generative AI tool, that trade secret could be at risk of losing its trade secret protection.
In addition to a company's standard policies aimed at safeguarding its trade secrets, the company could consider obtaining a licence, such as an enterprise plan, from the generative AI provider that restricts what the provider can do with prompts or other inputs to the system.
Reducing risk
Employers who wish to prepare for an AI-driven future with limited risk should remain aware of how well their workers master generative AI tools. This can be achieved by drafting appropriate policies and training workers on the use of generative AI tools.
AI policies
Companies should draft effective AI policies with comprehensive information for workers regarding the permitted use of AI in the workplace, as well as its appropriate applications and constraints.
The policies should be articulated clearly to ensure that workers are fully informed about what is permitted. Essential components of effective policies may include:
- Listing the tasks for which (generative) AI tools can be used
- Listing permitted (generative) AI tools
- Appointing a contact person or department for employees who have questions or concerns about the proper use of (generative) AI tools within the company
- Requiring human (worker) review of AI-generated work
In order to implement these policies, the employer will need to consult its social partners. This is done in accordance with the usual cascade system (that is, the works council, or in its absence, the trade union delegation, then the Committee for Prevention and Protection at Work, or, in the absence of all of these, the employees directly).
Training
Having guidelines is one thing; implementing them thoughtfully is another.
Raising awareness among workers about the risks associated with AI is crucial. Workers need to know how to craft appropriate prompts, identify bias, avoid sharing company information and trade secrets publicly, and check for plagiarism to avoid infringing IP rights.
Future developments: The AI Act
On 13 March 2024, the European Parliament formally adopted the EU Artificial Intelligence Act (AI Act). The AI Act represents the first comprehensive legal framework on artificial intelligence, addressing the risks the technology poses.
The definition of AI is broad, providing the necessary flexibility to adapt to the rapid technological advancements: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."
To what extent are employers in Belgium affected by the AI Act?
The AI Act is likely to have an impact on the development, marketing or use of all types of AI (traditional and generative) by companies in Belgium. It proposes a graduated scale of rules based on risk: the higher the perceived risk, the stricter the rules. Some AI systems may even be prohibited.
Although the vast majority of obligations outlined in the AI Act apply to providers of high-risk AI systems (that is, individuals or organisations developing an AI system to be placed on the market or put into service under their own name or brand), the AI Act also sets out a number of obligations regarding the use of high-risk AI systems in the context of the employment relationship. As a result, it will certainly have direct relevance for Belgian employers using such AI systems in the workplace.
Companies that develop their own AI systems for internal use may, in some circumstances, be considered providers within the meaning of the AI Act, and thus be subject to its strict obligations for providers.
Employers should also be mindful that they might be considered providers under the AI Act if they use an AI system for a purpose different from its original intended purpose. For instance, if an employer initially uses a generative AI tool to enhance communication effectiveness (the programme's initial objective), but later decides to use it to monitor the activities or effectiveness of workers, the employer will fall under the definition of a provider.
What will be the impact on Belgian employers using (generative) AI?
The AI Act specifically considers AI systems used in the employment relationship, workforce management, and access to self-employment (including systems used for recruitment and selection, decision-making on promotion and dismissal, task allocation, and monitoring or evaluation of individuals in contractual professional relationships) as high-risk AI systems.
Thus, Belgian employers using such AI tools will be subject to strict obligations. These may include, for example, taking measures to ensure a sufficient level of AI knowledge among their staff and other individuals responsible for the operation and use of AI systems on the employer's behalf, taking into account their technical knowledge, experience, education and training, as well as the context in which the AI systems are to be used.
Whenever staff or the employer use a high-risk AI system to make, or assist in making, decisions concerning individuals, those individuals must be informed of its use.
Furthermore, Belgian companies must be aware that the marketing, deployment or use of AI systems intended to assess the emotional state of individuals in workplace-related situations is prohibited by the AI Act.
This includes AI systems that identify or deduce the emotions or intentions of individuals based on their biometric data. The AI Act specifies that this prohibition is motivated by serious concerns about the scientific basis of AI systems aimed at identifying or deducing emotions, especially given that the expression of emotions varies significantly across cultures and situations, and even within the same individual. Consequently, such AI systems may lead to discriminatory outcomes and are prohibited.
Osborne Clarke comment
The use of AI in the workplace has the potential to streamline recruitment processes, enhance decision-making, and improve worker productivity. However, it also brings risks such as data privacy concerns, bias and discrimination, and issues related to worker surveillance and performance management.
Employers should establish clear AI policies, provide training to workers, and comply with relevant laws and regulations. The upcoming AI Liability Directive and the EU Artificial Intelligence Act will have a significant impact on the use of AI, introducing obligations and restrictions for employers. Overall, responsible and ethical use of AI is crucial to mitigate risks and ensure fair outcomes in the workplace.