What impact will artificial intelligence have on the future of work internationally?

Published on 20th Oct 2023

Both employers and employees are increasingly using AI, but there are particular risks to manage in an international HR context

The workplace is experiencing a significant rise in the use of artificial intelligence (AI) by both employers and employees. To streamline operations and enhance decision-making processes, employers are deploying AI systems to complete tasks such as data analysis, automation and customer service. Likewise, employees are utilising AI (particularly generative AI and chatbots) to augment their capabilities, enabling them to work smarter and more effectively.

However, this growing reliance on AI presents new challenges. As these tools continue to shape the workplace, employers must navigate the complexities of integrating the technology responsibly and equitably while keeping pace with international regulatory developments.

We have produced a flyer "Artificial Intelligence and the Future of Work" with some high-level details of the overarching legal position in respect of AI use in the workplace in a number of jurisdictions (the UK, Belgium, China, France, Germany, India, Italy, the Netherlands, Poland, Singapore, Spain and Sweden). This Insight summarises some of its key takeaways.

How is AI being deployed in the workplace?

Most jurisdictions noted an increased use of AI tools to automate various aspects of the employment lifecycle; however, interestingly, adoption of AI in French and Indian workplaces is still low.

The prevalence of AI in the workplace, and its applications, vary depending on the sector in which an employer operates. Common examples of how these tools are applied internationally include:

  • Recruitment and onboarding.
  • Performance management.
  • Work allocation systems (particularly for gig platforms).
  • Monitoring employee welfare.
  • Work and content production, through the use of generative AI and chatbots.

What are the common pitfalls of using AI in employment?

Despite the benefits, these uses of AI come with risks. There are two principal issues to consider from an international HR perspective:

  • Bias and discrimination: The use of AI within the employment context brings the risk of algorithmic bias and discrimination. Algorithmic bias can result from training AI systems on non-representative data (for example, an insufficiently diverse data set). Where biased training data skews the outputs of an AI tool in a way that negatively impacts an individual or group of individuals with a legally protected characteristic, there is a risk of unlawful discrimination under existing equalities legislation in different jurisdictions.
  • Data protection: Machine learning systems depend to a large degree on data, both during their training process and when executing day-to-day tasks. Where that data includes personal data, local data protection legislation must be complied with (for instance, the UK GDPR and the EU GDPR).

Across many European jurisdictions, there may also be works council information and consultation requirements to observe before implementing AI-based technologies.

Similarly, when AI adoption leads to the automation of certain tasks that were previously handled by employees, it might result in redundancies. Workforce restructuring is, of course, subject to local legal requirements, including process and consultation. Failure to meet these obligations risks employee disputes.

What are the main risks associated specifically with employee use of generative AI?

There are several risks associated with employees using generative AI to produce work and content. Some of the main risks, which apply regardless of jurisdiction, are:

  • Inaccurate or low-quality information produced by an AI chatbot, resulting in reputational risk (and regulatory risk if used in a regulated industry).
  • Intellectual property (IP) and copyright infringement risks, both in terms of IP leakage (employees inputting company IP into the tool) and uncertainty over ownership of AI outputs.
  • Confidentiality and data protection breaches: employees may share confidential information in conversations with an AI chatbot, which may then be accessible in the "back end" of the system.
  • Embedded IP or data protection breaches in training data that was scraped from the internet.
  • Loss of developmental opportunities or deskilling of employees.
  • Bias and discrimination risks (as noted above).

What steps can employers take to mitigate these risks?

Employers using AI in their employment lifecycles should consider (among other things):

  • Completing due diligence before implementing any AI tools, including on training data.
  • Checking terms and conditions of use to understand whether potential risks are mitigated by indemnities or warranties from the licensor.
  • Implementing some form of regular human review of the outputs produced by AI tools, maintaining the "human" part of human resource functions.
  • Putting in place policies that clearly set out expectations around AI use in connection with work.
  • Reviewing their internal AI strategy and deciding on the steps required to align their use of AI tools with the emerging regulatory frameworks.

Osborne Clarke comment

Employers should stay up to date on developments in the regulation of these technologies so that they can comply with local AI regulations in the jurisdictions in which they operate.

The details of the proposed EU AI Act are still under negotiation, but the expectation is that AI designed for recruitment and workforce management will be considered "high risk". An onerous framework of regulatory obligations is proposed, which will fall at various points in the AI supply chain, including on the users of these tools. Generative AI is also expected to be regulated. Please see our previous Insight for commentary on the status of the EU, UK and international AI regulatory proposals.

As an international law firm specialising in transformative technologies, Osborne Clarke can provide invaluable guidance and legal expertise to employers navigating the complexities of deploying AI in the workplace. With our deep understanding of local regulations and evolving AI-related legal frameworks, we can help employers ensure compliance and mitigate potential risks.

If you would like a copy of the international flyer or to discuss some of the specific challenges you face in this area, please contact one of our experts to find out more.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
