AI in the employment lifecycle: what pro-innovation regulation means for employers
Published on 11th Oct 2022
The UK government's policy paper on AI governance and regulation has significant implications for the workplace
The Department for Digital, Culture, Media and Sport's recently published policy paper, "AI Governance and Regulation: Policy Statement", sets out the UK government's proposed pro-innovation approach to regulating artificial intelligence (AI), an approach that will have significant implications for AI tools and their use by employers.
Automation by employers
Employers increasingly use AI tools to automate various aspects of the employment lifecycle, drawn by the efficiency gains and, in turn, the time and cost savings these tools offer. Examples of how they are applied include:
- Recruitment and onboarding, where they are used to automatically sort through CVs and application forms, analyse tone of voice and facial expressions during video interviews, and screen candidates using online tests.
- Employee reviews, where they can be used to predict whether people will hit targets by automatically categorising data and finding correlations between data sets. Performance appraisals more generally have been undertaken using AI algorithms that automatically review completed work against a database of work judged satisfactory.
Discrimination and bias
Despite the benefits, these uses of AI come with risks. These include the potential use of biased data, which the Equality and Human Rights Commission covered in its recent guidance on the use of AI in public services. When bias skews the outputs of an AI tool in a way that negatively impacts an individual or group with a legally protected characteristic, there is a risk of unlawful discrimination. Numerous employers have backtracked on the implementation of AI tools after uncovering evidence that they had unintentionally been facilitating this type of algorithmic discrimination.
To address this, the "decisions" of AI software and the data used should be checked so that any potential bias is detected and, in turn, corrected. This remedy is often not as simple as it sounds, however, given the "black box" nature of much AI decision-making. There will often be little human insight into why a particular decision has been generated, so checks have to be done not by reviewing the AI tool's methodology but solely by reference to its output. It is important to analyse regularly which candidates pass any AI-reliant stage and which do not, so that discrimination that is not readily visible from the initial algorithm or data can be detected and remedied early. If the AI-driven efficiency is not to be lost, this might need to be done by selectively sampling decisions rather than checking every one.
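By way of illustration only, one common statistical check used in this kind of output monitoring is an "adverse impact ratio", which compares each group's pass rate against the highest-passing group's rate; under the US-derived "four-fifths" rule of thumb, a ratio below 0.8 is often treated as a flag for further review. The minimal Python sketch below assumes a sampled list of (group, passed) decisions with hypothetical group labels; it is not a prescribed methodology or a UK legal test.

```python
from collections import Counter

# Hypothetical sample of AI screening decisions: (group, passed).
# In practice this would be a random sample of real decisions, with
# groups defined by the protected characteristic being monitored.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def pass_rates(decisions):
    """Compute the pass rate for each group from (group, passed) pairs."""
    totals, passes = Counter(), Counter()
    for group, passed in decisions:
        totals[group] += 1
        passes[group] += int(passed)
    return {g: passes[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's pass rate to the highest group pass rate.

    Under the "four-fifths" rule of thumb, a ratio below 0.8 is often
    treated as a flag for potential adverse impact.
    """
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

rates = pass_rates(decisions)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: pass rate {rates[group]:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Any such threshold is a heuristic rather than a legal standard: a flagged result would still need human and legal review before any conclusion about discrimination could be drawn.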
Current UK regulation
There is currently no AI-specific legislation limiting the use of AI in the workplace. Data protection law does, however, provide some protection for employees subject to AI decisions in an employment context. Article 22 UK GDPR, in particular, protects individuals from decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects on them. This provision seeks to ensure that there is some meaningful human involvement in these processes. Under the previous UK government, the Taskforce on Innovation, Growth and Regulatory Reform recommended that Article 22 be removed due to its "burdensome, costly and impractical" nature; it remains to be seen whether the new government will agree with that thinking.
Draft EU AI Regulation
The AI Regulation is unlikely to be finalised before late 2023, with a further transition period before businesses must comply. Under this proposal, AI systems used in employment are categorised as high-risk, given their potential to appreciably affect individuals' livelihoods and careers. Once the regulation is in place, AI tools used in the EU will have to meet its strict requirements, which are broad, spanning accuracy, cybersecurity, data quality and governance, human oversight, record-keeping, robustness, technical documentation and transparency. Employers using these tools in the EU or in relation to EU employees will want to be confident that the tools they deploy are compliant "out of the box", or at least that they understand any compliance requirements that will fall on them as users.
The proposed UK approach
The UK government's policy paper represents a divergence from the EU's approach. It envisages adopting a high-level set of principles to be implemented by individual regulators as needed, as opposed to implementing a standalone bespoke regulatory framework. The guiding principles for regulators to consider when overseeing the use of AI in their sector are to:
- Ensure safe application of AI.
- Ensure the technical security and reliable operation of AI.
- Ensure that AI is appropriately transparent and explainable.
- Embed fairness considerations into AI.
- Define an identifiable legal person's responsibility for AI governance.
- Clarify routes to redress or contest.
Initially, these principles are not expected to translate directly into new obligations. Instead, the government envisages encouraging regulators to consider lighter-touch options, such as guidance or voluntary measures. Sector regulators are to adopt a proportionate and risk-based approach, applying the principles to AI systems operated within the area they oversee and focusing on high-risk concerns. Since there is no "employment regulator" as such, the application of the principles to employment matters may instead develop incrementally, whether through the rulings of employment tribunals or guidance from employee rights bodies such as Acas. Equally, guidance from sectoral regulators may affect the employees of businesses in those sectors.
This approach is more light touch than the prescriptive one proposed by the EU. That said, the government will continue to develop its position as it works towards its planned white paper on AI regulation, expected towards the end of 2022 – and this will potentially have a broader impact beyond the employment context.
Osborne Clarke comment
While UK-based developers and suppliers may appreciate a lighter touch to regulation at home, many will likely also want to access the larger EU market. There is a real possibility that, while the UK government may want to regulate in one way, the EU approach will ultimately become the baseline for most companies.
Employers using AI in their employment lifecycles should consider:
- Completing due diligence before implementing any AI tools, to fully understand any risks.
- Encouraging HR to be open with candidates about how such technologies are used and the review mechanisms in place, to reduce the risk of backlash.
- Reviewing their internal AI strategy in line with the proposed principles and deciding on the steps required to align their use of AI tools with the emerging regulatory frameworks.
- Implementing some form of regular human review in relation to the results produced by AI tools, maintaining the "human" part of human resource functions.
- Staying abreast of developments in the regulation of these technologies; in particular, the upcoming UK AI white paper and any announcements from regulators about how they intend to interpret, implement and enforce the six principles.