
AI is reshaping HR and the workplace internationally and raising new legal issues

Published on 20th Jun 2024

Now is the time for upskilling workforces for AI and for organisations to benefit from the 'commoditisation' of work


Artificial intelligence (AI) offers many benefits to organisations, from taking away routine administrative tasks to allowing employees to be redeployed to add greater value. And, within HR, AI can offer a wide range of solutions, from scheduling and recruitment to performance assessments and monitoring employee wellbeing.

But knowledge and opinions about AI vary widely despite its growth in the workplace and on the HR agenda. While some organisations are already engaging with AI-driven solutions, such as ChatGPT and "sandboxes", others are still trying to find ways to use the technology appropriately.

Understand the risks

AI brings a range of risks that HR professionals need to understand. It can reinforce discrimination embedded in biased data. It can pose issues around intellectual property, such as the use of other people's material to develop an AI solution without their consent. It can potentially deskill workforces. And AI can raise issues around who creates and owns the output arising from its use.

There is ongoing government-level legislative and regulatory activity, but this presently involves more discussion than implementation. Organisations wanting to take advantage of the evolving technology will need to be ready to adapt and be flexible when and as rules are introduced.

Education and experience

Employers will encounter a range of issues as they adopt AI. Speaking at a recent Osborne Clarke AI event, Sue Turner, founder director of AI Governance, a UK business consultancy, emphasised the need for executive education so that leaders of organisations understand what AI is, what it can do and, particularly, the ethical issues it raises.

When AI is introduced into an organisation, its leaders need to understand the systems or approaches that mitigate risk in order to get full value from the technology. Employers are exploring the introduction of frameworks to govern the use and development of AI. Although this can come at the cost of innovation and efficiency, employers are looking to find the right policies to encourage the adoption and use of AI without exposing themselves to greater risk.

There has been a proliferation of solutions touted to employers, and to HR in particular, as being AI, according to Turner. The growing uptake of AI means that third-party suppliers need to be challenged over what their AI does and what data it has been built on. "You need to know enough about what AI is doing to ask if this is right for your organisation," she said. "You should ask: is the science behind it real?"

Is it or isn't it AI?

Organisations need to ask whether AI is actually present in solutions that are on offer. The definition of AI compared to machine learning, for example, can be hazy and lead to users believing that they have an AI-based system when they do not.

Conversely, some solutions can be implemented without realising that AI is present: often the full explanation of what a solution is and does stops when a sale has been made. This can lead to incomplete discussions of "what’s under the bonnet" and businesses not realising they are using AI-powered solutions.

"The more we ask questions about how it works and what it does, the more the suppliers will get used to being transparent and explaining what they do," according to Turner. "And that will be better for everybody.”

Introducing systems

Being impressed by AI’s potential is one thing but making it work is another. While generative AI solutions for everyday tasks, such as writing emails and taking notes, can be used and assessed with ease, embedded AI that is used to deliver automation as well as decision-making (such as recruitment and resource management) can be problematic.

An organisation that opts into a solution without full knowledge of how it works is open to accusations of bias if the solution does not operate fairly. It can be difficult to see whether the technology is "working". There may be a long-run impact on the bottom line or on recruitment, retention and equality measures, but it is not enough simply to trust the technology to produce the right results.

New systems can be run in parallel with established ones to provide benchmarks. Businesses taking this approach can avoid blind experimentation, gain a real sense of what a system does and build a plan for what the technology can contribute.

The people factor

Technology will deliver results only if people engage with its use. Agile workforces can maximise AI's benefits. With an increasingly ageing population and workplaces that might span five generations, finding the right approach to connect workers with technology can be a challenge. Hence, learning agility is more important than ever. As people work longer, organisations need to ask how they bring in and develop sought-after skills.

Technology considerations are vital when systems are customer facing: a system that looks good but does not work or deliver value can be detrimental and lead to reputational damage. For HR, this applies to recruitment, where prospective candidates who have a bad experience may share negative reviews. Using technology at the front end of the business also sets an expectation of what customers or clients will receive.

'Silos' versus 'lakes'

Organisations can struggle with data management. Data can stay within the operational silos in which it is created – unseen and inaccessible to other parts of the organisation. AI can push businesses forward by tapping into diverse data sets and identifying patterns and potentially profitable actions. This can move data from "silos" to "lakes" from which AI could draw – but this requires security and protections around the data.

Data protection laws need to be observed. Organisations need to be aware of both the value of data and where it can and cannot go. Pushing data into an AI solution can mean it leaves the organisation and potentially becomes accessible to others. This requires frameworks and guidelines.

'Early years' work

AI technology is likely to take over the "low-hanging fruit" of administrative work and have a significant impact on entry-level work and the early career structure of professions.

The removal of low- and entry-level work could lock some people out of career paths, and the loss of the opportunity to learn a job "from the bottom" could mean a more limited funnel of talent coming into professions.

However, the removal of mundane aspects of work could enable early-years workers to engage more fully with the job and its sector and to pick up more complex skills earlier, progress faster and make more significant contributions to the business.

Osborne Clarke comment

Attitudes towards the rapid evolution and adoption of AI can vary from the enthusiastic to the sceptical and even fearful. But there is a general recognition at senior levels that the clock can't be put back, transformation is coming and this requires figuring out what AI does and how to get on board.

Schools are already taking the view that they are preparing children for jobs that currently do not exist, as AI radically changes industries and workplaces and creates new roles and dispenses with others. This will need to be managed and overseen by the HR function and requires engagement to understand, adapt to and prepare for transformation.

HR will have a major role in the use and management of AI and in ensuring that skills and attitudes are matched by its safe and effective use. HR will have a role in creating and managing the new jobs and career paths for employees, with new expectations to manage and support. Managers and leaders will also require guidance in this technology-driven environment.

This is not, however, a task solely for HR. AI will touch every part of an organisation, and coherent strategies will be needed around its introduction and use. From system design and implementation to use and assessment, HR will need to work with all parts of an organisation to make AI work for everyone.

That will require teaching people how to use the technology and giving them frameworks – safe ways of working that say "we will use AI for this, but we will never use it for that".

This Insight is based on a roundtable for leading HR professionals, "AI in the Workplace – the Potential, the Pitfalls and the Risks", which was hosted by The HR World, sponsored by international legal practice Osborne Clarke, and led by Anna Elliott, Partner at Osborne Clarke, and Sue Turner OBE, the founder of AI Governance.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
