AI tools are changing the world of work, and HR professionals need to become familiar with their risks and benefits. We recently asked our contacts how they were using AI in an HR and workplace context.
Based on the responses we received, most of our contacts are well on the way with their AI in the workplace journey. Overall, AI is being used cautiously and is affecting some hiring decisions. It is also affecting job numbers in some businesses. However, our contacts are not yet considering significant redundancies due to technological change.
We are seeing a focus on AI policies and training, which form a key part of that journey and help to ensure AI is implemented and deployed in the right way.
How is AI being used in the workforce?
The most common use cases for AI in a workplace context are drafting job descriptions and advertisements (58% of responses) and drafting HR policies (55% of responses).
These are both tasks where AI tools can produce a good starting point. However, because of the way AI works, there may be more risk than you might think, and expert human oversight is still vital. For example, if the data an AI tool was trained on contains HR policies with errors, using it could leave you with policies that are not legally compliant. There is also a risk of ending up with policies that are not wrong but simply don't work in practice for your business.
Similarly, the data AI tools were trained on is highly likely to include job advertisements with language that, say, appeals more to male candidates than female ones. Using AI-produced content uncritically therefore risks perpetuating bias and discrimination. The important point to remember is that AI is trained on huge volumes of data, some good and some bad, so you should not assume it gets everything right. These risks need careful consideration, as does the question of what can be done to ensure the individuals concerned are treated fairly. This may involve additional human oversight, or an adjustment to the relevant algorithm to correct any bias or discrimination.
The next most common use of AI tools was transcription of disciplinary, grievance or consultation meetings (35% of responses). This is consistent with our experience from client conversations. Many of our clients have started using transcription not to replace the HR professional in a grievance meeting but to free that person to concentrate on supporting the manager, rather than on capturing as accurate a note as possible. This is a good example of AI being used to supplement existing roles and make better use of the HR team's skills. Transcription tools are currently far from perfect, and notes often need significant tidying up afterwards. However, they do allow for more accurate and comprehensive notes.
Another reasonably common use at 29% is utilising AI to answer routine HR queries from employees. That is consistent with our own experience, where we have introduced a tool called iWomble to help employees access information in our own internal policies quickly.
One slightly surprising result is that only 6% of respondents are currently using AI in workforce planning and shift management. This may be influenced by the industry in which those who responded work. However, over time we would anticipate that tools that help employers predict peak demand and rota employees when they need them could become important in industries such as retail, hospitality and manufacturing, as well as public services.
Finally, a small number of our contacts are using AI in what we would view as higher-risk scenarios. These include analysing or summarising employee grievances (6%), assessing candidates in recruitment processes (16%) and monitoring employee performance/productivity (10%). These are all areas where the outputs from AI tools could have significant real-world impacts for individual employees. Therefore, the risks of AI tools hallucinating (making things up) or importing bias or unintended discrimination from the data they were trained on are concerning. It will be vital to ensure that there is human oversight and that you do not rely solely on AI outputs in these scenarios. Additionally, there are significant data protection considerations: you should consult your data protection officer and carry out data protection impact assessments before using these types of tools.
What about the impact on jobs?
Based on the responses we received, the impact on employee numbers is not yet significant. The content of some jobs may be changing due to AI. However, despite high profile stories about technology companies advertising with slogans such as "stop hiring humans" and companies like Salesforce cutting customer service roles in favour of AI agents, our contacts are not yet seeing significant impact on workforce numbers.
Six per cent of respondents told us that they had reduced hiring activities for some roles. However, no-one had knowingly stopped or paused hiring for specific roles because of AI implementation.
The impact on existing roles may be more significant. Six per cent of respondents told us they had needed to consider making redundancies because of the use of AI tools, and a further 6% said they anticipated doing so over the next six months.
What does the regulation of AI look like in our contacts' businesses?
The encouraging news from our survey is that 48% of respondents had a policy governing the use of AI and a further 32% were considering it. However, that leaves 10% who don't have a policy and 10% who aren't sure.
In our view having both a robust policy and training employees on AI is vital. Policies and training help you ensure that:
- Only vetted AI tools are used in your business
- AI tools are being used for the right tasks and not used in higher risk scenarios
- You get the best out of AI tools by ensuring your employees know the best way to use them
- Data protection risks are considered before tools are used
- You can take disciplinary action if employees put your business at risk by using unauthorised tools or using AI for unauthorised purposes
- Your ethical principles and values are taken into account when employees use AI.
Restricting which AI tools are used is particularly important. It can protect you from some of the risks of bias and, further, if you forbid employees from using open-source tools, it should help protect your data from being shared with the wider world. If you aren't clear on which tools employees can use, it will be difficult to take disciplinary action if their use of AI subsequently risks causing damage to your business.
If you have not yet adopted an AI policy or conducted training for your staff, we recommend doing so.
If you have any queries about the issues raised in this article please get in touch with your usual WBD employment team contact.
This article is for general information only and reflects the position at the date of publication. It does not constitute legal advice.