
Modern workplaces are increasingly adopting Artificial Intelligence (AI) tools for HR and employee management, with GenAI applications broadening workforce accessibility. “Smart” robots are now commonplace, collaborating with human workers in diverse environments. AI’s evolution, propelled by enhanced computing power and data availability, has accelerated post-COVID-19, with remote work normalising AI use. However, existing employment laws face challenges adapting to these technological advancements.
What is AI?
AI technology in the workplace involves machines or software that learn from data and tasks, adapting to improve task performance. It simulates human intelligence for tasks typically done by humans. AI consists of data (text, audio, images, video) and algorithms (code for specific tasks). Core AI functions include recognising patterns, making informed judgements, optimising processes and predicting future behaviour. AI applications range from ChatGPT to virtual assistants and recognition systems.
Machine learning, a subset of AI, allows computers to learn from data trends and patterns without explicit programming, often requiring substantial structured “training data.” Computers learn to recognise patterns and build models applicable to new data. GenAI, a type of AI, creates new content reflecting training data characteristics without duplication, like text generation via large language models (LLMs) such as ChatGPT, which predict word sequences to generate sophisticated outputs rapidly.
Automated decision-making occurs when AI tools make decisions without human intervention, with the AI’s output treated as correct.
AI in the workplace and its challenges
When implementing AI-assisted tools in the workplace, it’s important to be aware of potential legal risks. Careful consideration is advised to ensure compliance and mitigate any legal challenges.
The use of AI in the workplace can obscure the factors and their significance in automated decision-making, leading to transparency and legal concerns. Overreliance on AI may also diminish the personal connection between employer and employee, as AI lacks the nuanced judgement of human managers. This could result in management challenges, such as employees hesitating to discuss performance issues or a breach of trust due to unexplained AI decisions.
As regards the European Union’s position on AI, the European Commission has introduced a new Regulation, the EU AI Act, laying down a legal framework for the use of AI in the EU. It was formally adopted by the Council of the EU on 21 May 2024.
There is currently no specific legislation governing AI in the UK. However, the King’s Speech on 17 July 2024 confirmed that the UK Government intends to strengthen AI regulation.
A variety of AI-assisted tools are available to aid employers in numerous workplace functions, but their use raises risks that need to be carefully considered:
- Recruiting and hiring: AI aids in creating job descriptions, screening candidates and conducting interviews.
- Employee onboarding: Chatbots guide new employees through resources and onboarding processes.
- Performance management: AI allocates tasks, measures performance and aids in promotion decisions, while also analysing sales calls and productivity for optimisation. However, reliance on AI risks unfair decision-making, as managers may not grasp algorithm workings or data interpretation, leading to potential unfair dismissal claims. Human managers should retain final decision responsibility, ensuring transparency and fairness.
- Remote worker management: Data analytics and AI track remote workers, supporting remote and hybrid work arrangements. The use of AI tools to monitor employees may give rise to data processing issues. Employers should provide full information to workers about any monitoring.
- Career coaching: AI suggests positions, training and development opportunities based on career interests.
- Employee retention: AI predicts employee turnover and advises managers on retention strategies.
- Redundancy: AI tools can be used in redundancy situations to select employees for redundancy. A dismissal is unfair unless the employer shows a fair reason, acts reasonably in treating it as sufficient for dismissal and follows a fair procedure. Claims of unfair dismissal may arise if AI tools are used inappropriately, as seen in cases like the Estée Lauder settlement over redundancy selection via automated video interview.
- Automation and safety: AI-powered “smart” robots are increasingly common in workplaces, collaborating with human employees to automate repetitive tasks, boost efficiency and perform hazardous duties. Additionally, AI monitors video feeds to detect potential safety risks. Employees may raise whistleblowing claims related to an employer’s use of AI. Employees who believe that their employer’s algorithmic decision-making tools are causing biased or discriminatory decisions may raise these concerns as protected disclosures. This underscores the importance of ensuring AI tools are reliable and transparent to avoid potential whistleblowing and discrimination claims.
- Gig economy: Algorithmic management and automated decision-making tools allocate work, assess performance and enforce disciplinary actions. The “Managed by Bots” report by Worker Info Exchange in December 2021[1][2] revealed that current laws fall short in safeguarding gig economy workers’ rights, underscoring the need for greater transparency and fairness in the use of these tools to enable workers to challenge decisions effectively.
Legal issues and HR
While the benefits of using AI are numerous, it can also raise several concerns for employers who may not be fully aware of its implications:
- Whistleblowing: For example, in the case of Roganavic v iPlato Healthcare Ltd, a claim was made concerning a GP app’s AI triage system. The claimant argued that the app could misdirect vulnerable patients due to inaccurate or dishonest AI interpretations. However, the employment tribunal ruled that the claimant’s disclosure was an opinion rather than a disclosure of information, and it lacked objective information. This underscores the importance of ensuring AI tools are reliable and transparent to avoid potential whistleblowing claims.
- Data protection[3]: Employers’ use of AI applications typically involves data processing, which must comply with the data protection principles outlined in the UK GDPR when handling personal data. Extra caution is required for special categories of personal data, such as racial or ethnic origin, religious beliefs, sexual orientation or health data, which employers are generally prohibited from processing unless specific exceptions are met. Profiling in the UK GDPR context refers to automated personal data processing to evaluate certain aspects of an individual, such as in e-recruiting. Solely automated decision-making involves decisions made without human input, like an aptitude test using algorithms. Employers can engage in these activities if they comply with data protection principles and have a lawful basis. Additional safeguards are required to protect individuals, including the right to human intervention and to contest decisions, especially when decisions have legal or similarly significant effects.
- Human rights: The use of AI in the workplace, particularly for monitoring employees, may raise privacy concerns under Article 8 of the European Convention on Human Rights. However, employers may justify such interference if it aligns with legal requirements and serves a legitimate aim, as per Article 8(2). This is especially pertinent for AI tools used in employee monitoring.
- Employment contracts: The use of AI in the workplace, particularly for tasks like automated shift allocation, can affect pay and working hours. A report by the Ada Lovelace Institute in July 2023[4] highlighted the impact of AI on such processes, suggesting that employers should consider referencing these AI-related procedures in the section 1 statement as per section 1(1) of the Employment Rights Act 1996.
- Employment and worker status: The common law tests for employment status may need to adapt to the increasing use of AI in directing or performing tasks, as it’s uncertain if current tests will align with the realities of an automated workforce. This consideration is important as the workplace evolves with AI integration[5]. AI’s role in the gig economy, particularly in cases like Uber BV v Aslam [2021] UKSC 5, demonstrates its influence on the development of the “worker” status test. Here, gig economy workers, managed by a digital platform’s algorithm, challenged their employment classification. Such cases underscore the evolving relationship between AI and employment law, as courts assess how traditional legal frameworks apply to algorithmically managed labour.
- Mental health and stress: The use of AI for workplace surveillance and line management has been linked to increased stress, anxiety, and diminished mental wellbeing among employees, as noted in a 2023 House of Commons Library research briefing[6]. This underscores the need for careful consideration of employee health in the deployment of AI technologies.
- Industrial action: The historic strike by Hollywood screenwriters and actors against GenAI in 2023, the industry’s first joint strike in over 60 years, suggests a potential rise in industrial action aimed at curbing the rapid adoption of GenAI. It reflects growing concern over the impact of GenAI on traditional roles and industries.
Managing AI in the workplace
Employers introducing or using AI-enabled tools to make decisions and manage employees should conduct a risk assessment to identify and address the potential concerns associated with each AI application, determine whether consultation with trade unions or staff associations is required and, given the complexity of AI, consider wider workforce engagement. Employers should also be clear about how AI tools are used: transparency is crucial in the UK for personal data processing and in the EU for all types of AI.
Employers should provide candidates and employees with comprehensive information on profiling, automated decision-making and monitoring, ensure that profiling and decision-making align with data protection principles and inform individuals of their rights.
We suggest carrying out data protection impact assessments to evaluate the necessity and proportionality of AI data processing, and remaining alert to bias and discrimination risks in relation to protected characteristics. Employers may need to consider alternative measures for those with protected characteristics, as AI training data may reflect past discrimination or be imbalanced.
It would be sensible for human managers to retain responsibility for decisions, especially those leading to dismissal. Where AI is used to assist decision-making, employers should be prepared to explain and justify AI-based decisions if challenged.
Education can be key here. Training HR and managers to understand algorithms and interpret data, including verifying data accuracy, will lead to better-informed and fairer decision-making.
Employers considering using GenAI for workplace tasks should:
- Decide the extent to which employees should use GenAI for their work functions.
- Train employees on GenAI’s use, addressing any restrictions or limitations. Recent surveys indicate a need for increased AI training across all employee levels, especially frontline workers.
- Establish clear policies governing GenAI use in the workplace.
- Ensure GenAI outputs undergo thorough review to prevent errors, copyright infringement and potential reputational damage.
- Be aware of legal risks such as employer liability for AI bias, inaccurate responses and the inadvertent disclosure of confidential information when using GenAI applications.
Remember, while AI can improve efficiency and output quality, it can also make mistakes. It’s essential to ensure that AI is used responsibly and ethically, with human oversight remaining a critical component.
[1] Managed by Bots Report | Worker Info Exchange
[2] Gig economy algorithmic management tools ‘unfair and opaque’ | Computer Weekly
[3] Data protection is governed by the retained EU law version of the General Data Protection Regulation (EU) 2016/679 (UK GDPR) and the Data Protection Act 2018.
[4] Lawrence-Archer and Naik: Effective protection against AI harms (July 2023)
[5] House of Commons Library: Research briefing: Artificial intelligence and employment law (11 August 2023)
[6] House of Commons Library: Research briefing: Artificial intelligence and employment law (11 August 2023)
This article is for general information only and reflects the position at the date of publication. It does not constitute legal advice.