Businesses increasingly look to AI tools to improve efficiency, increase productivity, and find new revenue streams. However, AI adoption carries a liability risk should anyone come to harm as a result of the technology, and tech leaders have expressed concern over the pace of AI development. Adopting AI technologies without appropriate due diligence could lead to a wave of liability claims, especially if proposed legislation in the EU comes into effect.

Who is responsible for AI mistakes?

The EU's AI Liability Directive (if enacted) aims to modernise the current EU liability framework to make it easier for individuals to bring claims for harms caused by AI. This aligns with the European Commission's pledge to adopt a 'human-centric' approach to AI. The AI Liability Directive will apply to all providers and/or users of AI technologies that are either available or operating within the EU.

Can artificial intelligence be held liable?

The AI Liability Directive (if enacted) would make it easier for anyone injured by AI-related products or services to bring civil liability claims against both AI developers and the organisations utilising the AI. Consequently, developers and organisations designing and deploying AI in EU Member States will be at greater risk of being found liable under the new AI Liability Directive.

For example, in certain circumstances under the AI Liability Directive, the courts will apply a presumption of causality. This means the starting point for the judgment is an assumption that the action or output of the AI that caused the harm is attributable to the AI developer or user against whom the claim is filed. This presumption will be rebuttable, and defendants can submit evidence to the contrary, but doing so will require organisations to document how they are using AI technologies, including the steps that have been taken to protect individuals from harm.

The EU has taken this step because national liability rules are deemed ill-equipped to handle claims for harms caused by AI. The EU hopes this will boost consumer confidence in AI-enabled products, aided by a significant change to current legislation: the directive places the burden of proof on the manufacturer, rather than on the consumer, as is currently the case in many Member States.

AI liability in the UK

The UK government has published its White Paper on AI regulation, describing it as a 'pro-innovation approach'. The paper recognises 'the need to consider which actors should be responsible and liable for complying with the principles' set out in the paper. However, the paper goes on to say that it is 'too soon to make decisions about liability as it is a complex, rapidly evolving issue'. Therefore, at this stage, no guidance is provided on the position the UK will take on determining liability for harms caused by AI.

The ICO has responded to the White Paper, confirming that it supports the government's innovation-friendly approach. The ICO notes that it has been left to regulators to produce guidance and advice on where responsibility and liability will fall, and it therefore encourages the government to work through regulators to deliver its ambitions where possible. The ICO goes on to state that clarification of the respective roles of government and regulators in issuing guidance would be welcome.

WBD maintains an AI Roadmap which you can use to help navigate AI regulation and the current legal position.

Guidance for businesses on AI liability

  • Conduct a review documenting the likelihood and nature of potential harm related to the AI; this will make the following protective steps easier to undertake.
  • Provide disclaimers and warnings about the limitations and consequences of improper use of AI to contractual parties and third parties.
  • Budget for and obtain insurance for AI-related claims in tort as well as in contract.
  • Review and redraft standard contracts and client/third party notices to exclude claims for losses flowing from negligence (noting that liability for death or personal injury caused by negligence cannot generally be excluded in the UK).
  • Exclude or cap all forms of liability wherever legally possible. WBD's commercial team can advise how to draft contracts that are more likely to withstand court scrutiny.
  • Expressly exclude a duty of care from contracts and terms of use wherever possible.
  • Document how you are using AI technologies within your organisation, including how decisions are made and what implications there may be for individuals; an illustrative sketch of such a record follows this list.
  • Assess the range of potential misuse of the AI by clients, end users and third parties. Consider whether there are any practical steps that can be taken to guard against any such misuse – e.g., provide user manuals highlighting a narrow 'correct' use and the dangers of misuse and of reliance on poor data.
  • Obtain updated legal advice relating to discrimination law and data protection law as these are areas from which harm and loss are likely to flow.
  • Train staff and contractors to use the AI and its output in line with a strict policy. Only allow competent personnel to operate the AI.
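
By way of illustration only, the sketch below shows one possible way to keep a structured record of AI-assisted decisions, in line with the documentation point above. It is a minimal Python example; all names, fields and the log format are hypothetical assumptions rather than anything prescribed by the AI Liability Directive or the White Paper, and what you should record will depend on your organisation and on legal advice.

```python
# Illustrative only: a minimal, hypothetical audit record for AI-assisted
# decisions. Field names are assumptions, not a prescribed standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    system_name: str       # which AI system produced the output
    system_version: str    # version, to trace changes over time
    purpose: str           # the task the AI was used for
    input_summary: str     # what went in (avoid storing raw personal data)
    output_summary: str    # what came out and how it was used
    human_reviewer: str    # who checked the output before it was acted on
    safeguards: list[str]  # steps taken to protect individuals from harm

def log_decision(record: AIDecisionRecord, path: str = "ai_decisions.jsonl") -> None:
    """Append the record, with a UTC timestamp, to an append-only log file."""
    entry = {"timestamp": datetime.now(timezone.utc).isoformat(), **asdict(record)}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage:
log_decision(AIDecisionRecord(
    system_name="credit-triage-model",
    system_version="2.1",
    purpose="Initial screening of loan applications",
    input_summary="Application ID 1234 (personal data held separately)",
    output_summary="Flagged for manual review; no automated refusal issued",
    human_reviewer="j.smith",
    safeguards=["human review of all adverse outputs", "quarterly bias audit"],
))
```

An append-only log of this kind is one way to evidence, after the fact, how the AI was used and what safeguards were applied – the sort of material a defendant may need in order to rebut a presumption of causality.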

Find out more about AI on Womble Bond Dickinson’s dedicated re:connect hub.