In what has been a strikingly busy time for AI headlines around the world, on Friday 30 September Tesla unveiled its new prototype robot "Optimus" in the US. Closer to home, on Wednesday 28 September the European Commission unveiled a draft Directive dealing with AI liability and compensation, following hot on the heels (from a regulatory perspective at least) of the publication of the draft EU AI Act in April 2021, which set up a number of different categories of AI (from use cases which should be prohibited, through those which are high risk and will be heavily regulated, to everything else).

In unveiling "Optimus", Elon Musk referred to it as a "fundamental transformation of civilisation as we know it" as part of a venture which he has claimed could one day be "more significant than the vehicle business". Musk has also in the past referred to AI as a "fundamental risk to the existence of human civilisation" and, while saying that the technology should be developed with safety in mind, has also called for an AI regulator to police developments in this area.

While Musk is perhaps no stranger to hyperbole, these claims bear a striking resemblance to Stuart Russell's warning in 'Human Compatible: Artificial Intelligence and the Problem of Control' (published in 2019, widely regarded as one of the most important books on AI of the past few years – and a very good read, by the way) that success in the field of superintelligent AI "would be the biggest event in human history… and perhaps also the last event in human history". It also follows a whole host of other warnings and recommendations about the need for regulation from scientists and academics involved in AI research and development.

Meanwhile, back in the UK, the UK Government published its own proposals on the future regulation of AI on 18 July, taking a notably less centralised approach than the EU, with a view to publishing a White Paper at some point this Autumn in which it will table more detailed proposals for consultation.

Then the Secretary of State for Digital, Culture, Media and Sport (Michelle Donelan) announced at the Conservative party conference a plan to replace the GDPR – part of a patchwork of existing regulations covering the use of AI (such as protections around being subject to automated decision-making, and access to information about how decisions have been made) – a plan which has, to date, been referenced in the context of the UK taking a different path from the EU.

So different, in fact, that the UK Government is proposing a principles-based approach without central regulation: the principles would instead be interpreted by the regulators with oversight of different sectors (such as the Financial Conduct Authority, the Information Commissioner's Office, the Competition and Markets Authority and Ofcom), who themselves cooperate under the aegis of the UK's Digital Regulation Cooperation Forum.

Crossing back over the Atlantic, the Biden administration, in contrast, shows signs of taking a similar regulation-based approach in the US to that being taken in the EU. See the commentary from our colleagues in WBD US for more here.

So what does this mean for the use and regulation of a technology which has become such a part of everyday life, seemingly without so much as a second glance from its users at how spectacularly smart it already is?

For those of us who have worked in technology for decades, there are echoes of the slightly chaotic birth of cloud computing and the lack of interoperability between technologies (which, of course, still exists to some extent despite the prevalence of published APIs for feeding information between systems), not to mention the regulations involved. So this seems like an opportune moment to recap how the land lies ahead of the further regulatory developments expected both here in the UK and in the EU this Autumn.

The high-level summary below illustrates the different approaches in play:

Regulatory approach

UK

  • Policy Paper, "Establishing a pro-innovation approach to regulating AI", published 18 July 2022.

EU

  • Artificial Intelligence Act (originally proposed in April 2021)
  • AI Liability Directive (proposal adopted 28 September 2022)
  • Revised Product Liability Directive (proposal adopted 28 September 2022)

US

A range of more gradual developments has been gathering pace across multiple US bodies, including initiatives by:

  • the White House Office of Science and Technology Policy (which is leading the development of an AI Bill of Rights)
  • the Federal Trade Commission (which has made it clear that it views AI data misuse as falling within the scope of its mandate)
  • the Food and Drug Administration and the Department of Transportation (which are both continuing to work to incorporate AI into their regulatory regimes)
  • federal financial services regulators (which have collectively launched investigations into the use of AI and its impact on core regulatory principles)
  • the National Institute of Standards and Technology (which is in the process of developing an AI Risk Management Framework).

Key principles

UK

The UK Government is not intending to define what AI is, instead proposing a number of core principles which would require developers and users to:

  • ensure that AI is used safely
  • ensure that AI is technically secure and functions as designed
  • make sure that AI is appropriately transparent and explainable
  • consider fairness
  • identify a legal person to be responsible for AI
  • clarify routes to redress or contestability.

Each of these principles is to be interpreted, and built upon through guidance, by the relevant sector regulators.

EU

AI Act

The first regime of its kind and, like the GDPR, it could become a global precedent.

In contrast to the UK Government's approach, it does seek to define what AI is.

It regulates the use of AI in the hands of creators and resellers as well as users, imposing different rules on the following categories of AI:

  • AI systems creating unacceptable risk are banned
  • those involving high risk (e.g. use in critical infrastructure, education, product safety, employment rights, law enforcement and the administration of justice, migration and border control) will be subject to strict obligations before they can be put on the market
  • those involving limited risk are subject to specific transparency requirements
  • those involving minimal or low risk can be freely used.

AI Liability Directive

  • Designed to work hand-in-hand with the AI Act, it is intended to put in place a framework addressing responsibility (and the availability of compensation) for harm caused by AI.
  • It seeks to introduce a clearer and less problematic approach than addressing liability for harm caused by AI through the lens of more traditional legal concepts (e.g. the existence of a duty of care, causation, and whether vicarious liability is possible with AI).

Revised Product Liability Directive

  • Intended to bring the directive up to speed with the digital age.
  • Introduces strict liability (in contrast to the AI Liability Directive, which requires proof that the defendant has breached the requirements of the AI Act).

US

Taken collectively, the list of policy interventions and the current direction of travel is starting to bring the US regulatory approach into closer alignment with that of the EU.

Next steps

UK

  • Fuller White Paper expected in Autumn 2022 – although query whether these timescales will be impacted by the current turbulence and change of leadership in UK Government.
  • Potential replacement of the UK GDPR also mooted by the Secretary of State for Digital, Culture, Media and Sport at the Conservative party conference in October 2022.

EU

  • AI Act expected to become law over the course of 2023/2024.
  • AI Liability Directive adopted by the EC on 28 September 2022 and intended to work alongside the AI Act when it comes into force.

US

  • AI Bill of Rights expected to continue to be subject to development and consultation in 2022/2023, alongside the broader developments referred to above.
  • Due to the amount of time which may be involved in establishing the EU's new framework (if previous regimes are used as a benchmark), some commentators have observed that the US may still find itself leading the way in practical areas of AI regulation.

With first mover advantage seen as important in shaping future regulations (whether you prefer to call that the Brussels or the California effect) – not to mention the global nature of the technology and of those developing it – the EU and US seem to be converging towards a regulation-based approach with a view to setting the standards in this space. The UK Government, in contrast – led by a policy objective of retaining flexibility in a proportionate way – seems to be heading down a more fluid path based on overarching principles which can then be interpreted on a case-by-case basis.

It's impossible to tell which approach is right, or whether either will be able to address the biggest questions around the potential impact which AI could ultimately have on us all. What is clear, however, is that we will need to keep track of how each part of the jigsaw is emerging, as it's impossible to think that technology of this kind can be constrained by geographic boundaries.

It's also not hard to envisage that, if defined regulation emerges at an EU and US level, a typical compliance approach might be to take the higher gold standard as the rules and work back from there (much as seems to have been the case with the EU GDPR, which has been used as a starting point by many other jurisdictions as well as by businesses). In practice, if those more prescriptive rules also satisfy whatever principles-based approach emerges in the UK, businesses might find it easier to work to the EU standards instead.

Whatever happens, in practice those developing AI technology, as well as those using and relying on it (meaning pretty much all of us), will for the time being need to find a way of meeting a patchwork of different emerging regulations until a more settled regulatory position emerges. That is, of course, unless the emergence of superhuman AI takes that out of our hands first.

More focussed updates will follow, but in the meantime, please contact any member of the Digital team here at Womble Bond Dickinson if you would like to know more about the impact of AI regulation on your business.

This article is for general information only and reflects the position at the date of publication. It does not constitute legal advice.