The power of AI is creating new and evolving cyber risks. This article explores those risks, the legal landscape and consequences for organisations in the UK and US that fail to implement sufficient cyber protections, and the steps that legal departments should be taking to protect their organisations.

AI cyber risk – both an internal and external challenge

From an internal cyber defence perspective, AI as a technology is not substantially more risky than other new and novel IT systems. The internal risk stems from the rapid adoption of new technologies creating new attack avenues. AI is fuelling a boom in new AI companies and the integration of AI into existing platforms. The speed of this change means the new technology may be less well tested and, accordingly, comes with heightened risks of undetected or unpatched vulnerabilities. 

It also introduces change risk as new technology is onboarded. This often requires bringing new systems onto an existing IT estate or new connections between existing IT and new cloud services. The exploitation of vulnerabilities created by IT change is a recurring theme in cyber-attacks, and in subsequent penalties handed down by regulators. In the course of making changes, a patched vulnerability may regress to being unpatched, remote access might be temporarily allowed, or an unsecured workaround might be put in place to make the new AI system work as a short-term measure, and then in the urgency to implement the change, those missing patches, open permissions and workarounds go unfixed. Similarly, setting up new access permissions between systems may unintentionally allow attackers to move between systems and domains, increasing the scope of a breach and the scale of data improperly accessed or exfiltrated.

Outside an organisation, AI is being weaponised by threat actors to perpetrate more sophisticated attacks. AI can be used to automate attacks, allowing the work of one hacker to be replicated many times over the by AI. AI can also scan firewalls and networks to quickly identify vulnerabilities and useful data, and can also be used to launch multiple attacks in parallel, thereby maximising the damage caused by an otherwise unnoticed, latent vulnerability, giving targeted companies no chance to fix their defences. There have also been reports of AI-powered polymorphic malware that re-writes and disguises its own software code to avoid detection by anti-virus and endpoint detection tools.

Phishing emails can be customised by AI to sound remarkably realistic, getting rid of the tell-tale typos and strange words that used to give away a phishing email's true malicious purpose. Phishing emails can also be customised at scale so that hundreds or thousands of emails can be written to sound unique to the fake sender and victim recipient, increasing the chance of someone clicking the malicious link or attachment that lets the attacker in. Phishing emails have also evolved into vishing, where AI can deepfake a person’s voice or image to generate real-time conversations with someone over a phone or even by video call, with the aim of tricking that person into granting the attacker access to a system or re-directing a payment to a fraudster’s bank account.

There are early signs that hackers are becoming more sophisticated with their use of stolen data. Historically, stolen data was used to extort a victim into paying a ransom under threat of their data being published. The content of the data was less important than the volume, and attackers did not care much what was in the stolen data - so long as there was enough data, it was probably going to hold some sensitive information that the victim was willing to pay for to get back. AI can now be used to scan that data to extract information that the attacker can then use for more targeted scams – like personal blackmail of senior employees or to build up a profile of a person's communication style that can then be used in a phishing / vishing attack.

Finally, there are cyber threats to the secure functioning of AI itself. Website or database injection attacks have been a known security risk for decades – this is where an attacker asks a query of a website or database (often sat behind a website) but does so using code that tricks the website or database into returning sensitive information or allowing deeper access to the system. With AI, this morphs into malicious prompting, with the prompt designed to trick the AI into revealing sensitive information. Imagine an AI chatbot on a retail website that was designed to help a customer know when their purchase is going to be delivered. This might require the chatbot to look up that information in a back-end customer database, but a vulnerability in the AI might cause the bot to provide the threat actor with sensitive information about the customer, like payment or account details. Another concern is AI poisoning, where the threat actor injects malicious code or data into the AI which then poisons the output from the AI to make it provide deliberately false information or biased / offensive comments to users.

UK v US regulation and guidance

The UK National Cyber Security Centre (NCSC) has concluded that AI creates “a realistic possibility of critical systems becoming more vulnerable to advanced threat actors by 2027. Keeping pace with 'frontier AI' capabilities will almost certainly be critical to cyber resilience for the decade to come.” This is one of a series of similar statements made by the NCSC over the last few years about AI cyber risk. UK organisations are on notice of these risks and the NCSC has provided guidance on how to mitigate them.

There is no specific UK law that mandates organisations protect against AI cyber risks, but the growing body of guidance from the NCSC and other regulatory bodies means that no organisation can claim ignorance of them, and so they now fall squarely within the remit of more general cybersecurity obligations. The primary obligations in the UK are under the General Data Protection Regulation which requires appropriate technical and organisational measures to secure personal data, and the Network and Information Security Regulations 2018 (“NIS”) which requires critical infrastructure providers to be resilient to cyber-attacks – with the latter due to be revised and updated shortly into a much tougher version with more prescriptive requirements and larger penalties. Beyond this, there are a growing number of regulations that require minimum levels of cybersecurity in digital and other products (for example, see here for a quick guide to the PSTI Act), and most UK sector-level regulators now have some form of cybersecurity requirements within their rules. 

Senior managers and directors in the UK are also being held personally accountable for cyber failures. As an example, the new EU Network and Information Security Regulations II (the revised EU version of NIS discussed above) includes mandatory training for senior managers, a requirement that management approve a company's cybersecurity measures, and a power for regulators to remove directors from office if they fail in their cybersecurity duties – all points that might be replicated in the new UK Cyber Resilience Bill expected imminently. In a similar vein, in 2023 the Chief Information Officer at TSB Bank was personally fined for failing to adequately supervision the migration of an IT system that caused an extended outage at the bank – although not caused by a cyber-attack, the principle of personal accountability for IT failures in the UK financial sector has now been established.

The US, by contrast, does not have a generally applicable national law mandating that organisations protect against cyber risks, nor does it have a national law specifically regulating AI cyber risks. Instead, cybersecurity risks have largely been addressed through a mix of private litigation and sector-specific regulations and enforcement—and organisations should expect AI to follow a similar trajectory.

Private cybersecurity-related litigation in the US is often initiated after a company suffers a breach of consumer data, with either consumers or shareholders of a publicly traded company alleging that the company failed to adequately safeguard consumer data. Government regulators, on the other hand, have enacted many sector-specific cybersecurity regulations. Examples include the Securities and Exchange Commission’s (SEC) cybersecurity reporting regulations for publicly traded companies, the Federal Communication Commission’s regulations governing the telecom industry’s cybersecurity practices, and the Department of Defense’s amendments to the Defense Federal Acquisition Regulation Supplement, which now requires government contractors to comply with the requirements of the Cybersecurity Maturity Model Certification.

Within this sector-specific approach, US regulators have signalled a willingness to specifically target and hold liable individual company executives for their roles in cyber breaches. The Federal Trade Commission (FTC) led this charge starting in 2023 when it settled an investigation into a company and the company’s CEO after the company suffered a cybersecurity incident. According to the FTC, the CEO violated the law by failing to implement reasonable security practices. Later that same year, the SEC similarly targeted an individual executive by filing suit against a company’s Chief Information Security Officer (CISO) of a company after said company experienced a highly publicized cybersecurity incident. According to the SEC, the company’s CISO made false statements regarding the company’s cybersecurity practices.

While similar litigation and enforcement have yet to occur in the AI context, regulators and private litigants are likely to take a similar approach to cybersecurity incidents that involve AI. And while it is unlikely that a national, broadly applicable AI law will pass in the short-term, several states, including large jurisdictions like California, New York and Texas, have passed laws regulating AI development and deployment. These laws include the Colorado Artificial Intelligence Act, the Texas Responsible Artificial Intelligence Governance Act, and the newly enacted California Transparency in Frontier Artificial Intelligence Act. While these state laws take a comprehensive approach to regulating AI, they also require companies to take specific actions in the case of an incident. For example, the California Transparency in Frontier Artificial Intelligence Act requires that an AI developer report if its AI was used to commit a cyberattack. Ultimately, with more AI laws being passed at the state level, companies should expect increased enforcement activity by state regulators.

As the threat of cyber-attacks increases with the use of AI, so do the legal liabilities on both sides of the Atlantic. The UK approach is focused on national regulation, whilst the US approach is more piecemeal regulation, but with a greater threat of private litigation claims – both however lead to the same outcome: more legal risk for companies. In both jurisdictions the legal risk is expanding from the company itself into the boardroom, with growing personal liability risks for management teams who fail to combat cyber risks.

The role of in-house Counsel in managing cyber risk

  1. Identify cyber legal requirements: The first step for in-house Counsel is to analyse the patchwork of different cyber regulations and liabilities to work out which ones apply to their organisation. This will vary by country, sector and business activity – some will be omnipresent, such as protecting personal information, whereas others may only apply to certain products or services offered by the organisation.
  2. Educating internal teams about legal obligations: In-house Counsel need to ensure that IT departments and information security teams understand that their decisions have legal consequences. For most organisations there is a baseline level of cyber security that is legally required and putting in the money and headcount to deliver that baseline is not optional. Cyber security professionals face a constant squeeze between the cost of managing ever changing threats and budget challenges, and in-house Counsel have a role in explaining to finance managers that adequate resources must be made available for cyber defences as they are not just optional commercial decisions but often a mandatory legal requirement.
  3. Board level governance: Cyber laws and litigation risk increasingly places personal accountability on the C-suite, and it is the role of in-house Counsel to ensure the managing board's obligations are discharged. This will include ensuring that the board has adequate training on AI and cyber security, has in place a governance process where it spends sufficient time supervising AI and cyber security practices and, where required, making sure its decisions are fully documented so to show the board has acted properly. In higher risk industries, it is becoming increasingly common to have a separate Chief Information Security Officer who reports directly to the CEO or board, and even a non-executive board member with deep cyber security expertise.
  4. AI and Cyber policies: The prevalent use of AI and its easy of access means that employees have a number of ways of bringing AI onto an IT estate or using it in ways that are not known to their employer. The organisation should have an AI policy that makes clear when and how AI can be used, and in particular makes sure that AI technologies go through a cyber security review – which in turn should tie into an information security policy. Although these policies are likely to be primarily owned within the IT function, in-house Counsel should ensure they are in place, up-to-date, and address as any legal requirements. Ultimately, it will be the legal team having to defend these policies to a regulator or in court in the event of a cyber incident.
  5. Vendor contracts: Many AI solutions will be acquired from external vendors and often involve the AI processing happening in a third-party data centre. In-house Counsel need to ensure that any vendor contracts place rigorous obligations on vendors to ensure the security of their service, that they follow recognised information security standards (such as ISO27001 or NIST CFS), and provide prompt notification of any security risks or incidents. For fast-evolving AI systems, it may also be important that the vendor provides regular (at least annual) assurances that security measures are being implemented in practice as changes to the AI product could introduce new risks.
  6. Liability mitigation: Perfect cyber security is impossible, and successful attacks will happen to every organisation eventually. In-house Counsel need to ensure that a holistic view is taken of how the financial impact of a cyber-attack will be managed. This begins with quantifying the possible scale of legal liabilities that might arise, and then assessing how to mitigate that liability through a combination of vendor compensation, liability protections against customer claims, self-funding (absorbing losses) and insurance. Where insurance is required, in-house Counsel should make sure the scope and amount of cover is adequate, which may also extend to ensuring adequate D&O cover for senior management.

Our team

Womble Bond Dickinson's transatlantic team of Digital lawyers understands how to deliver Artificial Intelligence services across a global footprint. This article in one in a series comparing the US, UK and EU legal regimes around Artificial Intelligence – find them all on our AI hub here.

This article is for general information only and reflects the position at the date of publication. It does not constitute legal advice.