
In the age of machine learning and artificial intelligence (AI), employees are being encouraged, and even incentivised, to make use of AI to enhance efficiency, boost productivity, and drive innovation within their roles.
However, not all personnel will necessarily understand the risks associated with using AI – for example, when it comes to using or sharing sensitive and secret business information.
Alongside policies and training for employees on how to deal with confidential information and trade secrets, companies need to give careful consideration to how business-critical information may be used by any third parties with whom they share such information under confidentiality or non-disclosure agreements.
If you enter into a confidentiality agreement with another business:
- Would you be happy for them to feed your confidential information into an AI system?
- Does your confidentiality agreement provide the safeguards you need in the age of AI?
Challenges posed by AI
AI systems have the capability to process vast amounts of data to generate insights and predictions.
However, using confidential information in an AI system may create an additional risk of leakage. When confidential information is used to train (or simply put into) AI systems – especially those accessible to the public or "open" – the information could be exposed to unauthorised parties.
Some AI systems are "closed", meaning they operate within a company's internal IT infrastructure, keeping data and models under strict control and preventing external access to the information shared within the system. However, some of the most popular tools – such as ChatGPT for individuals – are examples of "open" AI, where the information users submit can then train the AI system (unless they opt out), risking that information being made available to other users of the AI system in the future.
It seems obvious that confidential information shouldn't be put into open AI systems, but not all employees will necessarily understand the difference, or indeed make a conscious choice.
Employees using AI tools in their day-to-day operations might intentionally use AI to help them interrogate confidential information which has been shared by a third party. However, they may also inadvertently feed confidential information into an AI assistant – for example, if they ask that assistant to create a summary of emails that landed in their inbox during their annual leave.
Even where an AI system is closed, and such information might not be readily available to the public, given the way that certain AI systems can "learn" from their input and continually improve, it may be difficult to comply with contractual requirements covering third-party confidential information – such as return or destruction if discussions end – if the information has been put into the recipient's AI system.
The potential for such easy re-use of valuable information highlights the importance of dealing with AI when negotiating confidentiality terms with a recipient.
Approaching your agreements
Generally, well-drafted agreements will stipulate what information is to be treated as confidential, what duties the recipient has in terms of protecting the confidentiality of that information, for what purposes the information may be used, how long the confidentiality obligations will last, and what needs to happen to the information if the need for sharing or use has concluded.
Confidentiality agreements may be one-way (where confidential information flows only in one direction, from discloser to recipient) or mutual (where each party is sharing and receiving confidential information). One-way agreements are "easier" in that each party can push for its optimal position depending on whether it is receiving or disclosing information; mutual agreements tend to be trickier to negotiate, though they can result in a "fairer" agreement where both parties have compromised to ensure they have acceptable protection but can comply with all the terms.
One-way pro-discloser agreements – what confidential information will your business disclose?
You need to think about what confidential information your business will share under the confidentiality agreement and the rationale for the sharing. The type of confidential information will likely impact whether you need to include specific language in your confidentiality agreement about use of AI.
If the confidentiality agreement covers information which is not easy to categorise (eg both low and high value, product designs and compilations of data, technical and commercial information), having restrictions on use of AI may be prudent in some circumstances – but may not be practical if you are engaging with the third party because of their data-processing (or other AI-driven) capabilities.
Templates may provide a useful starting point, but should always be tailored to the specific circumstances which require the confidentiality agreement.
One-way pro-recipient agreements – what confidential information will your business receive and what processes does your business have in place to protect it?
While disclosing parties may be tempted to include a blanket prohibition on the use of AI systems, if your business is the recipient of confidential information, how would you enforce that prohibition throughout your business – and would it be commercially viable or effective (based on your business's usual mode of operation) to agree to such a ban?
If you deliver your products or services to a customer making significant use of AI systems, or can deliver at a certain price or within defined timescales because of AI-enabled efficiencies, you may need to think twice before agreeing to this type of restriction.
Where you can agree a restriction – how will you communicate this obligation to your employees (and other recipients) and make the obligation binding on them? If AI is embedded into your productivity and day-to-day activities, how will you monitor compliance and prevent inadvertent breaches? What should the consequences be in the event of a breach?
Mutual agreements – best of both worlds?
If you're agreeing mutual confidentiality obligations, careful consideration will need to be given to what (realistically) you will need permission to do with the information you receive, but also what protections you need to have in place for your own information.
As a reminder, mutual doesn't necessarily mean identical – and, in some instances, there may be good reason to push for asymmetric terms (even if this doesn't always seem "fair" to those involved).
Options for addressing AI use in your confidentiality agreement
Having understood the context, you can then assess whether you need to include one of the following options in your agreement:
- Option 1 – no AI-specific provisions needed – The typical terms of a confidentiality agreement should, depending on the exact wording, prevent confidential information being used in open AI systems, because by their nature such systems involve the transfer of information, on a non-confidential basis, to the AI vendor and potentially to third parties. The difficulty lies in determining (probably after the fact, once a dispute has arisen) the confidentiality level of the AI system that was used. This could be difficult given the fast-evolving nature of the technology, some vendors being unclear about how their systems work, and the wide variety of AI vendor terms offering different levels of confidentiality.
Also, although this approach might offer some legal protection, in practice the confidentiality of the information may have already been lost. It is then likely to be very difficult, perhaps impossible, to have the confidential information removed from the AI system.
Where the recipient processes confidential information in a closed AI system within their organisation, arguably the confidential information would remain confidential to the business – but recipients may be in breach if they cannot effectively comply with "return or destroy" obligations contained within the confidentiality agreement, or if "use" is only permitted for a specified period of time.
- Option 2 – use of AI permitted with restrictions – alternatively, to remove any doubt, your agreement could include a clause permitting the use of AI systems, subject to certain restrictions – for example:
- the confidential information does not train the AI system, and
- the output of the AI system is only accessible to persons authorised to access and use the confidential information.
The clause can also contain a number of optional restrictions and obligations to reflect your business needs, such as carving out certain categories of confidential information. You may also wish to impose record keeping obligations on your counterparty.
- Option 3 – blanket prohibition on use of AI – where the confidential information to be disclosed is especially sensitive or valuable, it will be beneficial to include a blanket prohibition on the use of the confidential information within AI systems. This will likely be preferable to Option 1 where the discloser is aware that the recipient regularly employs AI within its organisation and cannot take the risk that such information is embedded into a third party (open or closed) AI system.
For all of these options, your employees (and other recipients) need to know what they can and cannot do with confidential information and AI systems. It is worth noting in this context that the EU AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their workforces.
Guidance and training for users
If you choose to include options in your confidentiality agreements, it would be good practice to include guidance so that users can evaluate which option to select – particularly if your confidentiality agreements are not always drafted with legal input.
Users of confidentiality agreement templates will need to be able to carry out an evaluation exercise around the topics above – what confidential information will be disclosed, how will received information be used, and can a prohibition on the use of AI practically be agreed by the business?
It may be that new processes or mechanisms are needed to monitor compliance with the obligations of the confidentiality agreement.
Lawyers regulated by the Solicitors Regulation Authority (SRA) must also be aware of the SRA's warning notice on the use of non-disclosure agreements. The warning notice sets out what those dealing with confidentiality agreements must do to comply with their regulatory obligations: confidentiality agreements must not be used routinely, and those using them must not become over-reliant on templates. Templates must be tailored to the specific case for which they are being used, and adequate support and training is essential.
Do you want to get advice about confidentiality agreements or confidentiality agreement templates?
Our team is ready to assist if you want to update your existing confidentiality agreement templates or create new templates which address the use of AI. Simply reach out to Rose Smalley-Gordon or Andrew Jerrard.
Operationalising AI
Businesses across the UK are navigating how they integrate AI into their operations. We've conducted a landmark survey to gather insights from CEOs, CTOs, and business decision-makers – to understand how AI is being operationalised across industries.
Through a UK-wide survey and an expert focus panel, we've developed a report that explores the different approaches to, and maturity of, AI adoption at a strategic and operational level, to understand the challenges and opportunities across sectors. From CEOs shaping strategic direction to CTOs overseeing tech strategy and implementation, this report considers where priorities align, where they differ across roles, and what challenges this could present as we face the fastest advancement in tech since the dot-com boom.
Looking at the view from the top, 67% of CTOs across all sectors are using new, enhanced AI data-driven decision-making to create business opportunities or craft their competitive advantage.
Get more insights like this in our report: 'The view from the top: how business leaders are operationalising AI'.
This article is for general information only and reflects the position at the date of publication. It does not constitute legal advice.