On February 6, 2024, the USPTO weighed in with guidance for practitioners using—or considering using—AI in preparing submissions to the USPTO. In essence, Director Kathi Vidal has reminded practitioners that they must sign their submissions, that the signature constitutes a certification of the submission's accuracy, and that signers are therefore responsible for the accuracy of those submissions, including anything AI-generated. This guidance precedes formal Federal Register guidance expected to be published at the end of February.
This clarification explained the USPTO’s position on the use of AI (including generative AI content) in legal proceedings and directed the Patent Trial and Appeal Board (PTAB) and Trademark Trial and Appeal Board (TTAB) to hold parties responsible for the misuse of AI in legal proceedings.
Director Vidal offered her comments in light of Chief Justice John Roberts's discussion of AI in the Year-End Report on the Federal Judiciary, released on December 31, 2023. The inevitability of AI in USPTO practice is also bolstered by President Biden's October 30 Executive Order (EO 14110). While the forthcoming Federal Register guidance will address "inventorship and the use of AI… in the inventive process," per EO 14110, this week's guidance was directed to the PTAB and TTAB, which are expected to "apply their existing skills and relevant existing rules to the challenges the Chief Justice identified."
The Chief Justice recognized the potential uses of AI, as well as its potential pitfalls, referencing recent court filings generated with AI that included "citations to non-existent cases" (referred to as "AI hallucinations"). Director Vidal noted that, under the current USPTO Rules of Professional Conduct, any submission to the USPTO under signature must be reviewed for accuracy by the person presenting that submission. Such review includes, for example, recognizing errors or omissions, as well as verifying factual and legal accuracy. As reiterated by Director Vidal, these obligations apply not only to AI-generated documents but to any document submitted to the USPTO.
She wrote that “practitioners are also prohibited from asserting or controverting an issue in a proceeding unless there is a basis in law or fact for doing so,” another potential problem with the submission of unchecked AI-generated documents. “Simply assuming the accuracy of an AI tool,” writes Director Vidal, “is not a reasonable inquiry.”
Director Vidal further noted the potential consequences of including such inaccuracies or errors. For example, the submission could be struck, or a practitioner could be precluded from submitting papers. More severe consequences also exist, including termination of the proceeding before the Office and, potentially, criminal liability and disciplinary action.
So what does this mean, and how could it affect you? These clarifications should not significantly change current practice. Any AI-generated document should always be reviewed for accuracy, in any context, not just legal documents or submissions. Further, any document, whether AI-generated or not, should be thoroughly reviewed with adequate time and care before submission to the USPTO, despite the time pressures practitioners typically face. Although AI, particularly generative AI, can be useful for quickly solving problems and generating draft documents, the contents of those drafts should never be taken as accurate until verified. Many examples of "AI hallucinations"—AI fabricating facts, quotes, and cases—have now been reported.