On October 30, 2023, President Biden issued Executive Order 14110, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.1 It seeks to affirm America’s global leadership in AI and to centralize the federal government’s efforts to simultaneously regulate and leverage AI.

The Executive Order (EO) is ambitious in its aspirations and dense in its details. It goes far beyond the voluntary commitments previously made by leading technology companies and the Administration’s own earlier executive orders2. EO 14110 serves as a domestic counterpart to other recent global policy initiatives, such as the UK AI Safety Summit and the EU AI Act3, signaling that the White House intends for the United States to establish itself as the world leader in AI safety, security, and innovation.

Here are the key takeaways:

The EO is more than a mission statement. 

The EO is over 100 pages in length. It directs agencies to take action now to ensure that its directives are implemented rather than ignored. Specifically,

  • It tasks almost every federal agency with more than 140 requirements (appointing and empowering chief AI officers, adopting AI governance policies, conducting studies, issuing regulations, and more).
     
  • It acknowledges that agencies must accelerate the onboarding of the AI talent required to implement the EO’s directives.
     
  • The timetable is aggressive: almost 20% of these tasks are to be completed within 90 days, and over 90% within a year. Most of these deadlines are imposed in the portion of the EO addressing the national security risks posed by foundation models.
     
  • By invoking the broad enforcement tools of the Defense Production Act, the EO puts the private sector on notice that compliance is a matter of national security and defense preparedness.
     
  • Many agencies are directed to adapt existing regulatory frameworks to address AI-related issues (e.g., SEC, DOE, NIST, Treasury).
     
  • The EO’s precise definitions of covered agencies and AI terminology are meant to convey intentionality and urgency by eliminating at least some ambiguity at the outset.

But there are three looming, unmentioned caveats: 

  1. With the 2024 election in sight, the EO can be modified or rescinded with the stroke of a pen.4
     
  2. Only comprehensive legislation from Congress can make permanent the long-term protections and innovation initiatives called for in the EO.
     
  3. The EO walks a fine line: It lays out a framework for a comprehensive national AI focus, but also acknowledges the dearth of agency-level talent and expertise needed to undertake these tasks.

Ensuring future foundation models are used safely and securely is in the national interest. 

The majority of the EO is devoted to mitigating the national security and critical infrastructure risks presented by existing and potential AI foundation models.

The EO requires private actors who are developing, or intend to develop, potential AI foundation models (as well as “computing clusters”) to make significant, ongoing disclosures to the federal government.

The required disclosures include detailing:

  • All physical and cybersecurity protections related to training of the models.
     
  • The ownership and possession of the model weights, and the measures taken to protect them.
     
  • The methodology and results of all “red-team testing” used to find flaws and vulnerabilities in the model.

Less clear is how, and whether, such information – which by its nature is highly sensitive and a target for exfiltration – can truly remain protected while in government custody.

The path to societal equity in AI is less clear.

The EO also mandates that agencies take steps to ensure that responsible AI development protects against bias and discrimination in hiring, healthcare, and housing; implements consumer finance protections; and guards against the unlawful collection and use of personal data. It offers only a glancing nod to AI’s potential disruption of the nation’s workforce.

How and when these protections will be implemented is less clear. As one example, in contrast to the foundation model national security provisions, fewer than half of the EO directives relating to equity and discrimination have deadlines.

Agencies are required to ensure that citizen data in their possession is protected – even as they explore the uses and benefits of generative AI.

The EO makes clear that federal agencies should leverage the efficiencies of generative AI to serve the public interest. Agencies are to be catalysts, not bystanders, in their adoption of appropriate AI tools.

But agencies also have a corresponding obligation to use AI responsibly. For that reason, one set of directives is aimed at ensuring that federal agencies themselves implement adequate measures to protect the private data they have collected and, in general, take into account the same privacy and security considerations required of the private sector.

The EO identifies a range of privacy, security, and ethical considerations directed at organizations using automated decision-making tools for consumer purposes, and mandates that agencies issue guidance or regulations addressing:

  • privacy risks (the use of AI and big data sets makes it increasingly difficult for data to remain truly aggregated or de-identified, and can result in the unlawful collection of personal data);
     
  • security risks and potential vulnerabilities for consumers using AI products; and
     
  • ethical factors to prevent discriminatory outcomes or bias (whether intended or not), especially in the areas of healthcare, housing, employment, and consumer finance.

Will the USPTO protect AI innovation?  

Among a number of aspirational initiatives specified under the heading ‘Promoting innovation’, the EO calls upon the USPTO to publish guidance regarding: (a) the use of AI in the inventive process (e.g., inventorship), and (b) patent eligibility as it applies to innovation in AI.

As to inventorship, both the courts and the USPTO have signaled that AI cannot be an inventor (see Thaler v. Vidal, No. 2021-2347 (Fed. Cir. 2022); https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf). The EO appears to require the USPTO to provide additional guidance as to whether AI or a natural person should take credit for an AI-related invention.

Regarding patent eligibility, there is currently no bar to patenting AI-based inventions. There is, however, a more general requirement that patents be directed to patent-eligible subject matter: abstract ideas and mathematical formulas (including algorithms) have been deemed patent ineligible, which is why most inventors do not attempt to patent AI models per se. This has been a topic of hot debate since at least 2016. The EO now directs the USPTO to clarify patent eligibility in view of AI-based solutions. As with previous USPTO guidance in this arena, the new guidance may include examples of where AI-based inventions are patent eligible and where they are not.

Language in the EO suggests the Administration may be leaning in a more patent-friendly direction. For example, the EO mentions “tackling novel intellectual property (IP) questions and other problems to protect inventors and creators” and promoting “a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation” – language that tends to favor stronger patent rights for small inventors. Even if the USPTO takes a positive stance toward patent protection, the courts may diverge and rule differently, as they have done thus far in strictly construing patent eligibility.

Regarding copyright, questions surrounding the use of copyrighted works to train AI, and the scope of protection for works produced using AI (e.g., generative AI), will continue to be hashed out in the courts. The EO’s directives on this topic are somewhat roundabout: the USPTO is directed to wait until the US Copyright Office publishes its forthcoming AI study, then consult with the Copyright Office and issue recommendations as to potential additional executive actions relating to copyright and AI.



1 The OMB released implementation guidance on EO 14110 on November 1, 2023: https://www.whitehouse.gov/omb/briefing-room/2023/11/01/omb-releases-implementation-guidance-following-president-bidens-executive-order-on-artificial-intelligence/.
2 The Biden-Harris Administration secured voluntary commitments from leading AI companies to manage the risks posed by AI, per the fact sheet issued on July 21, 2023. The Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights in October 2022, and NIST released the AI Risk Management Framework in January 2023. The Biden-Harris Administration also issued Executive Order 14091 of February 16, 2023 (Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government).
3 https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023; https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206 (the European Parliament adopted its negotiating position on the EU AI Act on June 14, 2023, https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai).  
4 Although an executive order has the effect of law, a sitting President may revoke or modify it, Congress may invalidate it by legislation, and the courts can declare it unconstitutional.