Recognizing the Primacy of Artificial Intelligence in America: Biden’s Executive Order Sets a High Bar for Regulation and Innovation
Nov 10 2023
On October 30, 2023, President Biden issued Executive Order 14110: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.1 It seeks to confirm America’s global leadership in AI and centralize the federal government’s efforts to simultaneously regulate and leverage AI.
The Executive Order (EO) is ambitious in its aspiration and dense in its details. It goes far beyond prior voluntary commitments made by leading technology companies and the Administration’s own previous executive orders2. EO 14110 serves as a domestic counterpart to other recent global policy initiatives such as the UK AI Summit and the EU AI Act3 – signaling that the White House intends for the United States to establish itself as the world leader in AI safety, security and innovation.
Here are the key takeaways:
The EO is over 100 pages in length. It directs agencies to take action now to ensure that its directives will be actualized rather than ignored. Specifically,
But there are three looming, unmentioned caveats:
The majority of the EO addresses mitigating national security or critical infrastructure risks presented by actual and potential AI foundation models.
The EO requires private actors who are either developing or intending to develop potential AI foundation models (and also “computing clusters”) to make significant, ongoing disclosures to the federal government.
The required disclosures include detailing
Less clear is how and whether such information – which by definition is highly sensitive and subject to exfiltration – can truly remain protected while in government custody.
The EO also mandates that agencies take steps to ensure that responsible AI development protects against bias and discrimination in hiring, healthcare, and housing, implements consumer finance protections, and guards against the unlawful collection and use of personal data. It offers only a glancing nod to AI's potential disruption of the nation's workforce.
How these protections will be implemented, and when, is less clear. As one example, in contrast to the foundation model national security provisions, less than half of the EO directives relating to equity and discrimination have deadlines.
The EO makes clear that federal agencies should leverage the efficiencies of generative AI to serve the public interest. Agencies are to be catalysts, not bystanders, in their adoption of appropriate AI tools.
But agencies also have a corresponding obligation to use AI responsibly. For that reason, one set of directives is aimed at ensuring that federal agencies themselves implement adequate measures to protect the private data they have collected and, in general, take into account the same privacy and security considerations required of the private sector.
The EO identifies a range of privacy, security and ethical considerations directed to organizations using automated decision-making tools for consumer purposes, and mandates that agencies issue guidance or regulations addressing
Among a number of aspirational initiatives specified under the heading ‘Promoting innovation’, the EO calls upon the USPTO to publish guidance regarding: a) the use of AI in the inventive process (e.g., inventorship), and b) updated patent eligibility issues to address innovation in AI.
As to inventorship, both the courts and the USPTO have already signaled that AI cannot be an inventor (see Thaler v. Vidal, No. 2021-2347 (Fed. Cir. 2022); https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf). The EO appears to require that the USPTO provide additional guidance as to whether AI or a natural person should take credit for an AI-related invention.
Regarding patent eligibility, there is currently no bar to patenting AI-based inventions. There is, however, a more general requirement that patents be directed to patent-eligible subject matter. In other words, you cannot patent an abstract idea or mathematical formula (i.e., an algorithm), because these have been deemed patent ineligible. Most inventors do not attempt to patent AI models per se for that reason. This has been a topic of hot debate since at least 2016. Now, however, the EO directs the USPTO to clarify patent eligibility in view of AI-based solutions. As with previous USPTO guidance in this arena, the new guidance may include examples of where AI-based inventions are patent eligible and where they are not.
Language in the EO suggests the Administration may be leaning in a more patent-friendly direction. For example, the EO mentions "tackling novel intellectual property (IP) questions and other problems to protect inventors and creators" and promoting "a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation," language that tends to favor stronger patent rights for small inventors. Even if the USPTO takes a positive stance toward patent protection, however, the courts may diverge and rule differently – as they have done to date by strictly construing patent eligibility.
Regarding copyright, questions about the use of copyrighted works to train AI, and about the scope of protection for works produced using AI (e.g., generative AI), will continue to be hashed out in the courts. The EO's directives are somewhat roundabout on this topic: the EO directs the USPTO to wait until the US Copyright Office weighs in with its forthcoming AI study. After that study is published, the USPTO will consult with the Copyright Office and issue recommendations as to potential additional executive actions relating to copyright and AI.
1 The OMB released implementation guidance on EO 14110 on November 1, 2023: https://www.whitehouse.gov/omb/briefing-room/2023/11/01/omb-releases-implementation-guidance-following-president-bidens-executive-order-on-artificial-intelligence/.
2 The Biden-Harris Administration secured voluntary commitments from leading AI companies to manage risks posed by AI, according to the fact sheet issued on July 21, 2023. The Office of Science and Technology Policy also published the Blueprint for an AI Bill of Rights in October 2022, and NIST released the AI Risk Management Framework in January 2023. The Biden-Harris Administration also issued Executive Order 14091 of February 16, 2023 (Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government).
3 https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023; https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206 (the European Parliament adopted its negotiating position on the EU AI Act on June 14, 2023, https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai).
4 Although an executive order has the effect of law, a sitting President may revoke or modify it, Congress may invalidate it by legislation, and the courts can declare it unconstitutional.