Researchers at the University of Surrey are challenging a key element in the law relating to patents, arguing that an artificial intelligence system (DABUS) ought to be recognized as the inventor of a food container designed for safer handling and transportation, and of a flashing lamp designed to mimic neural activity making it difficult for humans to ignore its warning. The argument immediately runs into a fundamental requirement for patent protection, for example under the UK's Patents Act 1977, that there must be a human inventor. Its proponents argue that the law is outdated and requires a fundamental rethink in view of the rapidly developing capabilities of artificial intelligence (AI). However, guardians of the law, including the European Patent Office, remain unconvinced by the core argument and concerned at the possibility of unforeseen and undesirable consequences flowing from such a radical change. From a broader legal perspective, the outcome of arguments about the nature of an "inventor" might also have far-reaching implications for ownership, entitlement to economic benefit and exposure to liability. Those arguing for recognition of AI as an inventor may find that it pays to be careful what you wish for.
Invention and originality
The University of Surrey's argument relies on the contention that DABUS created the food container and flashing lamp independently of the coders who provided the algorithms that drive the system. However, the extent to which DABUS actually created the innovations is open to question. An essential benefit of AI and machine learning (ML) systems is that they are capable of processing data in far greater volume and at far greater speeds than humans. In the context of a game such as chess or Go, this means that a machine might be able to run through the range of possible moves more rapidly than a human player, and also that the machine may be able to select a move that appears unorthodox or innovative. However, at root, the machine is applying a set of rules and may be drawing on large sets of training data to inform its moves. A move that appears innovative is, in fact, merely a logical outcome of those instructions and of the selection of training data. Through its Art.Ificial project, the Rutgers University Art and Artificial Intelligence Lab sought to create original artworks through an AI system called a “generative adversarial network”. The network is made up of a “critic” algorithm that evaluates the work of a “generator” algorithm, programmed to create imagery. The “generator” algorithm begins by producing imagery at random. The “critic’s” negative feedback slowly nudges the generator to produce images that move closer and closer to the specifications set by its creators. The result, the researchers asserted, is an AI that can do something long considered the sole province of human beings: create an original work of art.
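The generator/critic feedback loop described above can be caricatured in a few lines of code. This is a toy sketch only: the real Art.Ificial system trains neural networks by gradient descent, whereas here a one-number "generator" is nudged toward a target by a "critic" score via simple hill-climbing. The target value and step sizes are illustrative assumptions, not details of the Rutgers system.

```python
import random

# Hypothetical specification that the "critic" embodies.
TARGET = 0.75

def critic(candidate):
    """Score a candidate output: higher means closer to the critic's specification."""
    return -abs(candidate - TARGET)

def train(steps=2000, seed=42):
    """Toy generator/critic loop: random output, nudged by the critic's feedback."""
    rng = random.Random(seed)
    best = rng.random()  # the "generator" starts by producing output at random
    for _ in range(steps):
        candidate = best + rng.uniform(-0.05, 0.05)  # small random variation
        if critic(candidate) > critic(best):         # critic's feedback accepted
            best = candidate                         # generator is "nudged"
    return best

print(f"generator output after training: {train():.2f}")
```

The point the sketch illustrates is the article's: the output converges on "the specifications set by its creators" because the rules and the target were supplied by humans, however novel the result may look.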
Advertising agency (or "Applied computer vision company") GumGum sought to apply a Turing test to the output of the Art.Ificial project, concluding that the artwork amounted to highly impressive mimicry, rather than originality. Viewed in that way, AI in its current stage of development is an increasingly efficient tool rather than an independent creator.
If AI were to develop to a point at which it could legitimately be regarded as an inventor rather than as an inventor's tool, then other legal and societal issues would arise. For example, in April 2019 the European Commission published a set of guidelines for the ethical development of AI, identifying seven key requirements for trustworthy AI:
- Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
- Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can also be minimized and prevented.
- Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimized access to data.
- Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help to achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
- Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
- Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
Arguably, an AI system capable of being recognized in its own right as an inventor would be incapable of meeting these ethical requirements. It would, at least, lack human agency and oversight and would also fail to provide accountability. This failure to meet ethical requirements would cast significant doubt on the merits of any policy decision to afford AI the status of inventor for patent purposes.
Ownership and economic benefit
While the DABUS debate is focused on the relatively narrow question of whether a non-human AI can be an inventor for patent purposes, it inevitably extends to broader issues of ownership, economic benefit and liability. An AI typically involves the combination and interaction of code from a variety of sources, including open source elements. The AI might operate in a way that would not have been contemplated or anticipated by the individual contributors of code used to build and drive the algorithms. Consequently, even if the AI's creation could be considered truly original, it might be extremely difficult for any individual contributor to assert a role as “joint inventor”. Under the Patents Act 1977, s 7, an inventor is “the actual deviser of the invention”, and “joint inventor” is construed accordingly.
On the current state of the law, this point leads to the essential difficulty that the DABUS debate seeks to overcome. If an invention were found to be a truly original creation of AI, with no human involvement, then it would be ineligible for patent protection. The input of any individual provider of code may be too minor or indirect to meet the "actual deviser" test and, if there is no human meeting the statutory test, then there can be no patent. That would potentially deprive investors of an essential economic protection.
Breaking through this legal impasse would require a fundamental decision at policy level. Recognizing AI as a potential inventor would support investment in AI systems on the grounds that the individual or corporate investors would then fall within the category of “persons” entitled to apply for patents in respect of its inventions. That would provide clear benefits to investors in circumstances where the AI had been developed wholly or largely “in-house”. There would be a strong and clear connection between investment, development and the output of the AI. There would also be little or no scope for external challenge from the contributors of code, claiming that their input was sufficient to meet the “actual deviser” test. The position would be far less clear, and potentially more contentious, where an AI relied upon code from numerous external sources – and perhaps even more difficult where a company simply bought in a developed and operational AI system from an external vendor. In those circumstances, identifying the AI as the inventor might lead to disputes over the identity and entitlement of “joint inventors”. It is quite foreseeable that the vendor of an AI system intended to be capable of producing patentable inventions would seek through contractual provisions to reserve entitlement either to inclusion as joint inventor, or to deferred payments calculated by reference to the economic value of any inventions subsequently generated.
As with most legal debates, questions of ownership and entitlement rapidly lead to questions of potential exposure to liability, whether through statutory product liability regimes, tort or contract. The patent debate feeds into a broader global discussion concerning the creation or recognition of a new form of “legal person”. In 2018 Malta enacted a suite of legislation designed to facilitate and regulate "innovative technology arrangements", which might include blockchain, distributed ledgers, smart contracts and AI. Building on that legislation, Malta is now actively considering the conferral of legal personality on such technology arrangements. The essential argument in favor of legal personality is that it would permit the development and deployment of systems that, once set in motion, would essentially run themselves within the parameters of their coding. Such systems might survive the death of individual owners, the dissolution of corporate owners, or the system's sale by its original owner to a successor. Consequently, a key perceived benefit of legal personality would be that liabilities would rest with the system itself, rather than with its current owner.
Another aspect of that argument is that if liability rests with the system itself, then its legal personality would act as a shield protecting those responsible for any element of its underlying code. This would, in turn, remove a potential deterrent to innovation. Individuals providing code would not be walking into potential liability. At the same time, however, that insulation from liability might exert a significant downward pressure on the price payable to the providers of code. Price and risk are inextricably linked, and if code provision were to be de-risked, then the price would presumably drop.
Malta's proposed scheme of legal personality also raises other possible difficulties. If liability were to be fixed solely upon the system, then it would be necessary to ensure that the system has the means to meet potential claims. One solution might be the “vending machine” model proposed in Malta. Under that model, the system would have either to carry insurance or to build up a fund for liabilities from the revenues that it generates. Given that insurance policies designed to cover such liabilities are likely to take time to develop and accurately to model risk profiles, the likelihood is that any such systems would have to rely on a fund for liabilities model. That, in turn, risks stifling or crippling growth. The essence of the "vending machine" model is that if the means to meet liability are not present, then the system would not be permitted to operate or to enter into any transactions – just as a vending machine is required to return coins if no chocolate bars are available. This is fundamentally at odds with normal start-up conditions, where potential liabilities might be covered by commercial lending, or even left as an uncovered risk.
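The "vending machine" rule described above is, at bottom, a simple gating condition: no cover, no transaction. The following minimal sketch makes that logic concrete. The class name, reserve rate and figures are illustrative assumptions for this article, not terms of Malta's draft legislation.

```python
class AutonomousSystem:
    """Toy model of the "vending machine" rule: a system with legal
    personality may transact only if it can cover its potential liability."""

    def __init__(self, liability_fund=0.0, insured=False):
        self.liability_fund = liability_fund
        self.insured = insured

    def may_transact(self, potential_liability):
        # The gating condition: insurance in place, or a sufficient fund.
        return self.insured or self.liability_fund >= potential_liability

    def earn(self, revenue, reserve_rate=0.2):
        # Build the liability fund from a share of the system's revenues.
        self.liability_fund += revenue * reserve_rate


system = AutonomousSystem()
system.may_transact(100.0)   # False: no cover, so no transaction permitted
system.earn(1000.0)          # fund grows to 200.0 at the assumed 20% rate
system.may_transact(100.0)   # True: the fund now covers the liability
```

The sketch also makes the article's start-up objection visible: until the fund (or insurance) exists, every transaction is blocked, which is exactly the opposite of how early-stage ventures normally trade.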
Human in the loop
Whether viewed from the perspective of economic benefit or liability, attempts to treat AI as an independent entity quickly run into areas of legal difficulty. Even if it were to become possible as a matter of patent law, identifying an AI as the sole inventor risks external challenge from multiple providers of code. Meanwhile, recognizing providers of code as joint inventors potentially exposes them to liability, or at least to expensive and time-consuming proceedings to fend off claims, in relation to an invention generated by the AI. The radical change in law proposed through the DABUS debate would, as the European Patent Office fears, risk unintended consequences. It is, in any event, both premature in view of the current state of AI and unnecessary given that AI is likely to require continuing "human in the loop" involvement. Understood as an inventor's tool, rather than anthropomorphized as an inventor, AI does not (yet at least) require any such change.
This article first appeared in the Patent Lawyer.