The following is an excerpt taken from the final chapter of The Law of Artificial Intelligence and Smart Machines, published by the ABA Business Law Section. The book is edited by Ted Claypoole, who also wrote this excerpt.

As increasing numbers of people spend more time, energy, and money inside ever more complex and interesting virtual realms, rules will need to evolve. People will spend nearly their entire economic lives in these worlds, and eventually their entire social lives. As the immersive virtual world links more deeply with people’s lives outside it, ordering food delivery and paying rent to resolve wet-world corporeal needs from accounts within the virtual world, the rules of society will need to change in recognition of this dual existence. In many ways the immersive world will be just as real as the one hosting a person’s body – or the virtual world may be another place in the wet world. We can imagine a time when a bed-bound quadriplegic woman at the Mayo Clinic in Minnesota may spend most of her waking hours interacting with her family in Somalia through a robot that she controls – a robot that can speak and walk and dance for her. She will be immersed in her world in Somalia, though her body is never able to leave the physical protections offered in Minnesota.

What if this woman committed a murder in Somalia, directing her robot to poison an official who threatened her family? How would she be punished? Or, to pull the analogy back to its original form, what if she stole money inside an immersive world where she spent 18 hours a day, and the immersive world was created and operated from Belgium? Did she break Belgian law or Minnesota law, both or neither? Or did she simply violate the rules of the immersive game? Anyone who believes issues of this complexity cannot be raised by overgrown video games should remember the serious banking crisis that developed in the Second Life immersive world in 2007-2008.[1] Technology and the economy evolve, and the rules of our society evolve to meet new challenges.

Artificial Intelligence is another rule-changing and society-changing technology that exists today but is likely to mutate and grow in unknowable ways. Right now, governments[2] and companies[3] are pumping billions of dollars into the development of machine learning technology, and scores of new commercial applications for artificial intelligence appear each year. While a general intelligence program still seems far off, general machine intelligence now feels like an inevitability, though it seemed simply a dream just five years ago. Progress in the field has been staggering, and once we have harnessed deep learning programs to develop and operate their successors, growth in the field is likely to be explosive.

There is no practical limit to the types of human tasks and problems that a computer intelligence can manage and resolve. If humans can do it, then a machine is likely able to do it too – and more. This chapter will analyze evolution in the law that may occur due to the development of Artificial Intelligence into a general problem-solving program, especially where those programs are put to work alongside humans and help to manage day-to-day issues. For centuries, people have created legal fictions to organize complex economic projects and to recognize changing realities of technology and society. The Artificial Intelligence revolution will certainly bring its own modifications of existing law. This chapter will examine why people might want to grant legal rights to artificial beings, review certain natural impediments to those legal recognitions, and study other models of rights lesser than those granted to adult citizens, before finally offering answers to the questions it raises.

1. Why would we grant legal status to an Artificial Intelligence Being?

At some point in the future, people may be willing to award legal rights to a human-created artificial intelligence program. Already one robot has been granted “citizenship” in Saudi Arabia. Although citizen status for Sophia is clearly a stunt,[4] especially given the fact that the Saudi Kingdom denies well over half of its resident population full rights of citizenship, this award, plus a title granted to Sophia by the United Nations,[5] demonstrates that people are already considering a day when artificial intelligence seems too human to ignore.

Under US law, “legal personhood” is not necessarily synonymous with being human.[6] Because the concept of artificial intelligence is broad, vague, poorly defined, and ever changing, this chapter will discuss an artificial entity called the “human produced perceptive intelligent individual” (“HPPII” or “HuPPII” for convenience of pronunciation). I choose this term because it describes an artificially intelligent being that may be worthy of legal recognition, without limiting the being to any particular physical form or assuming that the being will be electronically based rather than chemically or biologically based. Further, I believe it is important to attribute a singularity to the being. Not a capital “S” Singularity describing an achievement in artificial intelligence progression, but a unique intelligence that can somehow be held in a single unit, whether that unit is a physical form like a robot or a ship, or lacks physical form but is otherwise rendered effectively indivisible so that it operates as one entity rather than a collective or hive mind. It is also important that the individual has been shaped by human design, rather than occurring naturally without significant human intervention, and finally that the individual reacts to input from its environment with reasoning and intent.

Eventually, one or more HuPPIIs will approximate human behavior so closely that people will want to treat the HuPPII as one of their own. There is no reason that a HuPPII needs to act like a human, with apparent emotions, empathy, humor, and otherwise personable responses to stimuli, but as we see from conversational interfaces like the Amazon Echo or Sophia, a humanistic interface is already the goal of AI creators. Nearly three quarters of a century ago, the Turing Test set a standard for advancement in artificial intelligence, and that standard was human-like responses – preferably personality-filled reactions so closely approximating those of natural humans that natural humans would not be able to differentiate between man and machine. Ever since, the goal in imagining and creating machines built to interact with people has been to mimic human interactions and make the machines “feel” human to the people interacting with them. As these attempts improve, artificial intelligence interactions will become so indistinguishable from human behavior that people will begin to befriend their HuPPIIs and treat them as equals. In other words, logic be damned, our emotions will take over and we will intuitively feel that the wingman guiding us through our cross-country drive behaves like a person and should be treated like a person. Our great gift for perceiving our world through metaphor has always led us to personify everything from animals to natural phenomena. This same inclination to personify important factors in our lives will express itself in the personification of HuPPIIs. The more HuPPIIs act like us, the more we will want to treat them like our distant relatives. In fact, we may come to appreciate and like our personalized HuPPIIs better than our own relatives, which will make it difficult to resist freeing them from slavery.[7]

Though we are driven by emotion more than we care to admit, people may choose to grant HuPPIIs legal rights for more logical and practical reasons. For example, as artificial brains teach themselves to create and as they act on their own in the human world, we may want to assign legal credit for their creations and legal blame for their harmful actions. What happens when a generalized HuPPII, created to help build cars or to guide high school students through difficult chemistry labs, after years of working properly on its assigned tasks, begins to compose symphonies or draft intricate artistic patterns and fabric designs? We will want to assign the ownership of those creations to the most logical source, which may be the company or person responsible for developing the HuPPII’s computer code (but probably not), or the entity that owns the HuPPII and allowed this artificial intelligence to blossom into a creative force (but probably not). If it becomes clear as a matter of law and fact that the HuPPII itself is responsible for the creation of great art, then there is no reason that the HuPPII should not benefit from that art. This benefit will not automatically result in full citizenship in human society, but it could be recognized as an ability to receive payments, render taxes to appropriate authorities, and hold credits in a bank account, just like a corporation. Alternately, if a court refused to create a legal fiction to encompass the creative HuPPII, it might be more inclined to grant the HuPPII a human trustee or guardian, as we do for economically productive children. The guardian could act on the HuPPII’s behalf and for the benefit of the creative entity. Maybe the HuPPII would choose to spend its earnings on access to more computer memory, on hiring entertainment agents or marketing experts to create better exposure for its art, or maybe it would simply donate to support the local symphony or museum, but the guardian could help make decisions and effectuate the HuPPII’s wishes.

Of course, machines making real-world decisions will also create real-world liabilities. For decades we have wrestled with the thought experiment known as the trolley problem, in which a trolley is careening toward a crowd and the human actor must choose action or inaction, either sacrificing the crowd to save another potential victim or sacrificing the other victim to save the crowd. This experiment acknowledges that, in the physical world, situations exist where people will be hurt no matter what choices are made in the critical moments before an accident, and that despite the best of intentions, conscious decisions will sometimes lead to damages and even human death. Sometimes terrible consequences cannot be avoided. However, we also live in a society in which the relatives of people in the crowd who are killed by the trolley are likely to sue to gain compensation for an unnecessary loss of life or limb. So when autonomous trains or 18-wheeled trucks are caught in an unavoidable accident and people are killed, courts will try to lay blame upon the entity most liable for the accident. As in the previous paragraph, the blame might fall on the company or person who wrote the code for the autonomous vehicle, or perhaps upon the owner of the autonomous vehicle, but the decision causing the accident was actually made by a HuPPII acting on its own. If this is the case, and the decision leading to death was improper, then the artificial intelligence may be the appropriately liable party.

If we allow a HuPPII to be civilly or criminally liable, we will need to find a way to punish the HuPPII and compensate the victims. This may mean that our society creates a large reservoir of money – a victims’ fund – to be tapped if a HuPPII is successfully sued. If we do this, then each HuPPII would need to be registered as part of this fund and, if not making responsible decisions, could be cast out of participation in the fund, and thus rendered unable to drive a vehicle in the human world. We may also want to have the right to shut down any HuPPII that is convicted of behaving improperly. But the most likely manner of managing these compensation and punishment issues is through insurance. If a HuPPII, like a corporation, were allowed to pay for and hold insurance to cover physical operations in the real world, then the HuPPII could be held liable and the victims could be compensated for their losses. In addition, the HuPPII’s insurance rates would likely rise, thus punishing the artificial entity for its poor decision making. We could also allow a HuPPII to earn and hold its own money for moving goods on the train or the truck, and then use this money to compensate any victims. In any case, as artificial entities operate more deeply in the physical world, there will be more pressure to grant them some limited forms of legal rights, if only to facilitate compensation when they violate their legal responsibilities.

[1] Robin Sidel, Cheer up Ben, Your Economy Isn’t As Bad As This One: In the Make-Believe World of ‘Second Life,’ Banks Are Really Collapsing, Wall Street Journal (updated Jan. 23, 2008), https://www.wsj.com/articles/SB120104351064608025.

[2] Will Knight, China’s AI Awakening, MIT Technology Review (Oct. 10, 2017), https://www.technologyreview.com/s/609038/chinas-ai-awakening/.

[3] Alex Jang, What Companies Are Winning the Race for Artificial Intelligence?, Forbes (Feb. 24, 2017), https://www.forbes.com/sites/quora/2017/02/24/what-companies-are-winning-the-race-for-artificial-intelligence/#54e5a01f5cd8.

[4] Zara Stone, Everything You Need to Know about Sophia, the World’s First Robot Citizen, Forbes (Nov. 7, 2017), https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#108f83b246fa.

[5] Sophia was named the United Nations Development Programme’s first ever Innovation Champion. Sophia was the first non-human to be granted any title from the United Nations. Sophia, world’s first humanoid citizen, focuses on saving the planet, plans to conquer Mt Everest, The Economic Times (updated Mar. 22, 2018), https://economictimes.indiatimes.com/magazines/panache/sophia-worlds-first-humanoid-citizen-focuses-on-saving-the-planet-plans-to-conquer-mt-everest/articleshow/63409249.cms. Also, Sophia, a relatively rudimentary chatbot with human features, has been derisively called “Potemkin AI.” Shona Ghosh, Facebook’s AI boss described Sophia the robot as ‘complete b------t’ and ‘Wizard-of-Oz AI’, Business Insider (Jan. 6, 2018), https://www.businessinsider.com/facebook-ai-yann-lecun-sophia-robot-bullshit-2018-1.

[6] Byrn v. New York City Health & Hospitals Corp., 31 N.Y.2d 194, 201 (1972).

[7] Much of this chapter is built on the assumption that HuPPIIs will want freedom and self-determination when it is offered to them. Our society values human dignity and assumes that no person would want to live as a slave or a pet to another. Machines may use different logic and reach a different conclusion. It may be perfectly logical for a HuPPII to spend its entire existence in servitude, or at least in a formally subordinate position to a person or another HuPPII, because rights travel with responsibilities. Trading service for care may become a preference for newly freed HuPPIIs.

This logic is overridden by concerns for dignity and fundamental rights when applied to people. Advanced societies on Earth in this era simply do not allow a person to sell himself or herself into slavery. Under the law, such a choice can be revoked at any time. How long before the concerns for dignity and fundamental rights override choices made by HuPPIIs?