Lawyers have not been thinking about robots as long as cartoonists, science fiction writers (Isaac Asimov's Robot series being perhaps the best known) or engineers (Geoff Simons, Are Computers Alive? Evolution and New Forms of Life, 1983 – but see P. Swirski, "A Case of Wishful Thinking"). A political scientist anticipated some legal issues in the early 1980s (S.N. Lehman-Wilzig, "Frankenstein Unbound: Towards a Legal Definition of Artificial Intelligence"), notably the potential criminal liability that future technology might create.
However, we have been catching up in the past ten or fifteen years. This column is a partial survey of some of the interesting questions that have been asked about robots and the law in that period.
What is a robot? To qualify as a robot, a machine or device needs some artificial intelligence and some degree of autonomy from the direct control of the builder/owner/operator of the machine.
Let us look first at the robot, or the intelligent agent, as a legal intermediary. What happens when one contracts through a semi-autonomous machine? That is essentially what happens when one buys consumer products, and many business products, online. No human being considers the order before it is filled and a charge is sent through to the credit card company.
So who consents to those contracts? Whose intention is involved? The Uniform Law Conference of Canada – among others – considered those issues in developing the Uniform Electronic Commerce Act in the late 1990s. Ian Kerr wrote a study paper on the topic, "Providing for Autonomous Electronic Devices in the Uniform Electronic Commerce Act," in which he considered the law of agency, slavery and other analogies. The UECA ultimately provided in section 21 that a contract may be formed through the interaction of an "electronic agent" and a human being, or of two or more electronic agents. The question of intent is papered over with this legal rule.
There was some speculation in the 1990s that the day would come when robots would make their own contracts, or at least when their owners would not necessarily be responsible for them. If computers modify their own instructions, then is it fair to the owners to hold them responsible, and is it fair to the computers not to recognize their own actions? It is fair to note that we are not there yet, at least in law.
Outside the field of contracts, artificial intelligence drives stock trading, for better or worse, and assists legal drafting as well. So far liability still seems to rest with the suppliers or programmers, who cannot hide behind the intelligence of their creations. Computers have not yet been held to be on a frolic of their own. On the other hand, the availability of credible AI as the basis of pornography filters persuaded an American court that statutory sanctions against such pornography were unnecessarily restrictive of freedom of expression: ACLU v. Mukasey, 534 F.3d 181 (3d Cir. 2008).
One can see that legal questions arise with increasing autonomy. Is there a point at which a robot is not just a machine that its owner or operator is liable for? Is the question always 'who turned it on?' or 'in whose interest did it run?' What are the robot rules? Is it foreseeable that a robot may act in unforeseeable ways? Or will we remain in a Rylands v Fletcher world, where a robot is analogous to something that may 'escape' from the owner's property (or control) and engage the liability of the owner as it causes damage elsewhere? At present, most robots are used in the military or in manufacturing, areas where liability for the injuries robots cause does not fall directly on their makers or operators. (Workers' compensation rather than civil liability would cover injuries to workers.) That is bound to change.
At least one writer has suggested that to encourage innovation in the development of AI, manufacturers or programmers should have some protection from liability, along the lines of what online intermediaries have in the United States under section 230 of the Communications Decency Act, 1996. (Ryan Calo, "Robotics and the Law: Liability for Personal Robots"). The parallel seems forced to me: intermediaries do not create their own risks, they are only passing on text, and they are essential to the Internet as it now operates. If they publish their own work, they lose the immunity. Makers or operators of robots are much more involved in their creations and more fairly held responsible for defects.
If a robot is hacked, should the liability for damage it causes be shared between the hacker and the programmer who did not make the robot secure enough against such an attack? Is that any different from the allocation of liability for any other malware? Nothing about the machine being a robot changes the responsibility for security – except possibly the foreseeability of an attack route if the robot has altered its own mechanics to make itself more vulnerable.
Encouragement of innovation in robotics may come through insurance, though a potential insurer has to be able to estimate risk. The state of the law makes that difficult today.
Perhaps some part of the response can lie in standards. One can keep AI crawlers off one's web site, and out of one's private information, with a 'robots.txt' file. No law makes compliance with that standard mandatory, however. Other law will have to deal with the consequences of deliberately programming around such barriers. (The impact of technical standards on the law of electronic communications has of course been explored in depth by Lessig and others.)
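To illustrate how voluntary that standard is: a well-behaved crawler simply reads robots.txt and checks each URL against it before fetching; nothing prevents an ill-behaved one from skipping the check. A minimal sketch using Python's standard-library parser (the site paths and the "ExampleBot" user-agent are hypothetical):

```python
# Sketch of a polite crawler's robots.txt check, using Python's
# built-in parser. Compliance is entirely by convention: the code
# that follows can_fetch() is free to ignore the answer.
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, as it might be served by a site owner
# who wants crawlers to stay out of the /private/ area.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler asks before fetching each URL.
blocked = parser.can_fetch("ExampleBot", "http://example.com/private/data.html")
allowed = parser.can_fetch("ExampleBot", "http://example.com/public/page.html")
print(blocked)  # False: the standard asks the crawler not to fetch this
print(allowed)  # True: this page is fair game
```

The check is purely advisory, which is exactly the column's point: the barrier exists as a technical convention, and only other law can attach consequences to programming around it.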
In any event, who should buy the insurance, and who should comply with the standards? Should the law put more of the risk on the builder or operator of the robots, or on those who interact with them? Do those who interact with them have much of a choice? Who is best able to avoid or reduce the risk?
Without pretending that these questions have found solutions, we may want to look at some more speculative questions as well. What would we need to decide that we face a new kind of legal entity? An "AiCE" – "autonomous intelligent cyber entity"? A cyborg (a mix of human and computer)? Is it a matter of "legal convenience" when one decides to attribute legal personhood to a machine, as has been suggested? Are people becoming entranced with their own novel jargon for what is still just a machine, after all? Or do we need to contemplate rights for new entities in the information society? Even the popular press is asking, as the Globe and Mail did last month with "One Robot, One Vote?".
Or will the new entities take over, or rather will humans take them over to become more than human, as Ray Kurzweil and others propose, in the "Singularity" movement discussed in the New York Times this month? Will lawyers hasten to join the movement, or just see a whole new range of entities to sue or prosecute?
We are likely to need some real intelligence, and perhaps some artificial, to guide us through these issues in the foreseeable future.