

Volume 13, Issue 2, August 2016

Book review: Robot Law

Ryan Calo, A. Michael Froomkin and Ian Kerr (eds)
Cheltenham: Edward Elgar, 2016. 424 pp. ISBN 978-1-78347-672-5. £105.

Reviewed by Nuno Sousa e Silva*


© 2016 Nuno Sousa e Silva

Cite as: Nuno Sousa e Silva, "Book review: Robot Law", (2016) 13:2 SCRIPTed 210 https://script-ed.org/?p=3142
DOI: 10.2966/scrip.130216.210


* Lawyer. Master of Laws, LLM IP (MIPLC), Assistant Lecturer, Oporto School of the Faculty of Law, Universidade Católica Portuguesa, Portugal

The robots are coming, and we had better be thinking about it.

Robot Law is a remarkable collection of texts on the futuristic topic of robot law, some of which have already been published in other formats. While not the first of its kind,[1] this book is undoubtedly a most important contribution to a field of study still in its infancy. Although this is a law book, the analysis is heavily influenced by sociology, ethics, and psychology.

The book begins with an introduction by one of the editors, Michael Froomkin, and is then divided into five Parts. Part I comprises a single chapter — “How should the law think about robots?” — which is of an introductory nature. The authors start by discussing the notion of a robot and describing robots’ known and foreseeable capabilities. They suggest that “robolaw” should learn from cyberlaw and focus on using the right metaphors, avoiding what they call the “android fallacy”, i.e. the tendency to assimilate robots to humans, legislating on the basis of form instead of function. This is explained with a simple example: a driverless car could be directed either by an inbuilt “invisible” system or by an android chauffeur. If one falls prey to the android fallacy, the manufacturer’s liability might be assessed differently, more strictly in the inbuilt-system scenario than in the chauffeur one, even though the function is the same.

This idea has a curious relationship with the case for extending legal protection to certain kinds of robots, analysed in Chapter 9 by Kate Darling. The author believes that some of the rationales for protecting animals might also apply to social robots, i.e. robots designed to interact with humans at an emotional level. We have a general tendency to project emotions onto objects, but social robots easily create an illusion of reciprocity. The chapter presents interesting examples to illustrate the issues arising from emotional attachment to robots, ranging from the impact on human interaction (mainly the fear that robots will replace human connection in tasks such as caring for children or the elderly) to social manipulation. At the same time, reasons to prevent and/or punish the abuse of social robots can be found in the protection of societal values (discouraging violence of any kind), in the impact such abuse can have on other people (mainly children), or in the fact that it may indicate a deeper dysfunction or inhumanity, such as a violent personality. Although robots are not sentient beings and cannot be said to have inherent dignity, there will probably be social sentiment towards them. Whether the reasons presented or the sentiments are strong enough to lead to “robot-protection laws” is hard to say at this point.

We tend to be fascinated by evil, and robots are often portrayed in popular culture as somehow menacing. However, this is far from reality: robots are tools, conceived to react to stimuli in order to perform useful functions better than humans can. Nonetheless, damage can and will occur.

In one of Isaac Asimov’s many remarkable short stories, “Galley Slave”, a robot responsible for correcting proofs has supposedly changed the content of a renowned university professor’s book, and the professor now seeks damages. The liability of robots themselves could accompany the tendency to turn them into legal subjects. Nonetheless, it does not seem likely that we will have robot trials in the same way that animal trials occurred up to the 18th century.[2] Of course, if we go so far as to establish some kind of legal personality for robots, we might also evolve towards recognising their liability. However, the liability issues discussed in Robot Law revolve around human responsibility only. This is a complex topic because robots are evolving towards stronger forms of autonomy, with learning capabilities and a certain degree of self-determination. Part II of the book deals with these issues.

In Chapter 2, F. Patrick Hubbard argues that the current US liability system provides a fair and efficient solution that does not hinder innovation. His analysis, grounded in law and economics, discusses the adaptations that will occur due to, among other things, the unpredictable behaviour of robots. As robots will have learning abilities and will interact with other robots, systems, and human beings, the main problems will lie in gathering adequate evidence. Nevertheless, such systems can and perhaps should be designed to record the relevant data, mitigating this problem (while creating others, such as privacy intrusions). Another shift to expect is an increase in manufacturers’ liability (under product liability laws), resulting from the diminishing role of human beings in the performance of dangerous activities such as driving. In spite of these expected changes, the author argues against proposals for fundamental change, defending the balance the current legal framework achieves.

In Chapter 3, Curtis Karnow argues that foreseeability is at the heart of tort liability (both negligence and strict liability), which therefore becomes very difficult to apply to autonomous robots, defined as robots that generate their own heuristics (i.e. that are capable of self-learning). These are very complex machines interacting with a universe that is even more complex. To ease the application of tort law, Karnow suggests that robots should be programmed with so-called “common sense” (i.e. the general knowledge, intuitions, and beliefs held by the majority of human beings in a given culture at a given time) and with the ability to acquire, check, and correct the beliefs they hold. At the same time, he stresses that the more humans interact with robots, the easier it will be for us to predict robot behaviour and act accordingly.

Chapter 4 aims to establish the core concepts of systems, language, use, and users. These are central notions for the regulation of robots, and a clear definition of each should allow for better communication among the parties involved with the topic, namely lawyers and engineers. Closing Part II of the book, Chapter 5 discusses the moral (and also legal) problems that arise when robots become significantly “better” at certain tasks than humans (so-called “robot experts”). The question is whether, and to what extent, humans can and should relinquish control to such robots. The choice is difficult, given that robots are what Jason Millar and Ian Kerr call “unpredictable by design”. Yet one may ask whether there is a moral or even legal duty to rely on robots (e.g. seeking and following medical advice from a robot) if doing so proves more accurate. The first approach to this evolution will likely be what the authors call “co-robotics”, the simultaneous use of human and robot experts. This is likely to pose particular problems in scenarios of disagreement, which the authors analyse. Should damage arise from following robotic advice, the authors believe it will be almost impossible to untangle the processes of causation, making a finding of liability unlikely.

The book then moves into Part III, addressing the social and ethical meaning of robots. As robots come into contact with humans more often, there is a need to develop not only ethical but also polite robots. How to satisfy these needs is explored in Chapter 6, which suggests that bottom-up “Open Roboethics” initiatives such as the one described in the chapter (developing a decision model for a specific social-interaction scenario) might be a valuable approach. Chapter 7 deals with the merits and challenges of open source robotics. At the heart of this discussion lie not only commercial interests but also risk-management concerns. Diana Marina Cooper believes that adopting an open model is necessary to ensure the success of robotics as an industry. To encourage this, she follows Ryan Calo’s suggestion to establish certain immunities for open robot manufacturers, similar to those established for ISPs, combined with a mandatory insurance scheme for robot owners. After discussing some of the ethical concerns of open robotics, she presents a draft “Ethical Robot Licence” that addresses the main issues identified.

Sinziana M. Gutiu addresses the impact of “sexbots” (robots designed for sexual interaction) in Chapter 8, adopting a feminist standpoint. The author deals only with female sexbots (called “gynoids”, i.e. androids with a female appearance) and their interaction with heterosexual males. The chapter highlights how gynoid sexbots are harmful to women: they reinforce stereotypes, further inequality by design, and diminish the social and legal importance of consent.

It is interesting to note that robotics will not only shape law but is already being used to enforce law and resolve disputes, and it is expected to do much more in this area. This is the main topic of Part IV. Chapter 10 analyses the possibilities and limits of automated law enforcement. Do we want to live in a society where there is no freedom to break the law (whether due to perfect ex ante or ex post controls)? Can such a society even be called free? The authors analyse this topic very convincingly and compare what is (currently and in the foreseeable future) technically possible with what is now implemented in terms of law enforcement measures, ranging from an innocuous notice to the use of lethal force. They conclude that, although there might be significant advantages in its use provided adequate safeguards are implemented, we are not yet socially, politically, or legally prepared for the automated enforcement of law.

Chapter 11 describes an experiment in coding New York State traffic law and applying it, with satisfactory results but at a cost: relevant unexpected factors might be ignored. As highlighted, justice should not be blind in this sense (robots applying the law might often ignore relevant circumstances), and the human factor is not easily dispensed with. The use of robots in interrogation, the subject of Chapter 12, is another example of the legal and moral challenges posed by using robots in law enforcement. The constitutional challenges to the rights to privacy and to silence addressed by Kristen Thomasen cannot be easily answered; she therefore urges caution and reflection before such robots are deployed.

The last two chapters comprise Part V, which explores the topic of robots and war. If there is a field of human activity in which robots can replace humans with considerable advantages, it is warfare. These advantages make “killer robots” seem inevitable, and some argue their use might lead to greater respect for the rules of war. At the limit, wars could become much like a game (BattleBots) of epic proportions. Of course, reducing the human costs of armed conflict could also increase its incidence (by lowering the barriers to entry). The concern is that killer robots will not only kill other robots but might kill humans as well and, worse, be given the autonomy to decide who lives and who dies. Ian Kerr and Katie Szilagyi cover these questions from the viewpoint of humanitarian law and philosophy in Chapter 13. They believe there is a pressing need for an international agreement addressing some of the problems identified, but they do not take a strong position on how these should be tackled. The last chapter, by Peter Asaro, starts from this conclusion and argues for the adoption of laws that pre-empt autonomous weapons on the basis of the Martens Clause[3] and customary law. He believes humanitarian law opposes all use of robots in warfare unless they are under meaningful human control.

Reading Robot Law and thinking about the multiple interactions between robots and the law sometimes takes one far beyond the horizon of present-day reality. The issues lend themselves to philosophical reflection and careful meditation. This book is a stimulating read and would be valuable even if these questions lay many years in the future. But as the future is coming quickly (exponentially so, if one believes Ray Kurzweil[4]), we may well find ourselves with very practical issues to debate and decide rather soon. Turning to these pages will certainly be a good starting point for deciding on the appropriate regulation of robots.

[1] See e.g. U Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Dordrecht: Springer, 2013); A Bensoussan and J Bensoussan, Droit des robots (Bruxelles: Larcier, 2015).

[2] From the 13th to the 18th century, it was not uncommon in Europe to put animals on trial and convict them. On this topic, see E Cohen, “Law, Folklore and Animal Lore” (1986) 110 Past and Present 6-37. See also E Evans, The Criminal Prosecution and Capital Punishment of Animals (London: W. Heinemann, 1906).

[3] The Martens Clause first appeared in the Preamble to the Hague Convention II on The Laws and Customs of War on Land in 1899, and was proposed by the Russian diplomat Friedrich Fromhold von Martens. Its original formulation was:

Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.

This has been interpreted as a general reference to principles of morality or natural law. However, the exact meaning of the clause has been a source of contention among scholars.

[4] See e.g. R Kurzweil, “The Law of Accelerating Returns” (2001) available at http://www.kurzweilai.net/the-law-of-accelerating-returns (accessed 12 Jul 16).
