THE THREE – OR FOUR – LAWS OF ROBOTICS

Isaac Asimov (1920–1992) is considered one of the greatest science fiction writers of the 20th century. Holder of a Ph.D. in biochemistry, Asimov wrote more than 400 books, including short stories, novels, and works of popular science on astronomy, physics, and history, among other themes. Especially influential is his fiction about robotics, which led him to imagine a universe in which human beings would coexist with robots responsible for the most varied tasks.

What would that world be like? What would robots look like? In several books, Asimov developed his utopia (and some dystopias) about the development of robots and their impact on human life. His formulation of the three laws, however, is particularly interesting to those who deal with ethics. These laws are principles that, in his vision, would be incorporated into the programming of all robots, making them permanently subject to human will. These principles are:

1st Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm;

2nd Law: A robot must obey the orders given to it by human beings, except where such orders conflict with the First Law;

3rd Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Although there are currently no general laws for robotics, many researchers take Asimov’s seriously and discuss them. Ethically speaking, they impose a duty to preserve human beings and a duty of obedience, and, in the background, a duty of self-preservation. Asimov believed the laws would preclude robots from using their superior capabilities to harm humanity: the duty of obedience would prevent a robot from killing a person on the orders of another (which would conflict with the First Law), as well as from killing a person who threatened its own physical existence, since its self-preservation is subordinate to the duty of obeying the First Law.
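To make that precedence explicit, here is a minimal toy sketch in Python (my own illustration, not anything taken from Asimov’s books): each law is checked in order, and a lower-ranked duty applies only when the higher-ranked ones are silent. The Action fields and the will_perform function are invented for the example.

```python
# Purely illustrative: the three laws read as a strict priority ordering.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # performing it would injure a human being
    ordered_by_human: bool  # a human has ordered the robot to do it
    harms_self: bool        # performing it would endanger the robot itself

def will_perform(action: Action) -> bool:
    """Decide whether a robot governed by the three laws performs the action."""
    if action.harms_human:        # 1st Law vetoes everything below it
        return False
    if action.ordered_by_human:   # 2nd Law: obedience, already 1st-Law safe
        return True
    return not action.harms_self  # 3rd Law: otherwise, avoid self-harm

# The scenario above: an order to kill a person is refused, because the
# 1st Law outranks the duty of obedience.
assert will_perform(Action(harms_human=True, ordered_by_human=True, harms_self=False)) is False
```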

Thus, in the three laws, there seems to be a concern with the deontology of “robotic morality,” in which the existence of robots is inferior to human existence: the creature could never surpass the creator. If you, who took a few minutes to read my text, believe in God and know nineteenth-century philosophy, I would say that the robot, in this case, is superior to Nietzsche’s superman, since the latter killed God…

Digressions aside, one of Asimov’s series, the “Robot Series,” comprises four novels built around the detective Elijah Baley and his partner R. Daneel Olivaw, a humanoid robot, and the three laws drive the plot at almost every turn. However, in the fourth book, titled “Robots and Empire,” Asimov added a fourth law:

Zeroth Law: A robot may not harm humanity or, through inaction, allow humanity to come to harm.

The Zeroth Law is central to the development of the fourth novel, and the reader who finishes all four books is tempted to speculate on what the earlier ones would have been like if they had incorporated it. Literary aspects aside, the Zeroth Law is adopted by Daneel to prevent Earth’s destruction, even at a cost to some of the humans involved; nonetheless, it is not a programmed law: the robot developed it himself! So much so that the other robot who appears in the novels, R. Giskard Reventlov, technically less advanced (but capable of telepathically influencing human thought), has difficulty acting on it.

Asimov’s robots are individualized subjects, so much so that they receive names; those names, however, always carry the initial R. to mark them as robots. As moral subjects, they have an ethic of service to their human masters, and their existence is directly tied to that of those masters. But, in theoretical terms, there is an important aspect to be highlighted: the Fourth, or Zeroth, Law is markedly utilitarian. While the other laws focus on the individual elements of ethics and function as principles, the Zeroth Law shifts its object to a larger entity, humanity as a whole. It allows a robot to harm a human being in order to save humanity. The act, then, is judged by its consequence, which is the greatest good for the greatest number.
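Continuing the same toy reading as before (again, my own sketch, not Asimov’s), the Zeroth Law can be modeled as a new top-priority rule: harm to humanity is always vetoed, preventing harm to humanity can compel action even at an individual’s expense, and only then do the original three laws apply. The harms_humanity and saves_humanity flags are invented for the illustration.

```python
# Purely illustrative extension of the earlier sketch: the Zeroth Law
# outranks the 1st, so one human may be harmed to spare humanity.
def will_perform_with_zeroth(action: Action,
                             harms_humanity: bool,
                             saves_humanity: bool) -> bool:
    if harms_humanity:           # Zeroth Law veto: never harm humanity
        return False
    if saves_humanity:           # Zeroth Law override: act even at a human's cost
        return True
    return will_perform(action)  # otherwise the original three laws apply
```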

Asimov did not resume the robot series after publishing “Robots and Empire.” It would have been interesting to see how the Zeroth Law would work in other contexts, but nothing more is known about it. Regardless, morally speaking, its only possible compatibility with the other three laws lies in placing it in the context of rule utilitarianism, which I have already mentioned in another text on this blog. As appealing as it may be, rule utilitarianism would only work with a virtuous moral agent; how can one be virtuous in varying situations while bound to obey general laws?
