Roboethics and Artificial Intelligence: The art of coming out of nowhere to go nowhere

The ethics of robots, or roboethics, is a strong candidate to become one of the typical fads of the early 21st century. It is born of a problem that does not exist and is set to flow into other problems that will never exist or, at worst, are being examined through a misguided lens. We will explain why.

First, it is necessary to dismantle a fallacy that insists on accompanying the debate about the ethics of robots, one that goes by the name of “artificial intelligence”. This combination of words seems to suit Hollywood well, and it would be good if it remained restricted to the realm of fiction. But, to the despair of serious scientists, there are plenty of people in academia and in the market insisting that machines, and robots in particular, can already demonstrate rudiments of reasoning with today’s technology and will, in the future, master thought the way human beings do.

If that were true, we would be heading toward the end of our monopoly on intelligence. And, since it is unlikely that the limited brains of men and women could win a competition against “thinking machines”, there would be a risk of extinction for the human species.

All of this would be very troubling if it were minimally feasible. But it is not. Anyone with the slightest knowledge of programming logic discovers, in practice, that a machine, whether a computer, a smartphone, or a robot, is limited to responding to previously determined commands. If any action carried out by these devices resembles a rational decision, that is due to the anthropomorphic interpretation of a biased observer.

This is what happens when you receive the results from a search engine and say: “This site found what I was looking for.” That is a rather inaccurate description of what the search engine has actually accomplished. It simply performed a logical comparison exercise to separate and display, out of a list with millions of entries, the few that most resemble the search term entered. Between this utterly mechanical process and the philosophical elaboration of someone who has found a solution to a problem, there is a considerable distance.
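To make the point concrete, here is a minimal sketch in Python of the kind of mechanical comparison described above. The documents, the query, and the word-overlap scoring rule are invented for illustration; real search engines use far more sophisticated ranking, but the principle is the same: symbols are compared and sorted, nothing is understood.

```python
# A deliberately naive illustration: "searching" is just scoring and sorting.
# The corpus and the scoring rule below are made up for illustration; real
# engines rely on much richer ranking signals, yet the process remains a
# mechanical comparison of symbols, not an act of comprehension.

def score(query, document):
    """Count how many query words also appear in the document (case-insensitive)."""
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    return len(query_words & doc_words)

def search(query, documents, top_n=3):
    """Return the top_n documents with the highest word-overlap score."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    corpus = [
        "how to sharpen a kitchen knife",
        "the history of industrial robots",
        "ethics of autonomous machines and robots",
        "recipe for tomato soup",
    ]
    print(search("ethics of robots", corpus))
    # The "best match" is simply the entry sharing the most words with the
    # query; no reasoning is involved at any point.
```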

That being said, let us go back to roboethics, whose premise is that robots can: a) make ethically questionable decisions; or b) serve as instruments for human actions whose results violate ethical and moral values.

As we have seen, item a) cannot be taken seriously, because no action taken by machines can be considered a truly autonomous and rational decision. That leaves item b), which does not have enough depth to support a relevant debate.

To explain that lack of depth, we resort to a comic example: the hypothetical “knifethics”. This science would consist of debating the implications of the use of knives by humans, taking into account intriguing questions such as: “If a butcher accidentally gets hurt while cutting a piece of meat, can the knife’s manufacturer be blamed for it?”

“Knifethics” is defeated by a simple counter-argument: because the knife is an object entirely dependent on human intervention to produce any kind of result, good or bad, it does not call for an intrinsic ethical debate. And even if a knife maker’s mistake led to the accident, the industry’s legal and ethical codes are already sufficiently developed to deal with the issue.

Whenever we talk about deliberate actions and their consequences, the focus should be on whoever makes the decision. So there is no need to raise a whole tangle of theories about what robots should or should not do; instead, invest in research on the vices and virtues that, motivating human action for good or for evil, will require inanimate objects in order to become tangible acts. Let these objects always be regarded as instruments, which is what they are, and never as ends in themselves.

The views and opinions expressed in these articles are those of the authors and do not necessarily reflect the official policy or position of the AdmEthics Group
