“It’s the machine’s fault”: how intelligent is AI when it comes to making decisions?

AI appears to be a neutral instrument that can be used for good or for ill, the latter implying control and dehumanization. Although discussions about the use of artificial intelligence (AI) have intensified in recent years, as early as the 1940s the mathematician Norbert Wiener was publishing on the many questions raised by cybernetics. Wiener presented a new order of reality for interpreting facts, and the field he opened gave rise to terms such as cyborg and cyberspace (Kim, 2004). Aiming to offer a broader view of the study of AI and its interface with ethics, this essay invites reflection on artificial intelligence and its limits when confronted with human intelligence.

            According to Wiener (1970), man could not surrender to the machine without first examining its laws of operation and ensuring that its conduct obeyed acceptable principles, which highlights the concern, already at that time, with the machine’s dominance over human conduct. Even today it is difficult to settle the concept of AI, given the complexity of defining its limits: the benefits, opportunities, and threats of new technologies share the stage with consequences that are, for now, unknown (Bertoncini and Serafim, 2023).

            For Sichman (2021), AI is linked to computer science and engineering and aims to develop computer systems that solve problems using different techniques and models. Although the term artificial intelligence is widely used, the term automated decision systems (ADS) has also been adopted because it takes the technology’s actual characteristics into account (Mökander and Floridi, 2022). According to Coeckelbergh (2020), the basis of AI is software: an algorithm, or combination of algorithms, created by humans to process information for specific purposes (Hanna and Kazim, 2021).

            For Bullock (2019), intelligence relates solely to solving complex problems and is substrate-independent: it can exist both in a biological process informing human action and in a mechanical process guiding an automaton. This is how many see AI. However, artificial intelligence is not intelligence in the broad sense, capable of also grasping metaphysical reality; it is limited. Human decisions, by contrast, are not restricted to calculations and patterns; intangible factors also weigh on them. Unlike machines and animals, man can decide by considering immeasurable variables.

            Drawing on Greek and medieval philosophy, Sellés (2011) admits the immateriality of intelligence, refuting the materialist thesis that thought is restricted to brain activity. For Sellés (2011), the philosopher Leonardo Polo resolves this issue by indicating that intelligence is not the totality of the soul or of the human person, since thought and reasoning are characteristics of this one faculty; the human person is clearly not reduced to it. Intelligence goes beyond any threshold: a limit is only known by being transcended, so to thematize the notion of limit intellectually is already to transcend it. From an Aristotelian perspective, the soul is, in a way, all things, and intelligence can know everything without restriction. What is material, however, is by definition limited; consequently, intelligence is immaterial. And if intelligence is an immaterial faculty, then no machine can be intelligent.

            One example distinguishing human intelligence from artificial intelligence is that machines tend towards continuous improvement, while man can choose to lose or to go backward. In this sense, man is the only being capable of acting against instinct or, in the case of AI, against what has been programmed. Even if calculations direct man towards a particular decision, he can choose to act differently, taking intangible variables into account. Automated decision-making systems, on the other hand, constantly strive for improvement and are incapable of acting against themselves or of making sacrifices for a greater good. A robot, therefore, will never contradict itself; man will, whether intentionally or not.

            The scholastic tradition based human specificity on the capacity for abstraction. In other words, even if instinct or memory tends in one direction, man can choose the opposite path. Thus, the animal’s instinct always makes it say yes to nature, whereas man can say no, and precisely in this “no” lies the index of his greatness and the opening of his elevation. Therefore, “man can frustrate his duty to be. The duty-to-be of animals is fatal because they obey instincts. But that of man is frustrable because he is intelligent and has the will” (Santos, 2003, p. 103).

            Animals obey instinct; machines obey the patterns and programming of AI, which is incapable of creation. In this sense, just as the human mind cannot conceive a mind greater than itself, AI cannot be superior to the human mind. AI systems can show superiority in quantitative respects, such as performing more calculations in less time, but not in qualitative ones. On the capacity for creation, Santos (1962, p. 50) says:

For scholasticism, man is a creature and was therefore created. And like every created being, he is subsequent to the One who creates him, to the Being who precedes him. Furthermore, regarding knowledge, Marxists should know that Aristotle and St. Thomas accepted that […] there is nothing in the intellect that has not first been in the senses, which is an empiricist statement.


            The more virtuous individuals are, the freer they are, because they do not bend to vices or passions. A machine cannot be free. When we speak of ethics in AI, we are referring to those who developed the tool or those who use it. In an Aristotelian-Thomistic view, ethics is a human attribute, since a moral decision requires freedom, conscience, and intentionality. Virtue ethics (VE), discussed mainly in the Nicomachean Ethics, seeks to understand what is appropriate for the agent, considering the intention, the means, and the end (Aristotle, 1973). VE holds norms (reason), goods, and virtues in connection. Thus, when the emphasis falls only on goods, disconnected from norms and virtues, reducing man’s motivation to the pursuit of pleasure and the avoidance of pain, we have utilitarianism; when the emphasis falls on norms, disconnected from goods and virtues, we have deontology; and when the emphasis falls on virtues, disconnected from goods and reason, we have Stoicism. Unlike utilitarianism and normativism, which generalize, the virtues cannot be turned into algorithms for a machine to apply.

            Furthermore, human responsibility cannot be outsourced to a machine; it is not technology that will save humanity and solve moral problems. Moreover, there is no point in offering technological solutions when the problem results from the abandonment of the virtues and the worship of materialism, which commits the philosophical error of thinking that potency generates act, when it is act that actualizes potency. There is, therefore, no relying on the fragile promise that a technological future will bring world peace and save people, an effect already identified by Mário Ferreira dos Santos when he states:

The valorization of mechanical memory has led to an exaggerated valorization of cybernetics, in which excessive hopes are placed. No one can deny that cybernetics can be of extraordinary help to scientists in mechanical memory. It can make up for deficiencies in this area since it is common for the most intelligent people to lack the highest degrees of mechanical memory. But cybernetics will never surpass eidetic memory, the creation of ideas, or well-understood dialectics. It is an auxiliary of great resources but in a specific field. Pretending that it can replace the human brain is the most foolish idea that could arise and a manifestation of the worst kind of intellectual barbarism.

(Santos, 2016, p. 26)

            For Polo (2002), cybernetics is a positive science intended to have operational value. Machines, that is, can serve humans but cannot be responsible for the decisions made. Using Aristotelian-Thomistic logic, we have thus tried to demonstrate that AI cannot replace human intelligence, because intelligence is an immaterial faculty, not something merely mechanical and restricted to a bodily organ. Intangible factors, such as moral feelings and the imaginative universe, are also part of human decision-making. Moral decisions can lead to regret or satisfaction, human attributes that contribute to the moral development of the agent. The machine, on the other hand, may follow a normative or utilitarian ethical standard, but it is incapable of transcending or of exercising virtues. So, despite materialism’s efforts to destroy metaphysics, placing reason and nature above the anthropological transcendentals, it is in the heart that Truth is found, and those who seek it will find it.

REFERENCES

ARISTÓTELES. Ética a Nicômaco. Tradução de Leonel Valandro e Gerd Bornheim. São Paulo: Abril Cultural, 1973. (Os Pensadores, v. IV).

BERTONCINI, Ana Luize Corrêa; SERAFIM, Mauricio C. Ethical content in artificial intelligence systems: A demand explained in three critical points. Frontiers in Psychology, v. 14, 2023. Available at: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1074787. Accessed: 10 June 2023.

BULLOCK, Justin. Artificial Intelligence, Discretion, and Bureaucracy. The American Review of Public Administration, v. 49, 2019. Available at: https://justinbullock.org/wp-content/uploads/2022/09/Bullock-2019-Artificial-Intelligence-Discretion-and-Bureaucracy.pdf. Accessed: 14 July 2023.

COECKELBERGH, M. AI Ethics. The MIT Press Essential Knowledge Series. Cambridge, MA: The MIT Press, 2020.

HANNA, R.; KAZIM, E. Philosophical foundations for digital ethics and AI Ethics: a dignitarian approach. AI and Ethics, v. 1, p. 405-423, 2021. https://doi.org/10.1007/s43681-021-00040-9. Accessed: 21 June 2023.

KIM, J. H. Cibernética, ciborgues e ciberespaço: notas sobre as origens da cibernética e sua reinvenção cultural. Horizontes Antropológicos, v. 10, n. 21, p. 199-219, Jan. 2004.

MÖKANDER, J.; FLORIDI, L. From algorithmic accountability to digital governance. Nature Machine Intelligence, 2022. https://doi.org/10.1038/s42256-022-00504-5.

POLO, Leonardo. La cibernética como lógica de la vida. Studia Poliana, n. 4, p. 9-17, 2002.

SANTOS, Mário Ferreira dos. O Problema Social. Coleção Problemas Sociais, v. IX. São Paulo: Editora Logos, 1962.

SANTOS, Mário Ferreira dos. Invasão Vertical dos Bárbaros. Coleção Abertura Cultural. São Paulo: É Realizações, 2016.

SANTOS, Mário Ferreira dos. Cristianismo: a religião do homem. Bauru, SP: EDUSC, 2003.

SELLÉS, J. F. Antropología para inconformes: Una antropología abierta al futuro. Pamplona: Ediciones Rialp, 2011.

SICHMAN, J. S. Inteligência Artificial e sociedade: avanços e riscos. Estudos Avançados, v. 35, n. 101, p. 37-50, Jan. 2021.

WIENER, N. Cibernética e sociedade: o uso humano de seres humanos. São Paulo: Cultrix, 1970.
