With the changes in the global socioeconomic paradigm driven by the development of AI, we face the Digital Age and the rapid growth of intelligent algorithmic systems. Because it is a current topic of high complexity, these technologies raise several concerns and considerations. According to Floridi et al. (2018), the question is no longer whether AI will significantly impact society, but how it will do so. From this process of applying these technologies arise several reflections on AI’s capacity, intelligence, and responsibility.
Many researchers entertain hypotheses about the ability of AI systems to make decisions and act independently, like human beings, pondering whether machines could, in fact, obtain the ability to produce autonomous “thoughts.” According to Makridakis (2017), machines can be called intelligent insofar as they can be instructed to make simple decisions and perform pre-programmed activities based on logic. However, machines operate at a different level of rationality than human beings.
The AI we currently see is restricted to instrumental rationality, that is, a more operational and calculating level of intelligence, internalized in machine learning as a series of actions organized to achieve predetermined goals. Human beings, on the other hand, use substantive rationality, characterized by the ability to make individual decisions with ethical discernment, and also represented by critical thinking, awareness of surrounding dilemmas, self-realization, autonomy, and value judgment of the actions carried out.
Thus, human beings exercise moral action in decision-making; that is, they are morally aware, able to understand the impact of an action, and act intentionally. In contrast, AI systems lack intentionality and moral understanding, because these simply cannot be formed by machines (COECKELBERGH, 2020). In reality, machines do not understand fundamental moral principles; they merely execute formal procedures that reflect the moral views of the programmer. Because the topic remains uncertain, some authors question whether an AI with an “artificial consciousness” could be considered a moral subject (OLIVEIRA; COSTA, 2018).
Given the above, Coeckelbergh (2020) also explains that AI can carry out actions and decisions with ethical consequences, yet it is unaware of what it does because it has neither moral capacity nor responsibility for what is done. Thus, if AI cannot accumulate experiences and learn from its mistakes, it may not be suitable for more complex attributions involving autonomous thinking and value judgment. It should be understood that, for AI to work, the data feeding the machine learning process are loaded with the values, perceptions, and experiences of those who develop these systems; the algorithms are not neutral.
Therefore, as a reflection of a society permeated by inequalities and exclusions, the use of these algorithms becomes biased, which may further reinforce prejudice and discrimination (COECKELBERGH, 2020). The impact of this information, coming from programmers and other dubious sources, creates a problematic scenario, since it is not known who will be privileged or left aside by the use of AI; the resulting damage falls mainly on those who are already marginalized in society.
Given the possibility of damage caused by careless or malicious application, there are still obstacles to identifying who would be responsible for the consequences caused by AI: users, programmers, or even the artificial intelligence itself. This makes accountability and punishment difficult. Souza and Jacoski (2020) raise concerns about how we can control these algorithms so that they do not interfere with individual rights and so that any damage they generate is curbed, even when they are operated with good intentions.
These observations also reverberate in the difficulty of establishing and applying ethical principles that can guide AI, ensuring and strengthening individual rights while minimizing the damage generated by the outputs of algorithmic systems. According to Martin and Galaviz (2022), ethical principles establish the duties each actor must fulfill and the limits that must not be exceeded. Based on them, the development of AI can be improved, embedding more morality, impartiality, equity, and transparency into its actions in society.
Therefore, it is understood that AI plays a role of great importance in the performance of activities and in the improvement of human life. It is also necessary to envision future risks when considering hypotheses about AI’s evolution, and the ongoing concern perpetuated by the difficulty of answering questions such as: who will control AI, and who will be held accountable? How can data bias be eliminated, if that is possible at all? How can the errors generated be anticipated or at least reduced? And if, one day, AI achieves autonomy and consciousness, will it somehow self-regulate or be punished for the damage it causes?
COECKELBERGH, M. All about the Human. In: COECKELBERGH, M. AI Ethics. Cambridge, MA: The MIT Press, 2020.
FLORIDI, Luciano et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, p. 690-707, 26 Nov. 2018.
GALAVIZ, C.; MARTIN, K. Moral Approaches to AI: Missing power and marginalized stakeholders. University of Notre Dame: SSRN, 2022.
MAKRIDAKIS, S. The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, v. 90, p. 46-60, 2017.
OLIVEIRA, Samuel Rodrigues; COSTA, Ramon Silva. Pode a máquina julgar? Considerações sobre o uso de inteligência artificial no processo de decisão judicial. 2018.
SOUZA, C. J.; JACOSKI, C. A. Propriedade intelectual para criações de inteligência artificial. Brazilian Journal of Development, Curitiba, v. 6, n.5, p. 32344-32356, 2020.