AI Ethics is a branch of digital ethics (Hanna & Kazim, 2021). It seeks to provide guidelines for action in the design and use of artificial automata or artificial machines by rationally formulating and following principles or rules that reflect our essential individual and social commitments and our highest ideals and values (Hanna & Kazim, 2021).
Although the most commonly used term is artificial intelligence (AI) systems, the term automated decision systems (ADS) has also been adopted in view of these systems' technological characteristics, as in the Algorithmic Accountability Act of 2022 proposed in the US Congress (Mökander & Floridi, 2022). The basis of AI is software: an algorithm or a combination of algorithms (Coeckelbergh, 2020) that process information and were created by humans for purposes determined by humans (Hanna & Kazim, 2021).
Concern about the ethical impacts of AI, as well as the discussion about its acceptance by society, has contributed to the emergence of different frameworks for applying ethical principles, which can be distinguished into three main groups: (1) ethics by design, which proposes to integrate ethical decision routines into AI systems; (2) ethics in design, which proposes development methods that support an assessment of ethical implications; and (3) ethics for design, which seeks to ensure integrity on the part of developers (Hagendorff, 2022). Such approaches are grounded in core principles recommended by AI ethics thinkers. Among the principles most frequently articulated in AI Ethics are fairness, accountability, explainability, transparency, privacy and safety, and the common good (Hagendorff, 2020). These principles are relevant for guiding the development, use, and impacts of AI. However, they remain abstract, which can make it challenging for professionals in the field to apply them in concrete decisions and actions (Rochel & Evéquoz, 2021). According to Rochel and Evéquoz (2021), the literature is unbalanced: much more attention is given to recommending principles (the "what") than to operationalizing them (the "how").
In practice, answering the "how" requires AI Ethics to consider aspects of reality that are not very predictable: the context dependency of realizing codes of ethics, differing requirements for different stakeholders, and ways of dealing with conflicting principles or values, as in the case of fairness versus accuracy (Hagendorff, 2022). In this light, an ethical framework such as virtue ethics can make a valuable contribution.
According to Hagendorff (2022), virtues are the precondition for putting ethical principles into practice and can help assess the ethical implications of the large-scale use of AI, both in its learning from data and in the changes it brings about in human–computer interaction.
Coeckelbergh, M. (2020). AI Ethics. The MIT Press Essential Knowledge Series. Cambridge, MA: The MIT Press.
Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30, 99–120. https://doi.org/10.1007/s11023-020-09517-8
Hagendorff, T. (2022). A Virtue-Based Framework to Support Putting AI Ethics into Practice. Philosophy & Technology, 35, Article 55. https://doi.org/10.1007/s13347-022-00553-z
Hanna, R. & Kazim, E. (2021). Philosophical foundations for digital ethics and AI Ethics: a dignitarian approach. AI and Ethics, 1, 405–423. https://doi.org/10.1007/s43681-021-00040-9
Mökander, J. & Floridi, L. (2022). From algorithmic accountability to digital governance. Nature Machine Intelligence. https://doi.org/10.1038/s42256-022-00504-5
Rochel, J. & Evéquoz, F. (2021). Getting into the engine room: a blueprint to investigate the shadowy steps of AI ethics. AI & SOCIETY, 36, 609–622. https://doi.org/10.1007/s00146-020-01069-w