By Ana Luize Corrêa Bertoncini (based on the closing lecture given for the “Special Topics III – Politics and Public Policy” course in the Public Administration program at ESAG/UDESC, semester 2025/1)
In the contemporary debate on integrating technology into public management, we face a dangerous form of reductionism. So-called “artificial intelligence literacy” (AI literacy) is frequently reduced to the mere instrumental capacity to understand code or operate generative tools. This technical definition, however, is insufficient to address the challenges of contemporary democracy.
True AI literacy does not require public managers or citizens to become data engineers. Rather, it involves developing a socio-technical, critical competence capable of questioning the logic underlying these systems and of recognizing an uncomfortable truth: algorithms are not neutral. As Cathy O’Neil (2017) warns, they are “opinions embedded in code,” crystallizing worldviews that can perpetuate historical inequalities and power asymmetries.
The silent risk of automation in public decision-making is not technical failure but the outsourcing of our moral agency. By delegating sensitive allocative choices, such as who receives a benefit or who is audited, to “black box” systems, we risk abandoning the prudence and ethical virtue needed to weigh human context.
Transferring this responsibility to machines that lack genuine moral experience is a critical error, as Serafim et al. (2024) point out. Ethical competence in AI therefore lies, paradoxically, in knowing where the machine should not act. A manager skilled in AI literacy resists “technological solutionism” – Morozov’s (2013) term for the urge to apply technical fixes to complex social problems – and recognizes when statistical efficiency harms human dignity.
Literacy or Crisis?
The absence of this critical competence has measurable consequences. In synthetic simulations of education and research contexts, for example, Bertoncini et al. (2025) document an alarming pattern: in the absence of robust literacy, 78% of human agents accepted AI decisions without review, victims of automation bias and of the belief in technological superiority.
Empirical evidence adds a crucial nuance: passivity toward technology is usually broken only by trauma. In a study conducted in the Netherlands, Alon-Barkat and Busuioc (2023) found that blind trust in algorithms declined significantly only in the aftermath of a major government scandal. The scandal acted as an ethical “wake-up call,” but at an unacceptable social cost.
This confronts us with a pragmatic choice: will critical awareness be forged through preventive education or imposed by administrative disaster?
AI literacy is, ultimately, about exercising what Bankins (2021) calls “meaningful human control”: preserving the autonomy to intervene before harm occurs and ensuring that technology serves the common good rather than eroding public accountability.
REFERENCES
ALON-BARKAT, S.; BUSUIOC, M. Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice. Journal of Public Administration Research and Theory, v. 33, n. 1, p. 153–169, 2023.
BANKINS, S. The ethical use of artificial intelligence in human resource management: a decision-making framework. Ethics and Information Technology, v. 23, n. 4, p. 841–854, 2021.
BERTONCINI, A. L. C.; MATSUSHITA, R.; DA SILVA, S. AI, Ethics, and Cognitive Bias: An LLM-Based Synthetic Simulation for Education and Research. AI in Education, v. 1, n. 1, p. 3, 2025.
CETINDAMAR, D. et al. Explicating AI Literacy of Employees at Digital Workplaces. IEEE Transactions on Engineering Management, v. 71, p. 810–823, 2024.
MOROZOV, E. To save everything, click here: the folly of technological solutionism. New York: PublicAffairs, 2013.
O’NEIL, C. Weapons of math destruction: how big data increases inequality and threatens democracy. New York: Crown, 2017.
SERAFIM, M. C.; BERTONCINI, A. L. C.; AMES, M. C.; PANSERA, D. Inteligencia Artificial (de)generativa: Sobre la imposibilidad de que un sistema de IA tenga una experiencia moral. Scripta Theologica, v. 56, n. 2, p. 467–502, 2024.
This text was developed with the assistance of Gemini 3 Pro. The author guarantees authorship and reviewed all data before release of the final version.
