Artificial Intelligence and Trust

Unlike earlier interactions between people and technology, Artificial Intelligence (AI) adds complexity to the dynamics and exchanges it establishes with people. Because AI has a degree of autonomy in its responses, produced through artificial neural networks, the connection that develops between AI and individuals needs to be better understood and explored in greater depth.

It is important to note that the actions in this system are bidirectional. The AI is an active participant in decision-making, providing primary directives or suggesting courses of action. Even when the final decision is made or executed by a human being, rejecting the AI's recommendation requires justification. This exchange between a person and AI therefore demands trust on the user's part.

Another aspect to consider in this relational model is uncertainty, which can generate insecurity in the human agent. This uncertainty can affect decision-making and trigger the fear of not acting in the best possible way: the person does not know whether it is better to act on the criteria indicated by the AI or to follow their own parameters, without knowing a priori which decision is the most appropriate.

Deciding well is a final action that requires a certain degree of confidence in knowing how to execute the act; in this new form of interaction, however, a decision once restricted to the person is shared with artificial intelligence systems. It demands trust in a technology that provides answers and directs action.

In this sense, more complex ethical problems arise. The lack of clear, cohesive foundations collides with notions of right and wrong, of acting correctly or incorrectly, of being moral or not. These spheres, proper to the human being, reflect the complexity of associations with AI. The moral and ethical dimensions of a person influenced by technology fall within the larger space of trust.

Finally, the complex dynamics established between human beings and AI raise questions that deserve attention:

1. Does trust in AI affect human beings' self-confidence when making decisions? That is, can a person feel secure in themselves when trusting something devoid of morality?
2. Trusting the answers offered by AI is important; but how does trust operate in complex decisions, where the main variable is the life of another human being?
3. Can the interaction between human being and AI in fact be called trust, given that the term characterizes a reciprocal relationship between humans?

These questions demonstrate the need to better understand a person's trust in AI, and perhaps to find a term that better captures this dynamic. Meanwhile, the results of this interaction may generate uncertainties and raise ethical demands. The field remains open for further studies that can better explain what is actually happening among humans, trust, and AI.