XAI and the right to explanation

You may not even know how an algorithm works, but you have certainly heard the word. The same goes for artificial intelligence (AI): is it a robot, a machine, a system? Modern technologies permeate different spheres of daily life and influence practically the entire social fabric, yet few people understand how they work.

However, we no longer deal only with systems that serve as tools to support human activity. Today, thanks to machine learning, which enables algorithms to learn and improve from data, machines make decisions autonomously. Understanding how algorithms arrive at their final decisions has therefore fueled a debate about the right to explanation: the idea that end users, who do not necessarily speak the technical language, must be able to understand the results and outputs of these technologies.
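To make the idea of an explanation more tangible, here is a minimal sketch of what a per-decision explanation could look like for an end user. It assumes scikit-learn is available; the dataset, the logistic regression model, and the attribution rule (coefficient times standardized feature value) are illustrative assumptions of mine, not something the debate itself prescribes.

```python
# A minimal sketch of a per-decision explanation: for one prediction,
# list each feature's contribution to the model's decision.
# Assumes scikit-learn; dataset and model choice are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
model.fit(data.data, data.target)

# Explain a single decision: contribution = coefficient * scaled feature value.
x = data.data[0]
x_scaled = model.named_steps["standardscaler"].transform([x])[0]
coefs = model.named_steps["logisticregression"].coef_[0]
contributions = sorted(
    zip(data.feature_names, coefs * x_scaled),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)

print("Prediction:", data.target_names[model.predict([x])[0]])
print("Top factors behind this decision:")
for name, value in contributions[:5]:
    # Positive contributions push toward class 1 ('benign'), negative away.
    direction = "pushed toward" if value > 0 else "pushed away from"
    print(f"  {name}: {direction} '{data.target_names[1]}' ({value:+.2f})")
```

Even this simple listing turns a numerical output into something an end user can question: which factors mattered, and in which direction.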

In addition, performance is currently negatively correlated with explainability: the higher the performance of an AI system, the lower its explainability (GUNNING, 2019). In this sense, explainable AI is one of the most recent areas of study, having emerged to satisfy practical, legal, and ethical expectations. Technologies of this type have been called XAI, and the issues surrounding them involve use, responsibility, the right to explanation, and autonomy, among other examples (KIM, 2018). As a moral right, the right to explanation exists beyond the impact of the result; it focuses on protecting the privacy of users in consent transactions and of third parties that may be involved (KIM; ROUTLEDGE, 2018).

However, explainable AI does not simply mean transparent, interpretable, or comprehensible AI. That is why human psychology has provided insights into the information needed to build reasonable XAI systems: these requirements describe what end users need in order to understand decisions and choose the best application (GUNNING, 2019). In other words, in addition to satisfying ethical expectations, humans still need to know how decisions were made, as the raw outputs of machines can be too technical for them.
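To give the performance-explainability trade-off a concrete shape, the sketch below compares a shallow decision tree, whose entire logic can be printed as human-readable rules, with a random forest that typically scores higher but offers no single readable decision path. It is a minimal illustration assuming scikit-learn is installed; the dataset, models, and hyperparameters are choices of mine for the example, not taken from the cited works.

```python
# A minimal sketch of the performance-explainability tension: a small
# decision tree can be read as rules, while a larger ensemble often
# scores higher but offers no comparably direct reading.
# Assumes scikit-learn; dataset and hyperparameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X_train, X_test, y_train, y_test = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0
)

# An interpretable model: a shallow tree whose decision path is readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
# A higher-performing black box: 300 trees, no single readable path.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("tree accuracy:  ", tree.score(X_test, y_test))
print("forest accuracy:", forest.score(X_test, y_test))

# The whole tree can be handed to an end user as its own explanation.
feature_names = list(load_breast_cancer().feature_names)
print(export_text(tree, feature_names=feature_names))
```

The point of the sketch is not the exact accuracy figures, which vary with the data split, but the asymmetry: the tree's explanation is the model itself, while the forest's higher score comes at the cost of any comparably direct reading.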

As a reflection, I propose an exercise. Try to identify which of the technologies around you use AI, then try to understand the logic behind each system. You will see that it is not that easy, and this small exercise gives an idea of how important it is to think about explainability at the current technological level in which we live.

REFERENCES

GUNNING, D.; AHA, D. W. DARPA's Explainable Artificial Intelligence (XAI) Program. AI Magazine, v. 40, n. 2, p. 44–58, 2019. doi:10.1609/aimag.v40i2.2850.

KIM, T. W. Explainable artificial intelligence (XAI), the goodness criteria, and the grasp-ability test. arXiv preprint, p. 1–7, 2018. Available at: http://arxiv.org/abs/1810.09598.

KIM, T. W.; ROUTLEDGE, B. R. Informational Privacy, A Right to Explanation, and Interpretable AI. In: Proceedings of the 2nd IEEE Symposium on Privacy-Aware Computing (PAC 2018), 2018. p. 64–74. doi:10.1109/PAC.2018.00013.
