What is Artificial Intelligence? (Part 3)

I started this series of posts here, discussing the creation of the term “artificial intelligence” at the Dartmouth conference, focusing on the words and the scientific context. Subsequently, I moved from the historical context to the broader history here, showing that artificial intelligence has been in people’s imaginations for much longer, not just in the scientific literature. 

But why not simply write a text with a complete concept of what AI is? For me, the answer is that it is still not possible. Many who try end up oversimplifying, cherry-picking only a part of the whole, or getting lost along the way. The number of distinct concepts is immense, and over time I realized that not all of them were wrong. Each author was simply defining AI from a different point of view, a different perspective, or a different part of it. It's not hard to see why people are confused about what AI is (examples here: students and workers).

There’s another complicating factor: the speed of change. Professor Silvio Meira predicts that “artificial intelligence will soon reach the complexity of philosophy” (Meira, 2024). Using the public release of ChatGPT in 2022 as a temporal milestone marking the boom of AI for mass use, Meira states that “today we are in the stone age of AI, but the future arrives in 800 days.” I’m not so sure about this timeline and finish line (I’m even more skeptical after reading the book “The Cult of Information”1), but the speed of transformation is undeniable. To give some examples: since November 2022, ChatGPT has reached the mark of 100 million users (Meira, 2024); GPT-2, released in 2019 and considered a pioneer, had 1.5 billion parameters and a training cost of $50,000, while just three years later, in 2022, PaLM was released with 540 billion parameters and a training cost of $8 million (Maslej et al., 2023).

We are indeed in the stone age of AI, sometimes perplexed and sometimes fearful, still learning to deal with our own inventions. The difference between our situation and that of our Paleolithic ancestors, however, is that the pace of change is now measured practically in days. How can we keep up with and understand all this? Moreover, how can we learn to use and apply AI in our lives? There’s no shortage of news about hallucinations, black-box models, and the future job market. Even so, I believe there’s no need to act out of fear or by trial and error. That is, we don’t need to go around hitting ourselves on the head with “stones” to see what happens!

When I started studying the subject three and a half years ago, the lack of consensus and the differing descriptions of what AI is confused me immensely. Based on that experience, I opted for this lengthy preamble and parts I and II before getting into technological topics. This reflection is my suggestion for developing the critical sense to distinguish facts from pretensions and technology from a field of study, and to understand the benefits and limitations, “dotting the i’s and crossing the t’s” regarding the conscious use of AI. That’s why looking at AI as a whole is much more realistic and helpful than seeking a single concept.

First of all, by understanding the trajectory and context, I believe it becomes much easier to comprehend the nuances of this young field (about 70 years old). As a scientific area, AI is interdisciplinary, encompassing both recent disciplines, such as computing, cybernetics, and cognitive psychology, and earlier studies in philosophy. As a scientific field, AI seeks to understand human intelligence in order to replicate it (Franklin, 2014). In my view, interdisciplinarity is one of the great strengths of the field.

Then it is necessary to understand the technical side of AI as technology, because it is already everywhere and often invisible, integrated into devices (Kim et al., 2021). In this context, we need some idea of which technologies the AI field develops, visible or not. I guarantee that this understanding has little to do with the technical knowledge of programmers. In 1988, Roszak contested the belief, spread alongside personal computers, that computer literacy had to be included in education. Some time later, we realized in our daily lives that it is not mandatory to know how to program in order to use computers and related devices. Programming was left to programmers and other technicians. Now we talk about AI literacy: same story, new terms.

Under this broad umbrella of the AI field, there are various technologies, and their names, usually in English, help us understand them to some extent. For example, in this enlightening interview, Professor Fei-Fei Li mentions that machine learning is another, less hyped name within the AI field, one that represents most of the AI technologies we know today. Basically, machine learning consists of mathematical models built by computers so that a program can interact with data and learn to make predictions. Other examples of tools in AI systems include deep learning, neural networks, computer vision, and natural language processing.
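To make that definition concrete, here is a minimal sketch, entirely my own illustration (not from any of the sources cited in this post): a program is shown examples and fits the parameters of a simple mathematical model, which it can then use to predict unseen inputs. Real machine learning models are vastly larger, but the principle is the same.

```python
# A toy illustration of "machine learning": fit a mathematical model
# (here, a straight line y = a*x + b) to example data, then use the
# learned parameters to make predictions on new inputs.

def fit_line(xs, ys):
    """Ordinary least squares for a single input variable."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var              # learned slope
    b = mean_y - a * mean_x    # learned intercept
    return a, b

# "Training data": the program never sees the rule, only examples.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # secretly generated by y = 2x + 1

a, b = fit_line(xs, ys)
prediction = a * 6 + b         # predict for an input it has never seen
print(a, b, prediction)        # -> 2.0 1.0 13.0
```

The point is that no one programmed the rule "y = 2x + 1" into the machine; it was recovered from data, which is exactly what distinguishes learning from conventional programming.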

Knowing AI’s categories or types helps non-technicians understand the technique. AI can be classified according to the nature of the intelligence it demonstrates, encompassing cognitive, emotional, and social aspects. Thus, it can be analytical, reflecting reasoning ability; inspired by human intelligence, adding emotional aspects; or humanized, incorporating characteristics that attempt to mimic emotional and social intelligence as well (Russell & Norvig, 2022).

Regarding functionality or objectives (in traditional logic, this is called definition by extrinsic cause), systems are defined, on one dimension, in comparison to humans or to an ideal rationality and, on the other, by thought or behavior. From these two dimensions, four possible combinations arise: thinking humanly, acting humanly, thinking rationally, and acting rationally. The first two seek human-like intelligence and involve empirical and psychological studies of human behavior and thought processes. In the case of thought, the aim is to understand human thinking through introspection, psychological experiments, and brain observation. The premise is that once the theory of mind is sufficiently accurate, it will be possible to express it in a computer program. Fiction books and movies tend to define AI in these human-centered terms, depicting systems that think like humans (Russell & Norvig, 2022).

Additionally, AI can be categorized according to its stage of development: Narrow AI, with specific abilities for limited tasks; General AI, indicating broader intelligence; or Superintelligence, representing a level of intelligence that surpasses human capacity in all aspects. This taxonomy provides a comprehensive framework for understanding artificial intelligence’s different manifestations and evolutions (Russell & Norvig, 2022). Today, we only have narrow AI systems (even those that seem more intelligent). As a 1970s quote mentioned by Fei-Fei Li goes, the most advanced chess-playing algorithm will still make a good chess move while the room is on fire.

Within Narrow AI, we can distinguish two subcategories based on functionality: Reactive Machine AI (RM) and Limited Memory AI (LM). Reactive Machine AI consists of systems that have no memory. These systems are designed to perform specific tasks and operate only on the data available at the moment; they cannot remember previous results or decisions. Examples include IBM’s Deep Blue and Netflix’s movie recommendation system. Many machine learning and deep learning models also fall into this category.

Limited Memory AI, on the other hand, can recall past events and outcomes. These systems monitor objects or situations over time, using past and present data to make decisions, but they do not retain this data in a long-term library of experiences. As they are trained on more data, their performance improves. Examples of LM include virtual assistants like Siri and Alexa, as well as autonomous mechanical systems such as drones, robots, and autonomous vehicles.
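The contrast between the two subcategories can be illustrated with a toy sketch of my own (the agents, thresholds, and inputs below are invented for illustration, not real systems): a reactive agent decides from the current input alone, while a limited-memory agent keeps a short window of recent observations, and older history falls away.

```python
# A toy contrast between the two Narrow AI subcategories described above.
from collections import deque

class ReactiveAgent:
    """Reactive Machine AI: no memory; the decision depends only on
    the current input, like a fixed evaluation rule."""
    def decide(self, observation: float) -> str:
        return "act" if observation > 0.5 else "wait"

class LimitedMemoryAgent:
    """Limited Memory AI: keeps a short window of recent observations
    and decides from their average, discarding older history."""
    def __init__(self, window: int = 3):
        self.recent = deque(maxlen=window)  # old data falls off the end

    def decide(self, observation: float) -> str:
        self.recent.append(observation)
        avg = sum(self.recent) / len(self.recent)
        return "act" if avg > 0.5 else "wait"

reactive = ReactiveAgent()
memory = LimitedMemoryAgent()
for obs in [0.9, 0.9, 0.9]:     # a run of high readings in the past
    memory.decide(obs)

print(reactive.decide(0.2))     # -> wait: only the current value matters
print(memory.decide(0.2))       # -> act: recent highs still dominate the average
```

Given the same low current input, the two agents answer differently, which is the whole distinction: the limited-memory agent's recent past influences its decision, but once those observations slide out of its window, they are gone, with no long-term library of experiences.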

In summary, I suggest understanding AI as a scientific field that is developing various technologies which, viewed according to their objectives and functionalities, become much easier to comprehend. Best of all would be to stop calling everything artificial intelligence, as Professor Michael I. Jordan highlights. After all, not everything is AI, and the term suggests that these systems are intelligent like us. Despite the inevitable comparisons and the use of human intelligence and rationality as parameters, as I showed above, models are still far from being like us. That confrontation, however, is a topic for another post. For now, I will write a spin-off of this series explaining other terms useful for understanding AI within the proposal I presented today.

  1. In the book “The Cult of Information,” Theodore Roszak develops the idea that people in the information economy were prone to excessive optimism, making surprising predictions about technological advances that were always accompanied by inaccuracies and backed by few real guarantees or foundations. Those interested can get an idea from this interview here. ↩︎

References:

Buchholz, L. (2024). 84% of employees are confused about what AI is – despite using it. UNLEASH. Available at: https://www.unleash.ai/artificial-intelligence/84-of-employees-are-confused-about-what-ai-is-despite-using-it/

Glover, E. (2024). Black Box AI. Built In. Available at: https://builtin.com/articles/black-box-ai

Franklin, S. (2014). History, motivations, and core themes. In K. Frankish & W. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence. Cambridge, United Kingdom: Cambridge University Press.

Kim, T. W., Maimone, F., Pattit, K., Sison, A. J., & Teehankee, B. (2021). Master and Slave: The Dialectic of Human-Artificial Intelligence Engagement. Humanistic Management Journal, 6(3), 355–371.

Lea, K. (2024). Students are still confused about AI. Wonkhe. Available at: https://wonkhe.com/blogs-sus/students-are-still-confused-about-ai/

Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., … Perrault, R. (2023). The AI Index 2023 Annual Report. AI Index Steering Committee. Stanford.

Meira, S. (2024). Estamos na era da pedra lascada da IA, mas o futuro chega em 800 dias [We are in the stone age of AI, but the future arrives in 800 days]. Brazil Journal. Available at: https://braziljournal.com/silvio-meira-estamos-na-era-da-pedra-lascada-da-ia-mas-o-futuro-chega-em-800-dias/

Microsoft. (2024). Work Trend Index: Microsoft’s latest research on the ways we work. Available at: https://assets-c4akfrf5b4d3f4b7.z01.azurefd.net/assets/2024/05/2024_Work_Trend_Index_Annual_Report_663d45200a4ad.pdf?utm_source=The+Shift+Newsletter&utm_campaign=e16c74e660-EMAIL_CAMPAIGN_2024_05_10_10_45&utm_medium=email&utm_term=0_-e16c74e660-%5BLIST_EMAIL_ID%5D

MIT Sloan School of Management. (2021). Addressing AI Hallucinations and Bias. Available at: https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/

Pretz, K. (2021, March 31). Stop Calling Everything AI, Machine-Learning Pioneer Says. IEEE Spectrum. Available at: https://spectrum.ieee.org/stop-calling-everything-ai-machinelearning-pioneer-says

Roszak, T. (1988). O culto da informação [The Cult of Information]. Editora Brasiliense.

Russell, S., & Norvig, P. (2022). Artificial Intelligence: A Modern Approach (4th Global ed.). Pearson Series in Artificial Intelligence, 19–78.

ThinkingAllowed. Theodore Roszak, 1933–2011 – Cult of Information (complete) – Thinking Allowed w/ Jeffrey Mishlove. YouTube, 10/07/2011. Available at: https://www.youtube.com/watch?v=Y4mzEvqsiuY

Young and Profiting. Stanford’s Fei-Fei Li: “The Godmother of AI” Unveiling Human-Centered Approach To AI. YouTube, 25/04/2024. Available at: https://www.youtube.com/watch?v=IePcaP5FY3Q