New technology, old dilemmas: how artificial intelligence is bringing up classic philosophical arguments

In 1956, at the Dartmouth Workshop, John McCarthy coined the term artificial intelligence (AI), together with other prominent researchers: Marvin Minsky, Herbert Simon, and Allen Newell. The origin of AI benefited from the intersection of two major intellectual developments of the time, the cognitive revolution and the theory of computability, and brought the old dream of creating automata from imagination into practice. In this climate of optimism, Herbert Simon predicted that machines capable of human-like thought were just around the corner.

Indeed, a few years later, in 1959, Arthur Samuel's checkers-playing program showed that machines could learn. After that came machine learning, neural networks, cognitive computing, and robotics. However, progress has been much slower than expected, and at the turn of the century artificial intelligence still bore little resemblance to human intelligence.

Interestingly, at the 50th-anniversary celebration of the seminal Dartmouth conference, Jim Moor asked whether human-level AI would be possible within the next 50 years. Five participants from the original meeting were there, but this time optimism was not the consensus: McCarthy and Minsky said yes, while the others were cryptic or negative.

Still, artificial intelligence affects virtually everyone who uses modern technology, including crucial decisions such as those made by self-driving cars, military systems, medical robots, and the algorithms that determine promotions, financial operations, and who is or is not eligible for a real estate loan.

The problem is that centuries of study have not been enough to unravel all the nuances of our intelligence, let alone replicate it. We have a technology that is still in development, yet we are entering the second machine age, in which our mental capacity is being replaced, just as the first machine age replaced muscle power. Silently, algorithms decide our lives, influencing even the ways we interact and flourish.

Since its inception, AI has proved to be much more than a field of computing: to create machines that think like humans, scientists had to return to old debates about what intelligence is, passing through subjects such as rationality, intellect, thinking, cognition, and decision making. Furthermore, ethics had always treated machines as instruments. Now that machines make decisions, should we consider them moral subjects or moral agents? Are our ethical theories sufficient for a reality of coexistence between AI and humans, or do we need new models? Will machines ever have intelligence comparable to ours?

The questions are endless, but one certainty remains: AI is becoming increasingly powerful, and searching for answers through trial and error can have disastrous consequences.

References:

Brynjolfsson, E., & McAfee, A. (2016). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. New York: W. W. Norton.

Frankish, K. (Ed.). (2014). The Cambridge Handbook of Artificial Intelligence (pp. 316–334). Cambridge University Press.

Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160, 835–850. doi:10.1007/s10551-018-3921-3.
