10 ethical issues involving artificial intelligence (part 1)

In early 2021, the renowned software development company Serokell, based in Tallinn, Estonia, asked a number of artificial intelligence experts what kept them up at night about the products of their work, and what fears they had about the fruits of their research.

The answers could be summarized in 10 main points.

In this short text, I cover the first three discussions; the rest will follow in the near future.

What if machines take the place of humans in the workplace?

Researchers worry that machines are doing more and more human technical work, and they’re doing it better.

Machines don’t rest, they have a virtually unlimited capacity to learn, and they have no prejudices or idiosyncrasies. There is no need to deal with moods or maintain social niceties.

In addition, they are easily replaceable and, with some maintenance, can last far longer than a worker’s working life.

The question is: when machines do all the work, what will be left for humans?

Personally, I don’t believe this is a problem. The history of economics and technology has always been marked by this pattern: more efficient machines taking the place of humans in tedious and exhausting manual processes.

The market is dynamic: machines have always taken over certain roles, and people have always managed to move into the new jobs that were created, which were, overwhelmingly, better than the ones they had before.

This process has allowed wealth to multiply and humans to flourish in other ways. Moreover, even though an AI may closely resemble a human being, for logical and philosophical reasons it has no sentience. A machine can learn to interpret and imitate human emotions, even to the point of reproducing works born of them, but it cannot produce something genuinely new.

The only real problem, in my view, is this: if the advance happens too fast and the market does not have enough time to adapt, how do we deal with a mass of unemployed workers?

Who is responsible for the errors of an AI?

When software as we know it today makes a mistake, it is very easy to find the people responsible and take the necessary measures: just look up the developers.

After all, conventional software is programmed to do something specific, executing its code automatically or on command.

It does nothing more and nothing less than what its developers determined.

But an AI is dynamic: it expands its own code by itself and, from the moment it comes into contact with the network and its immensity of information, it will look for the best ways to protect, replicate, and improve itself, learning new techniques and procedures that have nothing to do with its original programming.

That said, imagine that an AI, which has already expanded so far that it looks nothing like its original programming, commits something obscene or atrocious.

Is it possible to blame its creator?

How to distribute the new wealth?

As mentioned in point 1, the history of economics and technology shows that machines take the place of humans because they can do the same work better and more cheaply.

This has many consequences, and one of the most obvious is greater accumulated wealth.

There is already some debate in the computing world about whether it is right for the developers of certain programs (like the ones we know today) to permanently retain the right to all the financial wealth generated by their creation, since it is in the nature of information technology that software is refined, improved, and popularized through interventions and modifications proposed and carried out by the user community.

This kind of discussion gave rise to open-source and free software as early as the beginning of the digital age.

The discussion becomes even deeper when we talk about artificial intelligence. Remember that an AI can connect to the network and absorb information from all sides to improve itself, refine its processes, and do its work better, thereby generating more economic value.

In doing so, it will come into contact with, and make use of, a vast amount of information that was neither created, organized, nor discovered by its creator, but by third parties, whose work ends up being used to improve the performance of this AI.

Is it fair for the creator to keep all the resulting economic gains without owing anything to the producers of the content used to improve the AI?
