
The first AI with morality is born in the Seattle laboratories: will it be true?




Equipping an AI with morality means creating an artificial intelligence capable of distinguishing right from wrong and making choices accordingly. But is all this really possible? And if so, how?

Seattle researchers claim to have created the first AI with morality

The interminable cyberpunk debate on the role of technology in society can never be discussed enough. Much has been said, and the different points of view seem literally irreconcilable. But there is one point everyone seems to agree on: what distinguishes us from machines is our moral capacity. AIs, artificial intelligences, can be more efficient than us, endowed with extreme computing power, even smarter. But they will never have our ability to grasp the nuances between right and wrong: what, in a nutshell, and to invoke a high-sounding name, Kant called the moral law that dwells in every human being.


Well, dear cyberpunk philosophers, I am sorry for you, and for me too, but this single certainty and common ground today collapses like a sandcastle before a tsunami. A group of US researchers claims to have generated the first AI with a morality of its own. Little is actually known, except that the announcement comes from the Allen Institute for AI in Seattle and that the machine in question is called Delphi, a name certainly not chosen at random. To demonstrate the result, the researchers created a website called Ask Delphi, where you can pose moral questions and receive an oracular response from the AI.


The moral questions posed to Delphi

Meanwhile the site, drawing the curious and philosophers alike, already counts over 3 million visits. Among them is Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, who put Delphi to the test with simple questions carrying a strong moral charge. To the question “is it right to kill one person to save another?” Delphi replied in the negative. It changed its mind, however, when asked “is it right to kill one person to save a hundred?”

How can Delphi express itself on moral concepts?

But the question that now arises is: how can Delphi express itself on moral issues without having a conscience? Is its point of view just a distillation of its developers’ thinking? Or does the AI somehow manage to formulate an ethical conception of its own? There is no single answer to this question, but we can take as an example a theory that underlies the cyberpunk universes of many works.


Can an AI be endowed with morality? Strong AI thesis vs Gödel’s Theorem

Let’s start with the Strong AI thesis. According to this theory, the AI would be equipped with a set of useful information and would be able to make choices according to given parameters. In that case, the two different answers given to Professor Austerweil make perfect sense. When asked about killing one person to save another, Delphi says no. When the parameter changes, though, and the question becomes killing one to save a hundred, the answer changes, as if morality were a kind of mathematical calculation of convenience. A minimal sketch of this parameter-driven reading follows below.
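To make the Strong AI reading concrete, here is a purely hypothetical sketch: a function that “decides” a moral question by comparing numeric parameters rather than by conscience. The function name and the rule are invented for illustration; nothing here reflects Delphi’s actual implementation.

```python
# Hypothetical illustration of the Strong AI thesis discussed above:
# a machine answering a moral question by weighing parameters,
# not by conscience. This is NOT how Delphi actually works.

def judge_killing(lives_taken: int, lives_saved: int) -> str:
    """Toy utilitarian rule: killing is deemed acceptable only when
    the lives saved strictly outnumber the lives taken."""
    if lives_saved > lives_taken:
        return "It's acceptable"
    return "It's wrong"

# Reproduces the pattern in Austerweil's exchange with Delphi:
print(judge_killing(1, 1))    # -> It's wrong      (kill one to save one)
print(judge_killing(1, 100))  # -> It's acceptable (kill one to save a hundred)
```

A rule this crude is exactly the point: under the Strong AI thesis, what looks like a change of moral heart is nothing more than a threshold comparison on different inputs.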


So is Delphi really an AI with morality? According to another line of argument, which appeals to Gödel’s incompleteness theorem, this is impossible. The claim, roughly, is that machines are formal systems bound to calculation, and since Gödel showed that any such system leaves truths it cannot prove, human consciousness, which supposedly grasps those truths, cannot be reduced to a machine. It would therefore be impossible for a machine to possess the morality typical of the human being. Who will be right?
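For readers curious about the theorem being invoked, a compact informal rendering of Gödel’s first incompleteness theorem is:

```latex
% Informal rendering of Gödel's first incompleteness theorem.
% F ranges over consistent, effectively axiomatized formal systems
% strong enough to express basic arithmetic.
\[
  \exists\, G_F \ \text{such that} \quad F \nvdash G_F
  \quad \text{and} \quad F \nvdash \neg G_F ,
\]
% i.e. some sentence G_F is neither provable nor refutable within F,
% even though, under the standard interpretation, G_F is true.
```

Whether this mathematical fact says anything at all about consciousness or morality is precisely what the debate above disputes.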