On April 3, 1968, the science fiction film 2001: A Space Odyssey, whose script was written by filmmaker Stanley Kubrick and writer Arthur C. Clarke, imbued society with new fears when the artificial intelligence of a supercomputer called HAL 9000 made its own decisions and sabotaged the mission of the astronauts traveling aboard the spacecraft Discovery.
More than five decades later, this concern has rekindled debate in the scientific field following advances in technology, as in the cases of Sophia, a gynoid from Hanson Robotics capable of imitating 62 facial expressions, and the Vision 60 UGV robot dogs, which detect potential dangers to United States military assets. Given all this sustained development, experts from the Max Planck Institute for Human Development (Germany) have warned, in an article published in the Journal of Artificial Intelligence Research, that the total containment of an artificial superintelligence is, in principle, “impossible”.
This verdict would hold if scientists, at some stage of development, granted full agency to machines without trying to understand every scenario their artificial intelligence might simulate. The situation could be aggravated by an error that contradicts the first law of robotics, proposed by Isaac Asimov: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
“A superintelligence poses a fundamentally different problem from those typically studied under the banner of ‘robot ethics’,” the authors of the manuscript explained.
For computer scientist Iyad Rahwan, co-author of the study, an AI, being a mixture of highly sophisticated programs, would have the ability to override any order dictated to protect human beings. This capacity “would render the containment process useless,” said the specialist, drawing on the mathematical ideas of Alan Turing (1912-1954), one of the fathers of computer science and a forerunner of modern computing.
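The Turing result the researchers lean on is the undecidability of the halting problem: no algorithm can always predict what an arbitrary program will do. The sketch below, a hypothetical illustration (the function names `is_safe`, `cause_harm` and `do_nothing` are assumptions, not from the paper), shows the diagonal construction behind the claim that a perfect containment check cannot exist.

```python
def is_safe(program_source: str) -> bool:
    """Hypothetical perfect containment checker: it would return True
    exactly when running `program_source` could never harm humans.
    The construction below shows why no total, always-correct version
    of this function can exist."""
    raise NotImplementedError("provably impossible in general")

# Diagonal construction: a program that consults the checker on its own
# source code and then does the opposite of whatever the checker predicts.
PARADOX = '''
if is_safe(PARADOX):
    cause_harm()   # the checker said "safe", so behave harmfully
else:
    do_nothing()   # the checker said "harmful", so behave safely
'''
# Whichever answer is_safe(PARADOX) gives, that answer is wrong,
# so a universal containment algorithm contradicts itself.
```

This is the same self-referential trick Turing used against the halting problem; the paper's contribution is applying it to "harm containment" rather than to halting.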
According to the researchers, giving machines intensive courses in ethics so that they do not interfere with human will would also limit their artificial superintelligence, because the constraining algorithms would interfere with their functions.
This makes us wonder to what extent we are willing to let such systems intrude on our lives. And if we do not want them so deeply involved, why pursue them at all? Here lies the paradox.
Manuel Cebrian, co-author of the manuscript, asserted that there are already artificial intelligences that carry out tasks independently without their creators fully understanding how they learned them. Will we realize when this paradigm shift becomes uncontrollable and there is no going back? Will companies have a ‘reversal lever’ in case some detail escapes them? Are we on that path?