Fears for the human race from machine intelligence

Will catastrophic Hollywood scenarios become reality in the future? Or will artificial intelligence remain confined to the service of humanity?


In March 2014, at one of the largest technology exhibitions in Hamburg, Germany, an ape-like robot named Charlie was unveiled. Charlie can walk on all four limbs, allowing it to traverse rugged terrain such as the surface of the moon; to some observers, it also looked like a step toward the destruction of humanity!


Google uses this technology for the smart browsing and search features it offers on its site. Video game makers use it to create self-sustaining game worlds, and online stores use it to recommend music and films matched to their customers' tastes. In early 2014, a Russian chatbot even managed to convince judges that it was a 13-year-old boy named Eugene.


In fact, we are still decades away from being able to build a supercomputer with the kind of hostile personality that could enslave the human race, yet AI experts are already working to avert the worst: the moment machines become smarter than humans.


These concerns prompted artificial intelligence experts around the world to sign an open letter, launched at the beginning of this year through the Future of Life Institute, in which they pledge that development in the field of artificial intelligence will be carried out safely and with great caution, to ensure that progress in this field does not spiral out of human control. Among the most prominent signatories of this letter are the founders of DeepMind, in addition to MIT professors and experts from some of the largest tech companies, such as IBM's Watson supercomputer team and Microsoft's research division.


This letter comes after many experts issued warnings about the dangers of ultra-intelligent machines. Two years ago, a United Nations representative called for a ban on the production, use, or even testing of so-called autonomous weapons, which can identify targets and attack them without human intervention. There have also been many ethical criticisms of other uses of intelligent machines. For example, if a self-driving vehicle must swerve to avoid a collision, would it be morally acceptable for the machine to decide whether to sacrifice the lives of pedestrians or cyclists in its path in order to save the lives of those inside the vehicle?


Similarly, renowned physicist Stephen Hawking and entrepreneur, inventor, and Tesla Motors CEO Elon Musk have expressed concern about allowing AI free rein on Earth: "One can imagine such technologies intelligently dominating financial markets, outperforming human researchers in their inventions, and addressing issues beyond human comprehension. Where the short-term impact of AI depends on who controls it, its long-term impact will depend on whether we can control it at all." In other words, the danger is a smart machine that slips entirely out of human control.


Professor Hawking told the BBC: "Successfully developing a complete AI could lead to the annihilation of the human race."


This came in response to a question about updating the technology he uses to communicate with others, which incorporates a primitive form of artificial intelligence.


Professor Hawking says the primitive forms of artificial intelligence developed so far have proven useful, but he fears the consequences of developing a technology that equals or surpasses human intelligence.


"Once it takes off, it could redesign itself at an accelerating pace," he said. "Humans, bound by slow biological evolution, will not be able to compete with a technology that will outstrip them."


"We have to be very careful with artificial intelligence; it could be more dangerous than nuclear weapons," Elon Musk tweeted in August 2014. "I am increasingly inclined to think there should be some regulatory oversight, at the national and international level, just to make sure we don't do something very foolish."


In December 2014, in a speech at a scientific conference, Musk warned humans against "summoning the demon": "We must act very cautiously with regard to artificial intelligence. We may be facing our greatest existential threat. With artificial intelligence we are summoning the demon, and some think they can control the demon after it is summoned. But that won't work." He said this about the development of AI capabilities despite his own well-known technological ventures, through which he openly aims to push humanity toward new frontiers in space science and batteries.


At the same conference, he called for the imposition of local and international laws to prevent humans from doing "something foolish" with smart technologies.


Theories hold that the greater a computer's AI capacity, the greater its ability to learn; and the more it can learn, the more intelligent it becomes. In 2013, Vicarious developed an artificial intelligence program that can pass a widely used online test designed to tell humans and computers apart. Known as "CAPTCHA", short for "Completely Automated Public Turing test to tell Computers and Humans Apart", the test asks humans to retype a short string of distorted numbers or letters.
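The idea behind a text CAPTCHA can be sketched in a few lines. The following is a minimal, hypothetical illustration only, not Vicarious's system or any real CAPTCHA service: it generates a random challenge string and checks that a responder retypes it exactly. A real CAPTCHA would additionally render the string as a distorted image, which is the part that machines historically found hard.

```python
import random
import string

def generate_challenge(length=6):
    """Generate a short random string of uppercase letters and digits,
    as a text CAPTCHA would before rendering it as a distorted image."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

def verify(challenge, response):
    """The responder passes only by retyping the challenge exactly
    (case-insensitive, surrounding whitespace ignored)."""
    return response.strip().upper() == challenge.upper()

challenge = generate_challenge()
print(challenge)                      # e.g. "K3X9QF"
print(verify(challenge, challenge))   # True: a correct retype passes
```

The security of the scheme lies entirely in the rendering step that is omitted here: humans read the distorted image easily, while a program must first solve a vision problem, which is exactly the gap Vicarious claimed to have closed.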


Scott Phoenix, co-founder of Vicarious, was quoted by The Wall Street Journal as saying he wants to go further and create computers that can learn how to cure diseases, produce renewable energy, and do most of the jobs done by humans. The goal is to create “a computer that thinks like a human, except that it doesn’t have to eat or sleep,” the newspaper quoted him as saying.


This year, scientists at the University of Cambridge, where Hawking serves as a director of research, founded the Centre for the Study of Existential Risk, whose goals include studying how humanity can reap the greatest benefit from artificial intelligence while avoiding disasters like those we see in science fiction novels.


But both goals remain far off. Philosopher and author Nick Bostrom surveyed a group of AI experts on when they expect science to achieve "high-level machine intelligence." On average, the respondents believed this would arrive around 2075, with superintelligent machines that outperform human thinking following some thirty years later, though 21% of them said it would never be achieved at all.


It is worth mentioning that the Future of Life Institute is a volunteer-run research organization, founded by dozens of mathematicians and computer science experts around the world, whose main goal is to mitigate the potential dangers of human-level artificial intelligence, which could develop in frightening ways. Its long-term aim is to stop treating the danger of artificial intelligence as mere fantasy.
