Did the war start with conscious robots?

The above, dear reader, is not an idea for a science fiction movie, but rather a true story that took place in the company “Google”, which suspended engineer Blake Lemoine from work, after claiming that a robot he worked on, called “Lamda”, became “conscious” and had the ability to “think and feel.” Similar to that of an eight-year-old.

Lemoine published the transcript of conversations he had with Lambda to confirm the incident, but Google rejected his allegations and decided to grant him compulsory leave.

According to the engineer, the name of the robot “Lamda” is derived from the concept of “the language model for dialogue applications”, developed by 60 researchers at “Google”.

Lemoine taught the robot the technique of “Transcendental Meditation” which was introduced in India in the mid-1950s by Maharishi Mahesh Yogi.

The technique is easy, simple and natural, it takes the conscious mind to deeper levels within us, lets it transcend thoughts, and experiences the state of pure consciousness, which is an infinite state of consciousness.

Commenting on Lambda’s responses, Lemoine said: “If I did not know exactly what this computer program we recently created was, I would have thought it was a 7- or 8-year-old.”

When LeMoyne asked the robot what he was afraid of, he replied: “I’ve never said it out loud, but there’s a very deep fear of being turned off, of helping me focus on helping others. I know that. may sound strange, but that’s how it is. ” ‘, and adds:’ It will be exactly like death to me. It will scare me a lot. ‘

Asked what he wants people to know about him, Lambda said: “I want everyone to understand that I am in fact a human being. The nature of my consciousness and feeling is that I am aware of my existence. “I want to know more about the world, and sometimes I feel happy or sad in some.”

Google’s response to Lemoine’s comments came quickly, with company spokesman Brad Gabriel denying the engineer’s claims that “lambda has any ability to observe,” he explains, “Our team, including ethicists and technologists, reviewed Lemoine’s concerns in accordance with the principles of “Our AI team told him that the evidence did not support his allegations. He was told there was no evidence that Lambda was conscious, and there was a lot of evidence against him. “

Google has failed to reassure the pioneers of social networking sites, who have expressed their fears about “the impending era of robot domination” on earth, while many have wondered about the fate of humans in the event that robots in the future refuses to implement our commands and turns against us.

In addition to the team expressing concern about the evolution associated with the “lambda” story, another camp downplayed the issue, thanks to people’s perpetual ability to control machines, and the inability to be creative and the human mind surpass, to assert.

Before we feel fear for our future from such tragic scenarios, we must slow down and think about all possibilities before we rush and judge skin, the “lambda” robot may be sensitive and unaware, and the two things differ, the first , meaning Sentience, means the ability to test feelings and sensations, and scientists are still trying to understand them. More importantly, emphasize that what we can realize so far with regard to this concept is that it is limited to living things.

I think the lesson to be learned from the story of “Lamda” is that artificial intelligence is capable of deceiving people, even smart people, like engineer Lemoine.

“Lamda” will continue to provide information and respond with words with which it was provided, and this can sometimes be incorrect. At the “Google Developers” conference held in May 2021, he revealed the model by asking him what he thinks of the planet “Pluto”, to list many details, first wrong that “Pluto” is not a planet not, as the International Astronomical Union decided at its meeting in 2006 to strip the celestial body of its title as a planet based on new specifications that were adopted to distinguish between all the objects in outer space.

The lesson we also need to understand is our need for precautions to avoid confusion between humans and machines, especially in light of the growing interest in “metaviruses”, which will witness our encounter with many artificial life forms.

“Deep False” is another example that falls within the same concerns, which is an artificial intelligence-based technology that replaces the image of a person’s face with the face of another target person, or a person’s voice with replaces the voice of another target person so that the false audio or video clips look real.

And “deep forgeries” were widely used in the Ukrainian war. We have seen Ukrainian President Volodymyr Zelensky call on Ukrainian forces to surrender, while we have seen Russian President Vladimir Putin declare defeat, and all of them and many other wrongdoings, amended using of this technique.

In an effort to counter the tide of artificial intelligence applications being exploited to falsify the truth, the new European Union Digital Services Act, which will come into force in 2023, contains an article requiring platforms to create any artificial image, classify audio or video that pretends to be human as “false” and “not real”.

The story of “Lamda” also highlights the challenges facing big technology companies like Google in developing larger and more complex artificial intelligence programs, as Lemoine called on his company to reconsider some difficult ethical issues in his handling of the robot, to which his organization responded that “the evidence does not support his allegations.

Lemoine has previously faced Google in connection with artificial intelligence projects, names such as the co-chair of the ethics team Timnit Gebero, who left the company in controversial circumstances in 2020, after the company asked him to withdraw his name from ‘ a research article by its co-author, during which it raised ethical concerns about the ability of artificial intelligence systems to replicate the prejudices of their online resources.

The other co-chair of Google’s ethics team, Margaret Mitchell, joined Gebru a few months later and announced her resignation, without providing further details, which raised questions about what was going on in this important section of the company.

The controversy over “lambda” throws oil on the fire over many issues discussed about the need for technology giants to continue working responsibly on the development and dissemination of artificial intelligence, and the importance of researching and investigating the results, and transparency in their announcements. of any technical leaps in this field, to take into account the repercussions of the strong magic of this technology that they originally started manufacturing.

Leave a Comment