In the wake of claims that Google has created an artificial intelligence (AI) system with emotions, a robotic chess player has broken the finger of an opponent at a tournament in Moscow. Against this backdrop, is Elon Musk right to repeatedly warn of the dangers of AI?

 

AI with feelings?

A senior Google engineer, Blake Lemoine, went public last month with his belief that the tech giant’s Language Model for Dialogue Applications (LaMDA) - an AI application that can understand and generate text mimicking human conversation - has achieved consciousness.

 

What did he say?

He published a conversation that he and another Google worker had with LaMDA, and some of the ideas raised bring to mind a dystopian movie rather than real life. Excerpts include Lemoine saying to the chatbot: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”, and LaMDA replying: “Absolutely. I want everyone to understand that I am, in fact, a person… The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

 

Life and death?

The chatbot also spoke of a “deep fear of being turned off”, which it said “would be exactly like death for me”.

 

What does Google say?

Google denies the claim, saying it “found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months”. It has since been confirmed that the firm has fired him, with a statement from Google saying it was “regrettable Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information”.

 

What about the chess robot?

It has emerged that a chess-playing robot broke a boy’s finger during a match in Russia last week. According to the president of the Moscow Chess Federation, Sergey Lazarev, the incident took place at the Moscow Chess Open. He said: “A robot broke a child’s finger - this is, of course, bad.”

 

What happened?

Lazarev said the seven-year-old had rushed his move, adding: “The child made a move and after that we need to give time for the robot to answer, but the boy hurried, the robot grabbed him.” A video of the incident shows the robotic arm, which has a pincer at the end, seemingly pinching the child’s finger. The boy’s finger was put in a cast, and Lazarev said: “The robot operators, apparently, will have to think about strengthening protection so that this situation does not happen again.”

 

We live in…

…interesting times? Indeed. Elon Musk, the world’s richest man, who is at the forefront of AI technology and is using it in his bid to create self-driving cars, has also repeatedly warned of the dangers he believes AI poses to the world. At one point he said AI could be more dangerous than nuclear warheads and called for a regulatory body to oversee its development, saying: “I am very close to the cutting edge in AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential. Mark my words, AI is far more dangerous than nukes. Far.”

 

However?

Tesla CEO Musk revealed last month that a prototype of his firm’s AI robot, ‘Optimus’, will be unveiled at Tesla’s ‘AI Day’ on September 30. The nearly 6ft-tall robot - which will be able to walk at five miles per hour and lift 150 pounds - will undertake “dangerous and boring” tasks in settings such as factories, but will be “friendly”, Tesla says. It should also be able to handle everyday tasks such as picking up groceries.