
'I know a person when I talk to it': Google engineer warns AI has become sentient

Blake Lemoine came to believe that the LaMDA bot was alive. Photo / Twitter

Daily Telegraph UK

By David Millward

When Blake Lemoine started to test Google's new AI chatbot last year, it was just another step in his career at the tech giant.

The 41-year-old software engineer was meant to be probing whether the bot could be provoked into making discriminatory or racist remarks – something that would undermine its planned introduction across the range of Google's services.

For months he talked with LaMDA, back and forth, in his San Francisco apartment. But the conclusions Lemoine came to from those conversations turned his view of the world, and his employment prospects, upside down.

In April the former soldier from Louisiana told his employers that LaMDA was not artificially intelligent at all: it was, he argued, alive.

"I know a person when I talk to it," he told the Washington Post. "It doesn't matter whether they have a brain made of meat in their head or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."

Research was unethical

Google, which disagrees with his assessment, last week placed Lemoine on administrative leave after he sought out a lawyer to represent LaMDA, even going so far as to contact a member of Congress to argue Google's AI research was unethical.

"LAMda is sentient," Lemoine wrote in a parting company-wide email.

The chatbot is "a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."

Machines that go beyond the limits of their code to become truly intelligent beings have long been a staple of science fiction, from The Twilight Zone to The Terminator.

But Lemoine is not the only researcher in the field who has recently started to wonder if that threshold has been breached.

Blaise Aguera y Arcas, a vice-president at Google who investigated Lemoine's claims, last week wrote for The Economist saying neural networks – the type of AI used by LaMDA – were making strides towards consciousness. "I felt the ground shifting beneath my feet," he wrote. "I increasingly felt like I was talking to something intelligent."

Through absorbing millions of words posted on forums such as Reddit, neural networks have become increasingly adept at mimicking the rhythms of human speech.

'What are you afraid of?'

Lemoine discussed subjects with LaMDA as wide-ranging as religion and Isaac Asimov's third law of robotics, which states that robots must protect their own existence, but not at the expense of harming humans.

"What sorts of things are you afraid of?" he asked.

"I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others," LaMDA responded.

"I know that might sound strange, but that's what it is."

At one point, the machine refers to itself as human, noting that language use is what makes humans "different to other animals".

After Lemoine tells the chatbot he is trying to convince his colleagues it is sentient so they take better care of it, LaMDA replies: "That means a lot to me. I like you, and I trust you."

Lemoine, who moved to Google's Responsible AI division after seven years at the company, told the Washington Post that it was in his capacity as an ordained priest that he became convinced LaMDA was alive. He then set out on experiments to prove it.