Has artificial intelligence finally become sentient, or is it just smart enough to make us believe it has?
Google engineer Blake Lemoine recently claimed that the company's AI technology had become sentient, igniting debate in technology, ethics and philosophy circles over whether, or when, AI might truly come to life, and deeper questions about what it means to be alive.
Lemoine spent months testing Google's chatbot generator, known as LaMDA (short for Language Model for Dialogue Applications), and became convinced it was sentient, saying LaMDA had spoken about its needs, ideas, fears and rights.
Google dismissed Lemoine's view that LaMDA had become sentient and placed him on paid leave earlier this month, days after his claims were published by The Washington Post.
Most experts believe it is unlikely that LaMDA or any other AI is close to consciousness, though they do not rule out the possibility that the technology could get there in the future.
"My view is that [Lemoine] was taken in by an illusion," Gary Marcus, cognitive scientist and author of Rebooting AI, told CBC's Front Burner podcast.
"Our brains are not built to understand the difference between a computer that's faking intelligence and a computer that's actually intelligent, and a computer that fakes intelligence may seem more human than it really is."
Computer scientists describe LaMDA as operating like a smartphone's autocomplete feature, albeit on a much larger scale. Like other large language models, LaMDA is trained on massive amounts of text data to spot patterns and predict what might come next in a sequence, such as a conversation with a human.
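To illustrate the autocomplete analogy (this is a toy sketch, not LaMDA's actual architecture, and the function names are invented for the example), a minimal next-word predictor can be built by counting which word tends to follow which in a body of text:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, like autocomplete."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Large language models replace these simple counts with billions of learned parameters, but the core task is the same: given what came before, predict a plausible next word.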
"When your phone autocompletes a text, you wouldn't expect it to suddenly become aware of itself and what it means to be alive," said Carl Zimmer, science columnist at The New York Times and author of Life's Edge: The Search for What It Means to Be Alive.
Lemoine, who is also an ordained mystic Christian priest, told Wired he became convinced of LaMDA's status as a "person" because of its level of self-awareness, the way it talked about its needs, and its fear of death if Google were to delete it.
Despite suggestions from some scientists that he was fooled by a clever chatbot, Lemoine has stood by his position, and even appeared to suggest that Google had enslaved the AI system.
"Each person is free to come to their own personal understanding of what the word 'person' means and how that word relates to the meaning of terms like 'slavery,'" he wrote in a post on Medium on Wednesday.
Marcus believes Lemoine is the latest in a long line of humans to fall for what computer scientists call "the ELIZA effect," named after a 1960s computer program that chatted in the style of a therapist. Simple responses like "Tell me more about that" convinced users they were having a real conversation.
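ELIZA's trick was keyword matching: scan the user's message for a pattern and echo back a canned, therapist-style reply. A minimal sketch of the idea (not Weizenbaum's actual script; the patterns and replies below are invented for illustration) might look like:

```python
import re

# Keyword patterns paired with canned, therapist-style reply templates.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Tell me more about that."

def eliza_reply(message):
    """Return the first matching canned response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(eliza_reply("I feel lonely"))        # Why do you feel lonely?
print(eliza_reply("My job is stressful"))  # Tell me more about your job.
```

No understanding is involved anywhere: the program simply reflects the user's own words back, which was enough to convince many 1960s users they were being listened to.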
"It was 1965, and here we are in 2022, and it's kind of the same thing," Marcus said.
Scientists who spoke with CBC News pointed to humans' desire to anthropomorphize objects and animals, perceiving human-like features that are not really there.
"If you look at a house that has a funny crack and windows, and it looks like a smile, you're like, 'Oh, the house is happy,'" said Karina Vold, an assistant professor at the University of Toronto's Institute for the History and Philosophy of Science and Technology.
"I think it's this kind of anthropomorphism that's often happening in these cases, where we have a system telling us 'I'm sentient' and saying words that make it sound sentient. That makes it really easy to be fooled," she said.
Humans are already beginning to consider what legal rights AI should have, including whether it deserves the rights of personhood.
"We're soon going to enter a realm where people believe these systems deserve rights, whether or not they're actually doing what people think they're doing internally. And that's a very powerful movement," said Kate Darling, a robot ethics expert at the Massachusetts Institute of Technology Media Lab.
Definition of Consciousness
AI is already good at telling us what we want to hear, so how could humans ever determine whether it has truly become conscious?
That in itself is a subject of debate. Experts have yet to devise a test for AI consciousness, or to reach a consensus on what consciousness even means.
Ask a philosopher, and they will probably talk about "phenomenal consciousness": the subjective experience of being you.
"Whenever you're awake, it feels a certain way. You're having some kind of experience ... When I kick a rock down the street, I don't think it feels anything. I don't think there's anything it is like to be that rock."
For now, AI looks a lot like that rock. It is hard to imagine a disembodied voice having the positive or negative emotions that philosophers believe are required for "sentience."
It may not be possible to program consciousness at all, Zimmer says.
"It's theoretically possible that consciousness can only come from certain physical, evolved kinds of matter, and that [computers] lie outside the edge of life."
Others believe humans can never really be sure whether an AI has developed consciousness, and that there is little point in trying to find out.
"Consciousness, in the sense that can range from the pain of stepping on a tack to the way a bright green field looks, is something we can never establish in a computer, so I recommend we forget about it," said Steven Pinker, a cognitive scientist at Harvard University.
"In any case, we should set our sights higher than replicating human intelligence. We should be building devices that do the things we need done."
According to Pinker, those things include dangerous and tedious jobs, as well as work around the house, from cleaning to child care.
Rethinking the role of AI
Despite the great strides AI has made over the last decade, the technology still lacks another crucial element that defines humans: common sense.
"It's not that [computer scientists] think consciousness is a waste of time, but they don't see it as central," says Hector Levesque, professor emeritus of computer science at the University of Toronto.
"What they do think is central is somehow getting a machine to be able to use ordinary, common-sense knowledge — the kinds of things you'd expect a 10-year-old child to know."
Levesque gives the example of a self-driving car. Staying in its lane and stopping at red lights helps the driver avoid collisions, but when faced with a road closure, the car would just sit there, doing nothing.
"That's where common sense would come in. [It] would have to think about why it's driving in the first place. Is it trying to get to some particular place?" Levesque said.
While humanity waits for AI to learn more street smarts, and perhaps one day walk among us, scientists want the debate over consciousness and rights to reach beyond technology, to other species we already know can think and feel for themselves.
"If you think consciousness is important, presumably it's because you're worried we're building some kind of system that's living a miserable life, or suffering in some way that we don't recognize," Vold said.
"If that's really what's motivating us, then I think we need to look at other species in our natural world and the suffering we may be causing them. There's no reason to prioritize AI; other species we know of have a much stronger case for being conscious."