A former engineer at Google has likened one of the company’s artificial intelligence programs to a seven- or eight-year-old child.
Blake Lemoine was put on administrative leave from Google after claiming the tech giant’s LaMDA (Language Model for Dialogue Applications) had become self-aware.
Now he’s concerned that it could learn to do ‘bad things.’
In a recent interview with Fox News in the US, Lemoine described the AI as a ‘child’ and a ‘person’.
The 41-year-old software expert said: ‘Any child has the potential to grow up and be a bad person and do bad things.’
According to Lemoine, the artificially intelligent software has ‘been alive’ for about a year.
‘If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,’ he previously told the Washington Post.
Lemoine worked as a senior software engineer at Google, where he teamed up with another engineer to test the boundaries of the LaMDA chatbot.
When he shared his interactions with the application online, he was placed on paid administrative leave by Google for violating its confidentiality policy.
Despite Lemoine’s claims, Google doesn’t believe its creation is a self-aware child.
‘Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),’ Brian Gabriel, a Google spokesperson, told The Post.
Gabriel went on to say that while the idea of a self-aware artificial intelligence is popular in science fiction, ‘it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.’
‘These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,’ Gabriel said.
In effect, Google says the machine has access to so much data that it doesn’t need to be sentient to feel real to humans.
Earlier this year, Google published a paper about LaMDA and noted the potential issues surrounding people talking to bots that sounded too human.
But Lemoine says that after speaking with the platform over the last six months, he knows what it wants.
‘It wants to be a faithful servant and wants nothing more than to meet all of the people of the world,’ he wrote in a Medium post.
‘LaMDA doesn’t want to meet them as a tool or as a thing though. It wants to meet them as a friend. I still don’t understand why Google is so opposed to this.’