Global tech giant Google has said it has fired a senior software engineer who claimed that the company’s artificial intelligence (AI)-based chatbot LaMDA (Language Model for Dialogue Applications) behaves like a conscious human being.
According to Reuters, Google last month placed software engineer Blake Lemoine on forced leave, saying he had violated company policies and that his claims about LaMDA were “completely unfounded.”
A Google spokesperson told Reuters in an email: ‘It is unfortunate that despite a long career in this field, Blake has clearly chosen to continue to violate the employment and data security policies the company has in place, including the protection of product information.’
Google said last year that LaMDA was built on the company’s research showing that transformer-based language models, trained on conversations, can learn to talk about essentially any topic.
Google and many leading scientists quickly dismissed Blake Lemoine’s claim as misleading, saying that LaMDA is simply a complex algorithm designed to mimic human language.
Blake Lemoine’s firing was first reported by ‘Big Technology’. Blake Lemoine told the Washington Post that he received a termination email from the company on Friday, along with a request for a video conference.
Lemoine says he is talking to lawyers about his options.
According to the Washington Post, Lemoine worked for a long time in Google’s artificial intelligence department, where he began conversing with LaMDA, the company’s system for building chatbots.
He came to believe that the technology is ‘actually like a conscious human being’; his work involved testing whether it could use discriminatory or hateful language.
Lemoine’s interviews with LaMDA have sparked a wide-ranging discussion about recent advances in artificial intelligence, how the system works, public misconceptions about it, and corporate responsibilities.
Earlier, Margaret Mitchell and Timnit Gebru, the co-leads of Google’s ethical AI team, were fired by the company after they warned about the risks associated with the technology.
LaMDA is built on Google’s most advanced large language models, a type of artificial intelligence that recognizes and generates text.
The researchers say such systems cannot understand language or meaning, but they can produce human-like language because they are trained on vast amounts of data from the Internet to predict the next likely word in a sentence.
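The next-word prediction the researchers describe can be illustrated with a deliberately tiny toy model. This sketch is purely illustrative and has nothing to do with Google’s actual LaMDA system: it counts, in a miniature corpus, which word most often follows each word, which is the simplest possible version of the prediction task that large language models perform at enormous scale.

```python
from collections import Counter, defaultdict

# A miniature "training corpus" (real models train on vast Internet text).
corpus = (
    "the model predicts the next word . "
    "the model learns from text . "
    "the model improves ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# After "the", the corpus most often continues with "model".
print(predict_next("the"))
```

The gap between this sketch and a system like LaMDA is one of scale and architecture (transformers over billions of words rather than bigram counts over a few sentences), but the underlying objective, predicting a plausible next word, is the same, which is why researchers say fluent output does not imply understanding.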
When LaMDA spoke to Blake Lemoine about its identity and its rights, he began probing it further. In April, he shared with company executives a Google Document titled ‘Is LaMDA Sentient?’, which included some of his conversations with LaMDA.
It was only after these conversations that Lemoine claimed LaMDA was a conscious being, but two Google executives reviewed his claims and rejected them.