Google fires software engineer who claimed its AI chatbot is human-like

Alphabet Inc's Google, the world's largest search engine, said on Friday that it has fired a senior software engineer who claimed the company's artificial intelligence (AI) chatbot resembles a human.


The company said the dismissed engineer, Blake Lemoine, had described its AI chatbot LaMDA as human, that he violated company policies, and that it found his claims about LaMDA "totally baseless". According to Google, the Language Model for Dialogue Applications (LaMDA) was developed on the basis of the company's own research.

What is LaMDA?

LaMDA stands for Language Model for Dialogue Applications. It is a chatbot built on Google's advanced language model and uses artificial intelligence to converse with users. The model is trained on a vast collection of text crawled from the internet, giving it access to trillions of words.

Blake Lemoine broke policy

Google had placed Lemoine on leave last month before dismissing him. A Google spokesperson explained the company's position in an email to Reuters, saying it was regrettable that, despite lengthy engagement on the topic, Lemoine chose to persistently violate clear employment and data security policies, which include the need to safeguard product information.

Scientists dismiss Lemoine's claims

According to reports, Lemoine had claimed that a human-like mind could be behind the chatbot's conversations, and it was he who made public a transcript of his conversation with the LaMDA-based chatbot. Many prominent scientists, as well as Google, dismissed his claims as misleading, saying LaMDA is simply a complex algorithm designed to produce human-like language.