Blake Lemoine, a software engineer on Google’s artificial intelligence team, has gone public with claims that he encountered “sentient” AI on the company’s systems, after being suspended for sharing confidential information about the project with third parties.
The engineer was placed on paid leave by Google early last week, allegedly for violating the company’s confidentiality agreement, he said in a Medium post headlined “May be Fired Soon for Doing AI Ethics Work.”
In the post, he draws a parallel to other members of Google’s AI ethics ranks, such as Margaret Mitchell, who was fired in a similar manner after raising concerns.
Engineer claims the Google AI is a person
Moreover, in an interview published Saturday by the Washington Post, Lemoine said he had concluded that the Google AI he spoke with was a person, “in his capacity as a priest, not a scientist.”
The AI in question is known as LaMDA, or Language Model for Dialogue Applications, and it is used to create chatbots that interact with humans by adopting different personas. When Lemoine raised the issue internally, senior figures at the company rebuffed his attempts to conduct experiments to verify his claim.
In response, Google spokesperson Brian Gabriel said, “Some in the larger AI community are discussing the long-term prospect of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”