Google suspends engineer who raised alarm about company AI

Google suspended an engineer last week for disclosing confidential details of a chatbot powered by artificial intelligence, a move that marks the latest disruption to the company’s AI department.

Blake Lemoine, a senior software engineer in Google’s Responsible AI group, was placed on paid administrative leave after he went public with his concern that the chatbot, known as LaMDA (Language Model for Dialogue Applications), has become sentient. Lemoine revealed his suspension in a Medium post on June 6 and discussed his concerns about LaMDA’s potential sentience with The Washington Post in a story published over the weekend. According to The Post, Lemoine also sought outside legal counsel on LaMDA’s behalf.

In his Medium post, Lemoine says he discussed his ethics concerns with people outside of Google in order to gather enough evidence to escalate them to senior management. The Medium post was “deliberately vague” about the nature of those concerns, though they were later detailed in the Post story. On Saturday, Lemoine published a series of “interviews” he conducted with LaMDA.

Lemoine did not immediately respond to a request for comment via LinkedIn. In a Twitter post, Lemoine said he is on his honeymoon and would not be available for comment until June 21.

In a statement, Google rejected Lemoine’s claim that LaMDA is self-aware.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Google spokesman Brian Gabriel said in a statement. “If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

The high-profile suspension is another point of contention within Google’s AI division, which has seen a series of departures. In late 2020, leading AI ethics researcher Timnit Gebru said Google fired her for raising concerns about bias in AI systems. About 2,700 Googlers signed an open letter in support of Gebru, whom Google maintains resigned from her position. Two months later, Margaret Mitchell, who co-led the Ethical AI team with Gebru, was fired.

Research scientist Alex Hanna and software engineer Dylan Baker later resigned. Earlier this year, Google fired AI researcher Satrajit Chatterjee, who challenged a research paper about the use of artificial intelligence to develop computer chips.

AI sentience is a common theme in science fiction, but few researchers believe the technology is advanced enough to create a self-aware chatbot at this point.

“What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them,” AI scientist and author Gary Marcus said in a Substack post. Marcus did not dismiss the idea that AI might one day understand the larger world, but said that LaMDA does not at the moment.

Economist and Stanford professor Erik Brynjolfsson compared LaMDA to a dog hearing a human voice through a gramophone.

Sneha Mali
