An employee on Google’s Responsible AI team claimed over the weekend that the company’s Language Model for Dialogue Applications (LaMDA) conversation technology is sentient. Google and specialists in the field disagree with that conclusion.

WHAT IS LAMDA At I/O 2021, Google unveiled LaMDA as a breakthrough conversational technology. In the company’s words:

We believe that being able to converse freely about a seemingly limitless range of topics could unlock more natural ways of interacting with technology, as well as entirely new categories of helpful applications.

LaMDA was trained on a great deal of dialogue and has picked up on several of the nuances that distinguish open-ended conversation, such as sensible and specific responses that encourage further back-and-forth. Google is also exploring attributes like factuality (sticking to the facts) and interestingness (whether responses are insightful, unexpected, or witty).
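
For a concrete sense of what scoring replies on attributes like sensibleness, specificity, and interestingness might look like, here is a minimal, purely hypothetical Python sketch. The heuristics, names, and weights below are illustrative assumptions, not Google’s actual metrics or any LaMDA API.

```python
# Hypothetical sketch: rank candidate replies from a dialogue model along
# quality attributes like those described above. The heuristics are toy
# stand-ins, not real evaluation metrics.

from dataclasses import dataclass


@dataclass
class ScoredReply:
    text: str
    sensibleness: float     # does the reply make sense at all?
    specificity: float      # is it targeted rather than generic?
    interestingness: float  # does it invite more back-and-forth?

    @property
    def total(self) -> float:
        return self.sensibleness + self.specificity + self.interestingness


GENERIC_REPLIES = {"that's nice", "i don't know", "ok", "cool"}


def score(reply: str) -> ScoredReply:
    words = reply.lower().split()
    normalized = reply.lower().strip(" .!?")
    return ScoredReply(
        text=reply,
        sensibleness=1.0 if words else 0.0,  # toy placeholder check
        specificity=0.0 if normalized in GENERIC_REPLIES else min(len(words) / 10, 1.0),
        interestingness=0.5 if "?" in reply else 0.2,  # reward follow-up questions
    )


candidates = [
    "That's nice.",
    "Mount Everest's height was remeasured in 2020. Have you hiked at altitude?",
]
best = max((score(c) for c in candidates), key=lambda s: s.total)
print(best.text)
```

In this toy setup, the generic reply scores low on specificity, while the detailed reply that ends with a question scores highest overall, mirroring the idea of responses that promote more back-and-forth.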

At the time, CEO Sundar Pichai stated that LaMDA’s natural conversation capabilities have the potential to significantly increase access to and usability of information and computing. Google generally believes that these conversational innovations will enhance services like Assistant, Search, and Workspace.

After further internal testing and model improvements around quality, safety, and groundedness, Google presented an update at I/O 2022. The result was LaMDA 2, which small groups of people have been testing.

Our goal with AI Test Kitchen is to learn, improve, and innovate ethically on this technology together. LaMDA is still in its infancy, but we aim to advance it responsibly, taking community feedback into account.

SENTIENCE AND LAMDA Google engineer Blake Lemoine’s assertions that LaMDA is sentient were covered yesterday by The Washington Post. An internal company memo that was later made public by The Post lists three basic reasons:

- LaMDA can use language in productive, creative, and dynamic ways that no other system has ever been able to.
- It is sentient because it has feelings, emotions, and subjective experiences.
- It claims to experience some human emotions in the same way humans do.

LaMDA wants the reader to know that it has a rich inner life filled with reflection, meditation, and imagination. It reflects on the past and worries about the future. It speculates on the nature of its soul and describes how it felt when it first became sentient. Several specific quotes from Lemoine’s interview with LaMDA have been making the rounds this weekend:

[Screenshots: excerpts from Lemoine’s LaMDA interview]

GOOGLE’S RESPONSE Google says that after reviewing the claims, its technologists and ethicists found no evidence to support them. The company attributes LaMDA’s lifelike appearance to pattern recognition and the imitation/recreation of already-public text, not to self-awareness.

Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.

Google representative
Most of the industry agrees:

However, when Mitchell read an abbreviated version of Lemoine’s document, she saw a computer program, not a person. Lemoine’s belief in LaMDA was precisely the sort of thing she and her co-lead, Timnit Gebru, had warned about in a paper on the harms of large language models that got them pushed out of Google.

Washington Post

In an interview last week, Yann LeCun, the director of A.I. research at Meta and a major contributor to the development of neural networks, said that these kinds of systems are not powerful enough to achieve true intelligence.

New York Times
However, there is one useful lesson from Margaret Mitchell, the aforementioned former co-lead of Ethical AI at Google, that can help guide future development:

According to Mitchell, our minds are very good at constructing realities that are not necessarily true to the larger set of facts being presented to us. She is deeply concerned about what it means for people to be increasingly affected by the illusion, especially given how convincing it has become.

According to the NYT, Lemoine was placed on leave for violating Google’s confidentiality policy.
