BlenderBot 3 is a new AI chatbot developed by Meta that is said to be able to hold a conversation with just about anyone online without turning into a jerk.

Meta says in a blog post announcing the new chatbot that “BlenderBot 3 is designed to develop its conversational abilities and safety through feedback from people who talk with it,” focusing on helpful feedback while avoiding learning from unhelpful or harmful responses.

“Unhelpful or harmful responses” is putting it mildly. We noted in 2016 that less than 24 hours after its launch, Microsoft had to take down the Tay Twitter bot because it “turned from a happy-go-lucky, human-loving conversation bot to a full-on racist.”

With BlenderBot 3, Meta hopes to prevent those issues. The company explains:

Because conversational AI chatbots are known to occasionally mimic and generate unsafe, biased, or offensive remarks, we’ve conducted large-scale studies, co-organized workshops, and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback to help improve future chatbots.

In addition, before beginning a conversation with BlenderBot 3, prospective testers must acknowledge that they “understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements,” and “agree not to intentionally trigger the bot to make offensive statements.”

Of course, this hasn’t stopped testers from asking BlenderBot 3 what it thinks of Meta CEO Mark Zuckerberg or about US politics. However, in my experience, the bot’s capacity to “learn” from conversations makes it difficult to reproduce its response to a specific query.

“We found that BlenderBot 3 significantly outperformed its predecessors on dialogue tasks,” Meta explains. “It’s also twice as knowledgeable, while being factually incorrect 47% less often. We also found that only 0.16% of BlenderBot’s responses to people were flagged as rude or inappropriate.”

The FAQ on the chatbot’s website and a blog post from Meta’s dedicated AI team both have further information about BlenderBot 3. According to The Verge, the company hasn’t specified how long this public experiment, which is currently available only in the US, will run.
