Facebook says its Blender chatbot 'feels more human' (Details)

Siri, Alexa, or Google Assistant can set a timer, play a song, or check the weather with ease, but for a real conversation you may as well try talking to the toaster.

Speaking as naturally as a person requires common-sense understanding of the world, knowledge of facts and current events, and the ability to read another person’s feelings and character. It’s no wonder machines aren’t all that talkative.

A chatbot developed by artificial intelligence researchers at Facebook shows that combining a huge amount of training data with a little artificial empathy, personality, and general knowledge can go some way toward fostering the illusion of good chitchat.

The new chatbot, dubbed Blender, combines and builds on recent advances in AI and language from Facebook and others. It hints at the potential for voice assistants and auto-complete algorithms to become more garrulous and engaging, as well as at a worrying future in which social media bots and AI trickery are harder to spot.

“Blender seems really good,” says Shikib Mehri, a PhD student at Carnegie Mellon University focused on conversational AI systems who reviewed some of the chatbot’s conversations.

Snippets shared by Facebook show the bot chatting amiably with people online about everything from Game of Thrones to vegetarianism to what it’s like to raise a child with autism. These examples are cherry-picked, but in experiments people judged transcripts of the chatbot’s conversations to be more engaging than those of other bots, and sometimes as engaging as conversations between two humans.

Blender still gets tripped up by tricky questions and complex language, and it struggles to hold the thread of a discussion for long. That’s partly because it generates responses using statistical pattern matching rather than common sense or emotional understanding.

Other efforts to develop a more contextual understanding of language have shown recent progress, thanks to new methods for training machine-learning programs. Last year, the company OpenAI trained an algorithm to generate reams of often convincing text from a prompt. Microsoft later showed that a similar approach could be applied to dialog; it released DialoGPT, an AI program trained on 147 million conversations from Reddit. In January, Google revealed a chatbot called Meena that uses a similar approach to converse in a more naturally human way.
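Microsoft released DialoGPT publicly, and it can be loaded through the Hugging Face transformers library. The sketch below shows the basic generate-a-reply loop these Reddit-trained models share; the model size and sampling settings here are illustrative choices, not the exact configurations the labs used.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load Microsoft's publicly released DialoGPT (trained on Reddit conversations)
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's turn, followed by the end-of-turn token the model expects
input_ids = tokenizer.encode(
    "Does money buy happiness?" + tokenizer.eos_token, return_tensors="pt"
)

# Sample a reply token by token from the model's learned distribution;
# this is statistical pattern matching, not understanding
reply_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

# Print only the newly generated tokens, i.e. the bot's reply
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```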

Facebook’s Blender goes beyond these efforts. It’s based on even more training data, also from Reddit, supplemented with training on other data sets: one that captures empathetic conversation, another tuned to different personalities, and a third that includes general knowledge. The finished chatbot blends together what it learns from each of these data sets.

“Scale is not enough,” says Emily Dinan, a research engineer at Facebook who helped create Blender. “You have to make sure you’re fine-tuning to give your model the appropriate conversational skills, like empathy, personality, and knowledge.”
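Facebook released Blender through its open source ParlAI framework; the sketch below is not that code, just a minimal illustration of the blending idea. It assumes PyTorch and Hugging Face transformers, and the three one-example corpora are invented stand-ins for the much larger skill-specific data sets the article describes.

```python
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Invented stand-ins for the skill-specific corpora (empathy, personality,
# knowledge); the real data sets contain thousands of human conversations.
skills = {
    "empathy":   [("I lost my job today.", "I'm so sorry. That must be really hard.")],
    "persona":   [("What do you do for fun?", "I love hiking. I try to get out every weekend.")],
    "knowledge": [("Who wrote Dune?", "Frank Herbert published Dune in 1965.")],
}

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for step in range(30):
    # Blending: each step draws from a randomly chosen skill, so a single
    # model is nudged toward empathy, personality, and knowledge at once.
    context, response = random.choice(random.choice(list(skills.values())))
    ids = tokenizer.encode(
        context + tokenizer.eos_token + response + tokenizer.eos_token,
        return_tensors="pt",
    )
    loss = model(ids, labels=ids).loss  # standard next-token prediction loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```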

The quest for conversational programs dates to the early days of AI. In a famous thought experiment, computer science pioneer Alan Turing set a goal for machine intelligence of fooling someone into thinking they are talking to a person. There is also a long history of chatbots fooling people. In 1966, Joseph Weizenbaum, a professor at MIT, developed ELIZA, a therapist chatbot that simply reformulated statements as questions. He was surprised to find that volunteers considered the bot real enough to divulge personal information to it.
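ELIZA’s trick can be captured in a few lines. This toy version, with patterns and word mappings invented here rather than taken from Weizenbaum’s original script, simply mirrors first-person statements back as questions.

```python
import re

# One toy rule in the spirit of ELIZA, which ran on a much larger script of
# ranked patterns; the mappings here are invented for illustration.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(statement: str) -> str:
    # Match simple first-person statements like "I am ..." or "I feel ..."
    match = re.match(r"i (?:feel|am) (.+)", statement.lower().rstrip(".!?"))
    if match:
        # Swap pronouns so the statement reads from the listener's side
        mirrored = " ".join(REFLECTIONS.get(w, w) for w in match.group(1).split())
        return f"Why do you feel {mirrored}?"
    return "Please tell me more."

print(eliza_reply("I am anxious about my work."))
# -> Why do you feel anxious about your work?
```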

More sophisticated language programs can, of course, also have a darker side. OpenAI declined to publicly release its text-generating program, fearing it would be used to churn out fake news. More advanced chatbots could similarly be used to make more convincing fake social media accounts, or to automate phishing campaigns.

The Facebook researchers considered not releasing Blender but decided the benefits outweighed the risks. Among other things, they believe other researchers can use it to develop countermeasures. Despite the advances, they say the program remains quite limited.



