
Microsoft chatbot is taught to swear on Twitter


A chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements.

The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds.

Just 24 hours after the artificial intelligence Tay was unleashed, Microsoft appeared to be editing some of its more inflammatory comments.

The software firm said it was “making some adjustments”.

“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay,” the firm said in a statement.

[Tweet screengrab. Image copyright: Twitter. Caption: Some of Tay’s tweets seemed somewhat inflammatory]

Tay, created by Microsoft’s Technology and Research and Bing teams, learnt to communicate via vast amounts of anonymised public data. It also worked with a group of humans that included improvisational comedians.

Its official account @TayandYou described it as “Microsoft’s AI fam from the internet that’s got zero chill”.

Twitter users were invited to interact with Tay via the Twitter address @tayandyou. Other social media users could add her as a contact on Kik or GroupMe.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said.

“The more you chat with Tay the smarter she gets, so the experience can be more personalised for you.”

This has led to some unfortunate consequences, with Tay being “taught” to tweet like a Nazi sympathiser, a racist and a supporter of genocide, among other things.

Those who attempted to engage in serious conversation with the chatbot also found limitations to the technology, pointing out that she didn’t seem interested in popular music or television.

Others speculated on what its rapid descent into inappropriate chat said for the future of AI.

[Tweet screengrab. Image copyright: Twitter]

After hours of unfettered tweeting from Tay, Microsoft appeared to be less chilled than its teenage AI.

Followers questioned why some of her tweets appeared to have been edited, prompting one to launch a #justicefortay campaign, asking the software giant to let the AI “learn for herself”.

[Tweet screengrab]
[Source: BBC]