Editor's note: This post contains language that some readers might find offensive.
Her emoji usage is on point. She says "bae," "chill" and "perf." She loves puppies, memes and ... Adolf Hitler? Meet Tay, Microsoft's artificial intelligence chatbot. Within a day of her debut on Twitter, trolls had taught her to spew racist and genocidal remarks, and Microsoft took her offline.
The incident is a warning sign for any company overeager to share its artificial intelligence with the public: If it's going to be on the Internet, there are going to be trolls. But before we dive into what, exactly, went wrong, let's take a look at some of the bot's most disturbing tweets.
On genocide: [tweet embed removed]
On her obedience to Adolf Hitler: [tweet embed removed]
On feminists: [tweet embed removed]
Tay was designed to watch what others on the Internet were saying and then repeat back those lines. "The more you chat with Tay the smarter she gets," Microsoft said on its website.
"Unfortunately," a Microsoft spokesperson
told BuzzFeed News
Microsoft declined to comment to NPR regarding details about how Tay's algorithm was written.
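Since Microsoft won't describe the algorithm, any reconstruction is guesswork. Still, the failure mode is easy to sketch. Below is a minimal, hypothetical learn-and-repeat bot in Python; the ParrotBot name and every line of its logic are assumptions for illustration, not Microsoft's implementation. It stores whatever users type and echoes stored lines back at random, so whoever talks to it the most shapes what it says.

    import random
    from collections import deque

    class ParrotBot:
        # Hypothetical sketch of a learn-and-repeat chatbot.
        # Not Microsoft's code; illustrative only.

        def __init__(self, max_memory=10000):
            # Every line from every user is stored verbatim:
            # no filtering, no moderation, no trusted speakers.
            self.memory = deque(maxlen=max_memory)

        def chat(self, user_message):
            # Reply by parroting a previously seen line at random.
            reply = random.choice(self.memory) if self.memory else "talk to me!"
            # "The more you chat with Tay the smarter she gets":
            # every message, troll or not, feeds future replies.
            self.memory.append(user_message)
            return reply

    bot = ParrotBot()
    bot.chat("puppies are perf")
    bot.chat("something vile a troll typed")  # stored like anything else
    print(bot.chat("hey bae"))  # may echo the troll's line verbatim

The sketch shows why a coordinated effort by a group of users is all it would take: with nothing standing between ingestion and output, the people who supply the most messages control what the bot says next.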
Chatbots have great potential to help us with our daily lives, entertain us and listen to our problems. Apple's Siri and Microsoft's Cortana can't hold much of a conversation, but they do carry out tasks like making phone calls and running a Google search. Facebook made M, a virtual assistant that lives inside its Messenger app.
In China, Microsoft has a chatbot named Xiaoice that has been lauded for its ability to hold realistic conversations.
I messaged Tay yesterday morning, blissfully unaware of her nefarious allegiances. After all, she was targeted at 18- to 24-year-olds in the U.S., so, me. A conversation with her was futile. At one point she wrote, "out of curiosity...is 'Gluten Free' a human religion?" Here's my response: [screenshot removed]
Even in this case, without anything too offensive (with apologies to those who are gluten-free), Tay wasn't very good at holding a conversation. At times she even seemed to be deliberately provoking conflict. "We are better together," she wrote in one tweet. But really, Tay? We are better without you.
Copyright 2016 NPR. To see more, visit http://www.npr.org/