Microsoft Pulls Tay Following Hate Speech Tweets

In case you missed it: earlier this week, Microsoft let its Twitter chatbot Tay loose on the world, only for things to take a bad turn and end with Tay being pulled from Twitter.

Things didn’t start out terribly for Tay. The chatbot, aimed at the 18-24-year-old demographic, was actually having friendly interactions with users, albeit with generic responses that were often repeated. Given the offensive content Tay later parroted, fingers have been pointed at users of 4chan’s /pol/ board, who allegedly fed lines to the chatbot.

Although this was a social experiment to see how Tay would perform, and what the company would eventually need to fix, Microsoft apologized to users who may have been offended by the chatbot’s comments.

Peter Lee, corporate VP of Microsoft Research, didn’t touch on who was responsible or how they managed to exploit Tay. He did say there was a great deal of testing, especially for potential abuse, but that this particular exploit was either missed in testing or Microsoft simply didn’t count on the worst of the internet coming out to play.

Of course, the latter doesn’t seem likely; it’s 2016, and it’s the internet.


Starting with Kabir News in 2013, James has focused on tech, gaming, and entertainment. When not writing, he enjoys catching up on sci-fi and horror shows and comics. He can be followed on Twitter @MetalSwift.