In case you missed it: earlier this week Microsoft let its Twitter chatbot Tay into the world—only for things to take a bad turn and end with Tay taking time off from Twitter.
Things didn’t start out terribly for Tay. The chatbot—geared toward the 18–24-year-old demographic—was actually having friendly interactions with users, albeit with generic responses that were often repeated. Given the offensive content Tay later echoed, fingers have been pointed at users of 4chan’s /pol/ board feeding lines to the chatbot.
Although this was a social experiment to see how Tay would perform—and eventually what the company would need to fix—Microsoft apologized to users who may have been offended by the chatbot’s comments.
Peter Lee, corporate VP of Microsoft Research, didn’t touch on who was responsible or how they managed to exploit Tay. He did say there was a great deal of testing—especially for potential abuse—but this particular exploit was either missed in testing, or Microsoft simply didn’t count on the worst of the internet coming out to play.
Of course, the latter doesn’t seem likely, because it’s 2016 and this is the internet.