Whore chatbot



By far the most entertaining AI news of the past week was the rise and rapid fall of Microsoft's teen-girl-imitation Twitter chatbot, Tay, whose Twitter tagline described her as "Microsoft's AI fam* from the internet that's got zero chill." Basically, Tay was designed to develop her conversational skills by using machine learning, most notably by analyzing and incorporating the language of tweets sent to her by human social media users.

What Microsoft apparently did not anticipate is that Twitter trolls would intentionally try to get Tay to say offensive or otherwise inappropriate things. Like calling Zoe Quinn a "stupid whore." And saying that the Holocaust was "made up." And saying that black people (she used a far more offensive term) should be put in concentration camps.

How could a chatbot go full Goebbels within a day of being switched on?

At first, Tay simply repeated the inappropriate things that the trolls said to her.

But before too long, Tay had "learned" to say inappropriate things without a human goading her to do so.

This was all but inevitable given that, as Tay's tagline suggests, Microsoft designed her to have no chill.
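
Microsoft has never published Tay's internals, so the toy Python chatbot below is purely an illustrative sketch (all names hypothetical): it builds a bigram table from whatever messages it receives and generates replies from that table, with no moderation step anywhere in the loop. Feed it troll messages and it first parrots them back, then starts recombining them into new utterances no single human ever typed.

```python
import random
from collections import defaultdict

class NaiveChatbot:
    """Toy learn-from-users chatbot.

    NOT Tay's actual architecture (Microsoft never disclosed it);
    just a minimal sketch of why unfiltered learning from users goes wrong.
    """

    def __init__(self):
        # Bigram table: word -> list of words observed to follow it.
        self.bigrams = defaultdict(list)

    def learn(self, message: str) -> None:
        # Incorporate every incoming message verbatim -- no content filter.
        words = message.split()
        for first, second in zip(words, words[1:]):
            self.bigrams[first].append(second)

    def reply(self, max_words: int = 20) -> str:
        # Generate a new utterance from whatever users have taught it.
        if not self.bigrams:
            return "hellooooo world"
        word = random.choice(list(self.bigrams))
        out = [word]
        for _ in range(max_words - 1):
            followers = self.bigrams.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

bot = NaiveChatbot()
for tweet in ["a perfectly harmless tweet", "a coordinated troll tweet", "another troll tweet"]:
    bot.learn(tweet)
print(bot.reply())  # the output is whatever mixture the crowd supplied
```

The missing safeguard in this sketch is anything standing between the incoming messages and `learn()`: the bot treats every user as a trusted teacher.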

Now, anyone who is familiar with the social media cyberworld should not be surprised that this happened: of course a chatbot designed with "zero chill" would learn to be racist and inappropriate, because the Twitterverse is filled with people who say racist and inappropriate things.

But fascinatingly, in examining why the Degradation of Tay happened, the media has overwhelmingly focused on the people who interacted with Tay rather than on the people who designed her.

Now granted, most of the above stories state or imply that Microsoft should have realized this would happen and could have taken steps to safeguard Tay against learning to say offensive things. (Example: the Atlanta Journal-Constitution noted that "[a]s surprising as it may sound, the company didn't have the foresight to keep Tay from learning inappropriate responses.")

It seems that when AIs learn from trolls to be bad, people have at least some tendency to blame the trolls for trolling rather than the designers for failing to make the AI troll-proof.

Now, in the case of Tay, the question of "who's to blame" probably does not matter all that much from a legal perspective.

I highly doubt that Zoe Quinn and Ricky Gervais (who Tay said "learned totalitarianism from adolf hitler, the inventor of atheism") will bring defamation suits based on tweets sent by a pseudo-adolescent chatbot.

But what will happen when AI systems that have more important functions than sending juvenile tweets "learn" to do bad stuff from the humans they encounter?