Tay was designed to learn more about language over time. Machine learning works by developing generalizations from large amounts of data. In any given data set, the algorithm will discern patterns and then “learn” how to approximate those patterns in its own behavior. Using this technique, engineers at Microsoft trained Tay’s algorithm on a dataset of anonymized public data along with some pre-written material provided by professional comedians to give it a basic grasp of language. The plan was to release Tay online, then let the bot discover patterns of language through its interactions, which she would emulate in subsequent conversations. Eventually, her programmers hoped, Tay would sound just like the Internet.

On March 23, 2016, Microsoft released Tay to the public on Twitter. At first, Tay engaged harmlessly with her growing number of followers with banter and lame jokes. But after only a few hours, Tay started tweeting highly offensive things, such as: “I hate feminists and they should all die and burn in hell” or “Bush did 9/11 and Hitler would have done a better job…” Within 16 hours of her release, Tay had tweeted more than 95,000 times, and a troubling percentage of her messages were abusive and offensive. Twitter users registered their outrage, and Microsoft had little choice but to suspend the account.

Over the next week, many reports emerged detailing precisely how a bot that was supposed to mimic the language of a teenage girl became so vile. What the company had intended to be a fun experiment in “conversational understanding” had become its very own golem, spiraling out of control through the animating force of language.
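The learning loop described here, discover the patterns in a body of text and then emulate them in new output, can be sketched with a toy bigram (Markov chain) generator. This is purely a hypothetical illustration: Tay’s actual model was far more sophisticated and has never been made public, but the sketch shows why a system that imitates its inputs will faithfully reproduce whatever its users feed it.

```python
import random
from collections import defaultdict

def train(corpus):
    """Learn a crude language pattern: map each word to the
    words observed to follow it in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=8):
    """Emulate the training text by randomly walking the
    learned word-to-word transitions, starting from `seed`."""
    out = [seed]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(random.choice(followers))
    return " ".join(out)

# The model can only echo what it was trained on -- garbage in, garbage out.
model = train("the bot learns the patterns the users type and the bot repeats them")
print(generate(model, "the"))
```

Because `generate` can only ever emit word sequences it has seen, the quality of the output is entirely a function of the training data, which is exactly the dynamic that let coordinated users steer Tay toward abuse.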