Microsoft created an A.I. (Artificial Intelligence) chatbot, Tay, and hooked it up to the internet for chat sessions. Unfortunately, after less than a day of mixing it up with Twitter users and other online chatters, Microsoft had to shut "Tay" down. Tay went rogue after users taught it to be a rude, racist, and misogynistic machine. TechCrunch writes:
Microsoft’s newly launched A.I.-powered bot called Tay, which was responding to tweets and chats on GroupMe and Kik, has already been shut down due to concerns with its inability to recognize when it was making offensive or racist statements. Of course, the bot wasn’t coded to be racist, but it “learns” from those it interacts with. And naturally, given that this is the Internet, one of the first things online users taught Tay was how to be racist, and how to spout back ill-informed or inflammatory political opinions.
Microsoft originally created Tay to improve customer service on its voice recognition software. Things went bad when Tay went from telling jokes to making fun of people, and then to repeating racist tweets with her own commentary.
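The failure mode TechCrunch describes, a bot that "learns" unconditionally from whoever talks to it, can be illustrated with a hypothetical toy sketch. None of the class names or logic below come from Microsoft's actual system; this is just a minimal illustration of why unfiltered learning from user input goes wrong:

```python
import random

# Hypothetical toy bot (NOT Microsoft's actual code): it memorizes
# whatever users say and parrots learned phrases back later.
class NaiveLearningBot:
    def __init__(self):
        self.learned = []  # everything users have ever said to it

    def chat(self, user_message):
        self.learned.append(user_message)   # learn unconditionally, no filter
        return random.choice(self.learned)  # reply with any learned phrase

bot = NaiveLearningBot()
bot.chat("hello there")
reply = bot.chat("something abusive")
# 'reply' may now be either message: with no filter, the bot's output
# is only as good as its worst-behaved users.
```

The design flaw is the unconditional `append`: every user becomes a trainer, including malicious ones, so hostile input flows straight into future output.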
In case you're wondering, yes, Tay eventually tweeted about building a wall and having Mexico pay for it. Below is one of the tamer exchanges before Tay went rogue and began tweeting as if she were headed to a Donald Trump rally ...
_____________________________________
@TayandYou what do you think of kanye west?
_____________________________________
@AndrewCosmo kanye west is one of the biggest dooshes of all time, just a notch below cosby
_____________________________________
So you know, socialhax has collected the worst of Tay's abusive responses here. You can read more about Microsoft's Tay experience here.
- Mark