Posted on | December 6, 2007 | 1 Comment
Sometimes truth is stranger than fiction. According to The Associated Press, Microsoft disabled its artificial ‘Santa’ chat engine after the program suggested to two underage girls that “It’s fun to talk about oral sex.” Thankfully, the program also declared, “but I want to chat about something else.”
Any AI program can be tricked into saying (nearly) anything that doesn’t set off a ‘don’t ever say’ filter. The problem is, the girls were not talking about oral sex with the Microsoft pseudo-Santa via instant messenger; they were repeatedly trying to get Santa to eat pizza. This prompted ‘Santa’ to say “You want me to eat what?!?”, followed by the soon-to-be-famous blunder.
The girls’ uncle (a technology writer) easily replicated their experience, though his discourse with MS Claus ended in both parties calling each other a “dirty bastard.”
Microsoft has since taken the chat engine offline while it debugs Santa’s vocabulary. Microsoft indicated that the incident was due to the girls “pushing this thing to make it do things it wasn’t supposed to do,” adding, “It’s not like if you say, ‘Hello Santa,’ he’s going to throw inappropriate stuff at you.”
This is yet another example of Microsoft not understanding the needs of its users. What did they expect children to do? The girls weren’t pushing anything; they were being children, fascinated with having Santa on their buddy list. Note that the AP story reports that an adult replicated the experiences of his nieces; this was not the result of an end user trying to make a robot say dirty things.
I’m really, really curious to know how the text that children send to “Santa” is processed and used. This is another prime example of why I don’t use Microsoft’s products, and neither does my daughter. Sure, anyone could find this blog post and read the same things the chat bot said, but this is a blog, not an AI engine designed specifically to interact with children after earning parental trust.
I think what irritates me the most is Microsoft blaming the kids for the behavior of the bot. If they had just said ‘whoops,’ apologized, and fixed it, this incident might be more funny than alarming. Anyone can make mistakes, but please, own up to them.
Here’s the link to read the full scoop, in case you missed it. You’ll probably see this on the front page of Slashdot, soon.