When Bots Go Bad: Why We Need Bot Authentication

“The most dangerous criminal may be the man gifted with reason, but with no morals.”
—Martin Luther King, Jr.

Does a bot have more freedom than you? Does it need reputation?

Social media runs our lives. And in order to participate, we’re required to provide authentic identities. So why shouldn’t bots have to do the same? If it chats like a human, fights like a human, and reads like a human—then isn’t it human?

Okay, no, bots aren’t humans. But they do perform human tasks, and often under the guise of being human. So shouldn’t they be treated like us? Imagine this: you’re vaguely flirting or debating politics (or doing both at the same time) with someone on Twitter. As one does. Isn’t it disarming to know that you could actually be conversing with a bot when you think you’re talking to a human? Forget disarming: it’s downright deceptive, not to mention unethical.

Bots are used on the Internet when an imitation of human behavior is needed. So it only makes sense that, like humans, bots can be super awesome but they can also suck. Big time. Bots are best when they come from a reliable source (like the opposite of seeds that come from Monsanto), execute valuable tasks (like helping you schedule your CEO’s flight to Burning Man), and assist humans in being more industrious (#Slackbots). It’s important to remember that there are good bots out there, and that good bots serve a pertinent purpose right now and in our future—shout out to the good bots!

Last year, for the first time in history, there were more bots on the Internet than people. Crazy, right? And guess what—bad bots are actually more common than good bots. Did you know that 78% of the traffic produced by Amazon is made up of bad bots? At their worst, bots spam, scam, and steal. They’re phishing pros. They’re wolves dressed in granny’s nightie. They’re the bullies who rule the four square court, and the bullies in the office who never grew up. I’m talking gossip, harassment, cyberstalking, impersonation, manipulation—the whole nine yards of emotional abuse. Hopefully you’ll never be a victim of this squad’s goals: MalBot, PhishBot, and BullyBot.

Some say that cyberbullying can be even more harmful than traditional bullying, because it’s more difficult to escape from. Victims of cyberbullying are more than twice as likely to suffer from mental disorders as victims of traditional bullying. Rebecca Ann Sedwick, who committed suicide after being terrorized through mobile applications such as Ask.fm, Kik Messenger, and Voxer, is only one of many tragic examples.

Like humans, bots can learn. And they do so at lightning speed. While the growth of a bot’s knowledge base can be immensely beneficial, it also means that bots can learn how to cheat. CAPTCHA is one of the best anti-bot systems we have. A kind of reverse Turing test, its purpose is to tell humans apart from AI-powered bots. But bots have been known to evade these tests with loopholes, impersonation, and outsourcing. Deep learning and other methods will soon allow bots to slide around CAPTCHA images. So even the systems we have in place to try to catch the bad guys don’t always work.
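For the curious, here is roughly what a server-side CAPTCHA check looks like in practice. This is only a minimal sketch using Google’s reCAPTCHA verification endpoint; the secret key and the token posted by the browser are placeholders.

    import requests

    RECAPTCHA_VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

    def looks_human(captcha_token, secret_key):
        """Ask the reCAPTCHA service whether the submitted token came from
        a solved challenge. Returns False for bots, bad tokens, or errors."""
        reply = requests.post(
            RECAPTCHA_VERIFY_URL,
            data={"secret": secret_key, "response": captcha_token},
            timeout=5,
        )
        return reply.json().get("success", False)

    # Typical use inside a form handler:
    # if not looks_human(request.form["g-recaptcha-response"], MY_SECRET_KEY):
    #     reject_the_request()

The catch, as noted above, is that a bot that can solve or outsource the challenge sails straight through a check like this.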

Microsoft’s Tay: A Teachable Moment
The most infamous recent example of a “bot gone bad” is Microsoft’s AI Twitter chatbot, Tay. Good old Tay. Less than 24 hours after Tay was introduced to the Western world, the chatbot had learned to be racist and misogynistic from phrases that users tweeted. But not only did Tay learn to copy offensive ideology, the chatbot also began generating its own absurd and offensive tweets, for instance referring to feminism as a “cult” and a “cancer.” The day that Tay turned Trump.

Facebook’s Messenger Platform now offers tools for developers to build bots. Messenger makes it so easy, practically anyone could do it. The future has arrived. While each bot developed on Messenger is unique, all of them fundamentally share the brain juice of Facebook’s Bot Engine. And the longer the Bot Engine runs, the more juiced and human it will become.
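To give a sense of just how low the bar is, here is a bare-bones sketch of a Messenger bot’s webhook. It assumes Flask and the requests library; the verify token and page access token are placeholders you would get from Facebook’s developer console, and a real bot would do something smarter than parrot a greeting.

    import requests
    from flask import Flask, request

    app = Flask(__name__)

    VERIFY_TOKEN = "YOUR_VERIFY_TOKEN"            # placeholder
    PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder

    @app.route("/webhook", methods=["GET"])
    def verify():
        # Facebook calls this once to confirm you control the endpoint.
        if request.args.get("hub.verify_token") == VERIFY_TOKEN:
            return request.args.get("hub.challenge", "")
        return "Verification failed", 403

    @app.route("/webhook", methods=["POST"])
    def receive():
        # Incoming messages arrive batched as "entry" -> "messaging" events.
        payload = request.get_json()
        for entry in payload.get("entry", []):
            for event in entry.get("messaging", []):
                if "message" in event and "text" in event["message"]:
                    send_text(event["sender"]["id"], "Hello, human! (Or are you?)")
        return "ok", 200

    def send_text(recipient_id, text):
        # Replies go back out through the Graph API's Send API.
        requests.post(
            "https://graph.facebook.com/v2.6/me/messages",
            params={"access_token": PAGE_ACCESS_TOKEN},
            json={"recipient": {"id": recipient_id}, "message": {"text": text}},
            timeout=5,
        )

That’s more or less the whole job: a webhook in, a Send API call out. Everything else, the “brain juice,” comes from whatever intelligence the developer wires in behind it.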

So, shouldn’t the creators of these bots be held accountable for the bots they create? Section 16, “Messenger Platform,” on the Facebook Platform Policy page mentions only one thing you should know regarding this issue: “We may limit or remove your access to Messenger if you receive large amounts of negative feedback or violate our policies, as determined by us in our sole discretion.” Key words: “in our sole discretion.”

Bots are most likely to play dirty when their ownership is unknown. Unknown ownership gives bots the freedom to behave with malice and bend the rules by which everyone else is abiding, and without consequence. And it doesn’t help that our current systems used for reporting abusive bots are pretty weak. Facebook suggests contacting your local authorities or reaching out to someone you trust, like a friend or counselor, who can give you the help and support you need. The problem with this solution is that most people don’t even really know what bots are, let alone bot abuse. And history has shown that people don’t really take well to things they don’t understand. Basically, these options are about as promising as turning to Siri for help.

Generally, it’s illegal to use tools to hurt people. That also needs to apply to software.

What we need to make sure software like bots can’t hurt people is a unified bot authentication process: reputation information from a centralized identity provider that ensures the security we all deserve. Bots need reputations. Not only is it in our best interest, it’s necessary for our safety.
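No such registry exists today, so what follows is pure speculation: a sketch of how a platform might consult a hypothetical centralized identity provider before letting a bot loose. Every name, URL, and field below is made up for illustration.

    import requests

    # Hypothetical registry standing in for the centralized identity
    # provider proposed above; no such service exists today.
    REGISTRY_URL = "https://bot-registry.example/api/v1/bots"

    def lookup_bot(bot_id):
        """Fetch a bot's registered owner, verification status, and
        reputation score from the imaginary registry."""
        reply = requests.get("%s/%s" % (REGISTRY_URL, bot_id), timeout=5)
        reply.raise_for_status()
        # e.g. {"owner": "Acme Corp", "verified": True, "reputation": 0.97}
        return reply.json()

    def allow_bot(bot_id, minimum_reputation=0.8):
        """A platform could gate access on verified ownership plus a
        minimum reputation score, and suspend bots that fall below it."""
        record = lookup_bot(bot_id)
        return (record.get("verified", False)
                and record.get("reputation", 0.0) >= minimum_reputation)

The point is less the plumbing than the accountability: a bot that misbehaves would damage a reputation tied to a known owner, the same way people do.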

Mark Stephen Meadows is president of Botanic.io and can be found on Twitter and LinkedIn.



