Battling ‘Botenfreude’: The Power of People and Policy

The word “schadenfreude” describes “the emotional experience of pleasure in response to another’s misfortune”. In the world of automated customer care, I propose we use the term “botenfreude” to capture the pure joy people feel when they encounter, or read about, the inevitable failure of a chatbot or voicebot.

Which of us can’t recite the story of the bot that agreed to sell a 2024 Chevy Tahoe for $1? Or the New York City automated agent that told tenants they don’t have to pay rent? How about the Google AI Overview that proposed “non-toxic glue” to give pizza sauce more tackiness? Talk about an endless source of fodder for the writers at SNL or The Daily Show.

Standard operating procedure among customers is to amplify their bad experiences on popular social media platforms, accompanied by an obligatory “Ha-ha!” GIF featuring Nelson Muntz from “The Simpsons”: botenfreude personified.

“Of Course You’re Right, Let Me Try That Again For You”

There is no cause for joy when bots fail; instead, there is a lesson that all of us should take very seriously. Failure is not an option; it’s inevitable. Customers have long expected customer service bots to fail, and their low expectations were well-founded. To this day, many of the chatbots and voicebots on websites or embedded in mobile apps are tightly scripted to support a short list of functions, like tracking an order, locating a store or ATM, or retrieving a balance. They seem to have been designed to frustrate customers, taking an interminable amount of time to authenticate a caller and elicit their intent before giving up and transferring to a human.

The new generation of LLM-informed GenAI bots is much better at recognizing each customer’s intent and amazingly quick to produce results. These bots are also supremely confident in their work. As they take on the roles of “assistant”, “advisor”, or “personal shopper”, experienced users are becoming conditioned to question their output. The more an individual knows about a given topic, the more likely they are to challenge the results and ask the bot to refine its response. At that point in the conversation, bots are now prompted to respond with a cheerful interjection like “Of course you’re right! I’ll just try again now.” Pleasant banter while iterating on results is the new source of frustration.

Managing Expectations and Retooling Operating Procedures

Botenfreude has a purpose. It reflects a healthy skepticism and highlights that, by design, bots arrive at an acceptable response only after meaningful iteration and refinement. This explains why people with deeper knowledge of, or familiarity with, the topic a bot is handling are much more likely to get useful results. As AI-infused co-pilots and coaches take on increasingly important roles in contact centers, it is important to develop training programs, corporate policies, procedures, and workflows that anticipate failure and condition employees (as well as customers) to interact with the bots accordingly.

The most successful CX and customer care organizations will be those that effectively “fuse” human talent with AI capabilities. This requires deliberate efforts to educate customers, train CX team members, redefine workflows, and cultivate a culture of collaboration between us humans and the AI-infused tools we increasingly turn to.

[Originally published on Smart Customer Service Web site]
