Kate Does a Lot of Xplainin’ at Genesys Xperience 2019

Xperience19 brought together a community of over 2,400 customers, partners and analysts involved with Genesys PureCloud, PureConnect and PureEngage solutions. It was also a coming-out party for Kate, the digital assistant that Genesys announced almost exactly two years prior as a “blended AI framework” to assist both employees and customers, with an emphasis, from inception, on intelligence, kindness, strength and a sense of humor. (Because these sorts of attributes invite a certain amount of anthropomorphism, I will refer to Kate as “she” in this post.)

All of those attributes were on display at Xperience19, where Kate was an integral part of the general release of the event’s mobile app, a compendium of information on the venue, sessions, speakers and attendees, along with support for generating personal calendars and messaging with other attendees. That included sending questions and requests to Kate, a service that 574 customers and partners, as well as 162 Genesys employees, took full advantage of. According to activity logs provided to Opus Research by the Kate team, over the course of the three-day event, attendees turned to Kate 1,054 times. Kate accurately recognized the intent of the contact 82% of the time and referred questions to a live person 133 times. All told, Kate accurately handled 91% of the questions thrown her way, using a metric that Genesys calls “Experience Accuracy” – the share of situations where Kate either knew the correct answer or knew that she didn’t have the right answer to provide to an end user.
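To make the “Experience Accuracy” idea concrete, here is a minimal sketch, assuming the metric is computed as the post describes: correct answers plus deliberate hand-offs, divided by all queries. The function name and the sample figures are illustrative only, not Genesys’ actual formula or Kate’s actual breakdown:

```python
def experience_accuracy(answered_correctly: int, escalated_knowingly: int, total: int) -> float:
    """Share of queries where the assistant either gave the right answer
    or correctly recognized it should hand off to a live person.
    (Assumed interpretation of the metric, for illustration only.)"""
    return (answered_correctly + escalated_knowingly) / total

# Hypothetical figures: 80 correct answers and 11 deliberate
# hand-offs out of 100 queries yields 0.91.
print(experience_accuracy(80, 11, 100))  # → 0.91
```

The point of the metric is that knowing when to escalate counts as a success, not a failure.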

In addition, Genesys used the Xperience19 app and social networking to show off Kate’s innate ability to support offline activities and commerce. Attendees were urged to “Ask Kate for a cup of coffee,” and she was also able to suggest other options, like having a Genesys rep deliver a T-shirt or pair of socks. These gestures were in the spirit of fun, but they made clear that Kate’s impact will be felt far beyond contact centers and “The Cloud” as individuals grow comfortable turning to automated digital assistants to support real-world commerce. It is an idea being validated by the likes of McDonald’s, where a voice-first automated intelligent assistant is taking orders from drive-through customers.

Well-Trained but Adaptable

Genesys’ Kate Team spent considerable time in advance of the conference training Kate to answer anticipated queries. It started with a diet of data from 150 FAQs regarding the event and over 100 session descriptions. Then a group of internal beta testers opened the floodgates, leading to over 2,800 interactions by the end of the three-day event. By that time, Kate’s cognitive engine had grown to include more than 260 individual “intents” – the commonly used term for “what individuals mean,” or their purpose for contacting Kate. The common intents were easy to anticipate because they are the classic “frequently asked questions” (FAQs). All told, a third of the questions fielded by Kate fell into the classification of “event logistics,” such as breakfast and lunch times, event transportation, keynote descriptions and the ever-popular “what’s the WiFi password?”

As in the real world of customer care, Kate was destined to encounter unforeseen circumstances and unanticipated queries. That’s where the value of “blended AI” made itself apparent. On Day 3 of the event, roughly at lunch time, the Town of Aurora decided to cut off the resort’s water supply. There was a notable spike in the volume of queries to Kate at that very time, which she sought to resolve with accuracy, variety and a sense of humor. One of the team members, Kyley Eagleson, was Kate’s scriptwriter throughout the event. She’s the one who made sure Kate’s answers were accurate and engaging. One of her guidelines was to have roughly four different ways to provide each common answer, which could be rotated so that Kate sounded more human than a robotically repetitive “yes,” “no” or “would you like me to bring in a live representative to help you with this?” In the case of the water outage, which meant delaying lunch and (worse) closing the rest rooms for an undetermined amount of time, Kate was able to say something like “I feel your pain. I’m hungry too.”
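The response-rotation guideline is easy to sketch in code. A minimal version simply cycles through a handful of pre-written phrasings so the same intent never gets the same wording twice in a row. The variant texts and helper names below are hypothetical, not Kate’s actual scripts:

```python
import itertools

# Hypothetical variants a scriptwriter might publish for one common answer.
lunch_delay_variants = [
    "Lunch is running a bit late - hang in there!",
    "I feel your pain. I'm hungry too.",
    "The water issue has pushed lunch back; we'll update you shortly.",
    "Good news is coming, just not quite yet. Lunch is delayed.",
]

# cycle() rotates through the variants endlessly, avoiding robotic repetition.
rotation = itertools.cycle(lunch_delay_variants)

def answer_lunch_delay() -> str:
    """Return the next phrasing, wrapping around after the fourth."""
    return next(rotation)
```

Each call surfaces a different phrasing until the list is exhausted, then starts over, which is all it takes to keep a scripted answer from sounding canned.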

Validating the Multiple Engine Model

Xperience 2019 served as a microcosm that helped validate Kate’s technical and procedural underpinnings. First and foremost, it showed how Blended AI is bound to work in the real world. Kyley is not a coder. She is a subject matter expert with writing skills. Creating responses for Kate to use at scale was as simple as publishing answers through a browser-based user interface (in this case Genesys Workspace). As for Kate’s own power to recognize intent and respond correctly without human intervention, the answers came from Google Contact Center AI. However, Genesys’ Orchestration platform could be used to invoke responses from diverse knowledge bases and APIs. As Chris Connolly, VP of Product Marketing and head of the Kate team, explained, “It’s a myth to think that you’ll go to one engine for all the answers, just like you wouldn’t go to a single employee to get all the answers.”
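Connolly’s multi-engine point can be illustrated as a simple dispatch layer. In this hypothetical sketch (none of the handler or intent names come from Genesys’ actual Orchestration platform), each recognized intent category is routed to the engine or knowledge base best suited to it, with a live-agent hand-off as the fallback:

```python
def faq_engine(query: str) -> str:
    return f"FAQ answer for: {query}"

def session_catalog(query: str) -> str:
    return f"Session lookup for: {query}"

def live_agent(query: str) -> str:
    return f"Routing to a live representative: {query}"

# Map recognized intent categories to the engine that should answer them.
ENGINES = {
    "event_logistics": faq_engine,
    "session_info": session_catalog,
}

def orchestrate(intent: str, query: str) -> str:
    """Pick the engine for a recognized intent;
    hand off to a human when no engine applies."""
    handler = ENGINES.get(intent, live_agent)
    return handler(query)
```

The design choice mirrors the quote: no single engine answers everything, and the human is the guaranteed backstop rather than an afterthought.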

Kate at Xperience19 is a prototype of the sorts of applications and architectures that Genesys AI will support at scale for its enterprise customers. The approach may have its skeptics among companies looking for a single platform for Intelligent Assistance, but it delivers on the promise of keeping humans in the loop to supervise learning and ensure accuracy, while providing a level of flexibility that avoids lock-in to a single provider of so-called “Conversational AI.” Opus Research’s empirical observation is that most large companies are wisely keeping their options open as they evaluate the providers of natural language processing, machine learning and conversational analytics that best suit their specific requirements.
