[24]7 Makes Visual Speech a Cornerstone of Multimodal, Mobile Customer Care

Citing impressive results from months in production at a credit card issuer that will handle 10 million calls this year, [24]7 has formally announced general availability of its Visual Speech solution. This is precisely the product we expected from a company that merges long-standing development in predictive analytics (carried out by Voxify), investment in natural user interfaces for interactive self-service (by Tellme and Voxify), and context-aware, multi-channel customer care (from 24/7 Customer).

It has been a little more than a year since Microsoft announced the sale of its interactive self-service assets (aka Tellme) to the company that has become [24]7 Inc. The time has clearly been well spent integrating both the intellectual property and the personnel needed to help its enterprise customers pursue a coherent mobile customer care strategy. The offering is squarely targeted at the fast-growing community of smartphone users.

The canonical use case for the financial institution starts with an outbound “alert,” delivered by a human-sounding automated voice. It is the classic warning that “suspicious activity has been detected for your account ending in xxxx.” Instead of suggesting that the recipient call a toll-free number or stay on the line to be transferred to an agent, the service asks whether she would like to view a list of recent transactions in her phone’s browser.

If the customer says “yes,” she stays on the line and visually reviews the activity that the card issuer has flagged as suspicious. If she sees no problem, her card is reactivated during the call. If she does spot a problem, she can arrange for a new card to be issued and mailed without ever being transferred to a live agent. The service has been in production since November 2012, and based on that experience [24]7 is reporting:

  • 85% of smartphone users accepted the multimodal invitation
  • a 92% success rate
  • 87% rated the experience 4-5 stars

This bodes well for the approach.
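
For readers who want to picture the mechanics, here is a rough sketch of the flow described above, written in Python. It is not based on [24]7’s APIs or documentation; every name in it (CallSession, push_url_to_handset, run_alert_flow, the example.com review link) is a hypothetical stand-in for whatever the platform actually uses to play prompts and push a page to the caller’s handset.

```python
# Hypothetical sketch of the alert-to-visual-review flow described in the
# article. None of these names correspond to [24]7 product APIs.

import uuid
from dataclasses import dataclass, field


@dataclass
class Transaction:
    merchant: str
    amount: float
    flagged: bool = False


@dataclass
class CallSession:
    """State shared by the voice and visual legs of a single call."""
    session_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    card_last4: str = ""
    transactions: list = field(default_factory=list)
    resolution: str = "pending"


def play_prompt(text: str) -> None:
    # Stand-in for the IVR's text-to-speech prompt.
    print(f"[voice] {text}")


def push_url_to_handset(session: CallSession) -> None:
    # Stand-in for pushing a link (SMS, in-app notification, etc.) that opens
    # the visual review page tied to this call's session_id.
    print(f"[push]  https://example.com/review/{session.session_id}")


def run_alert_flow(session: CallSession,
                   accepts_visual: bool,
                   activity_is_legitimate: bool) -> CallSession:
    """Walk one caller through the suspicious-activity alert."""
    play_prompt(f"Suspicious activity has been detected for your account "
                f"ending in {session.card_last4}.")
    play_prompt("Would you like to review the recent transactions in your "
                "phone's browser?")

    if accepts_visual:
        # The caller stays on the line while the review opens in the browser.
        push_url_to_handset(session)
        if activity_is_legitimate:
            session.resolution = "card reactivated during the call"
        else:
            session.resolution = "replacement card issued and mailed"
    else:
        # Fall back to the traditional voice-only or agent-assisted path.
        session.resolution = "transferred to a live agent"

    play_prompt(f"Resolution: {session.resolution}. Thank you.")
    return session


if __name__ == "__main__":
    session = CallSession(
        card_last4="1234",
        transactions=[Transaction("Unknown Merchant", 412.50, flagged=True)],
    )
    run_alert_flow(session, accepts_visual=True, activity_is_legitimate=False)
```

The property the sketch tries to capture is that the voice call never ends: the visual review happens inside the same session, so the outcome (reactivation or a replacement card) can be confirmed on the spot.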

[24]7 sees Visual Speech as a way to add years of life to an enterprise’s existing IVR assets because it provides a ready-made method for maintaining context over the course of a multimodal, mobile “session.” There are similar use cases in other verticals. Airlines can provide travelers with a list of potential flights and fares and enable them to indicate their preference. Ditto for hotels or even online retailers.
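
One plausible way to picture the “maintaining context” piece is a shared session record, keyed by a common session ID, that both the voice leg and the visual leg read and write. The snippet below is an illustration under that assumption only; the class name SessionContextStore and its methods are invented for the example and say nothing about how [24]7 actually implements it.

```python
# Illustrative only: a shared, in-memory context store that both the voice
# and visual channels of one session can read and write. A real deployment
# would presumably use a durable, networked store; this is not [24]7's design.

from threading import Lock


class SessionContextStore:
    """Minimal thread-safe map from session_id to that session's context."""

    def __init__(self):
        self._lock = Lock()
        self._sessions = {}

    def update(self, session_id: str, **fields) -> dict:
        """Merge new fields into the session's context and return a copy."""
        with self._lock:
            ctx = self._sessions.setdefault(session_id, {})
            ctx.update(fields)
            return dict(ctx)

    def get(self, session_id: str) -> dict:
        """Return a copy of the session's current context."""
        with self._lock:
            return dict(self._sessions.get(session_id, {}))


store = SessionContextStore()

# Voice leg: the IVR records what the caller has already been asked.
store.update("abc123", step="review_offered", card_last4="1234")

# Visual leg: the web view picks up the same context instead of starting over.
print(store.get("abc123"))  # {'step': 'review_offered', 'card_last4': '1234'}

# Either leg can record the outcome; the other sees it on its next read.
store.update("abc123", step="resolved", outcome="replacement card mailed")
```

The design point that matters here is that neither channel owns the dialog: because both write to the same record, the hand-off between voice and screen can happen wherever the user chooses rather than at a point fixed in an IVR script.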

By making it easy for mobile subscribers to carry out commerce in ways that cross the boundaries between voice and visual input and output, Visual Speech takes an important evolutionary step in conditioning the market for “more of the same.” There is a lot going on in the background, in the way of Big Data and analytics, to make the user experience this easy. Because the “pass-off points” between channels are defined by individual users, rather than by IVR scripts or directed dialogs, the offering is helping to define the new generation of self-service and assisted self-service.
