Although Interactions Corporation has long been reluctant to say which speech processing technology gives voice to its ever-helpful Virtual Assistants, those days are officially over. In this press release, Interactions lets it be known that the AT&T Watson(sm) portfolio of speech processing, natural language understanding and text-to-speech rendering is foundational to its highly human-sounding self-service resources.
Individuals who call Hyatt Hotels, Humana, LifeLock or the dozen or so other Interactions clients may already be subtly aware of the benefits of human-like automated self-service. Voice self-service systems powered by Interactions “learn” from past interactions in order to improve the customer experience. As a result, Interactions-based services are constantly evolving and redefining what blend of automated and human-assisted activities is optimal – both for callers and for brands. AT&T, with its long pedigree in voice processing and call processing, will find Interactions to be the ideal collaborator in defining and refining implementations that make the human (or human-like) voice a vital part of a natural user interface.
Both AT&T and Interactions are stepping up their game while trying to improve the quality of conversational commerce. Last week AT&T announced refinements to its set of Speech APIs (based on Watson), tuning its understanding to be more suitable for electronic games and for interacting over social networks. For its part, Interactions convened a Webinar to demonstrate how its virtual assistants can make it possible for large insurance programs – like Medicare – to automate their enrollment processes.
As AT&T notes, it has been in the business of developing natural-sounding TTS and human language understanding resources for decades. We can expect to see more licensing agreements and partnerships between AT&T and innovative firms like Interactions, which makes this week’s licensing agreement notable. It is rivaled by the long-term licensing agreement between IBM and Nuance to add human-like speech recognition and TTS to mobile devices and selected verticals like healthcare. You see the pattern, right?
Thanks to these developments, I am greatly looking forward to the Keynote Panel at this year’s SpeechTEK, where I will share the stage with Nuance’s CTO Vlad Sejnoha and Mazin Gilbert, assistant vice president of technical research at AT&T Labs-Research, where Watson is managed.