(Updated December 21) Use of voice- and natural language-based intelligent assistants has seen dramatic growth in the past 12 months, as documented in a research report commissioned by MindMeld (formerly Expect Labs). Researchers interviewed 1,800 U.S. smartphone users over eighteen years of age in October 2015. They found that 63% of smartphone users have tried Siri, OK Google (Google Now), Cortana, Alexa (Amazon Echo) or another voice-based assistant. The research shows that, of those users, 60% began using them within the past year, with roughly 42% having started within the past 6 months. Readers can obtain the report directly from MindMeld by providing digital coordinates through this link.
Release of the report coincides with the launch of the MindMeld 2.0 platform, the second generation of the company’s core product and service offering. It is an upgrade of the original MindMeld platform, introduced in 2014, which already powers voice and natural language-based self-service for cable operators, government agencies, automotive companies and content providers. The newly launched “2.0” platform is designed to provide natural language understanding at a larger scale, enabling question-and-answer-style person-to-machine communications across a variety of devices and for a broad range of content domains.
Spotify, a leading music streaming service, is a showcase early adopter that is trialing MindMeld 2.0. By applying the platform to the metadata associated with its catalog of music content, Spotify expects to “create an intuitive and convenient voice experience to take music discovery and playback to the next level,” in the words of Lawrence Kennedy, senior product manager at Spotify.
Tim Tuttle, CEO and founder of MindMeld, explained why the timing is ripe for MindMeld 2.0 when he participated in a panel at the Intelligent Assistants Conference in New York last October. “I think that 2015 is officially the year that voice is no longer a novelty,” he asserted in this video (around the 23rd minute). He followed by observing, “As of the second half of 2015, so many people are using voice on their smartphones that, if you don’t have it available in certain industries and applications, your customers will go to your competitors.” Indeed, Opus Research has seen the roster of IA implementations around the globe span telecommunications, travel and hospitality, retail, and banking and finance. Healthcare and eGovernment are next up.
Earlier in the video (at 9:10), Tuttle describes how dramatic leaps in both speech recognition and natural language understanding have fostered growing acceptance of the natural UI for search and command functions, primarily through smartphones. He notes that the technologies employed when his company (then called Expect Labs) got started five years ago could only be applied to very narrow content domains, using very tight taxonomies. Use of personal intelligent assistants like Apple’s Siri, Google Now, Nuance’s Nina, Amazon’s Alexa or Microsoft’s Cortana on smart devices has greatly changed user expectations. So much so that 10% of search volume through smartphones is now conducted using voice, a striking increase over a period of 18 months.
The MindMeld 2.0 platform includes NLU technology that shows “human-like accuracy” across a customer’s content collection. I resist calling it “Big Data,” but that’s what we’re talking about. Its advanced Q&A capabilities are built on its ability to understand large vocabularies. Finally, the company promises “flexible deployment options,” both cloud-based and premises-based, to suit the needs of large enterprise customers.
A single phrase captures MindMeld’s intent with its new platform and capabilities: to “achieve scale” as growth in voice and natural language search and device control accelerates in the coming years.