Updated: No Difference Between Native and Captive: Apple To Leverage Both Siri and Nuance

With Microsoft plunking down $8.5 billion of its $36 billion in cash and near-cash to buy Skype, a few analysts have started to take a closer look at the $25 billion in cash and short-term investments on Apple’s balance sheet and have apparently concluded that it is contemplating a deal with Nuance Communications. The discussion started at about the time that Google, Microsoft and Facebook were said to be in a three-way auction for Skype, whose S-1 filing (in preparation for an initial public offering of common stock) reflected a net loss in 2010 on revenues of about $850 million. MG Siegler wrote a piece in TechCrunch that described yet another three-way relationship. This time the dynamics involve Apple as parent of the mobile assistance service provider Siri (which Apple acquired almost exactly one year ago) and as long-time customer/partner of Nuance, which supplies both the speech processing capabilities that power Voice Control on iOS platforms and a number of downloadable apps that support dictation and predictive input of text-based content.

Yesterday Siegler published this story pinpointing Apple’s new data center in the hills of North Carolina as the locus where at least some of the servers will be running instantiations of both Siri and Nuance-based applications, so that mobile and hybrid apps running on the new iOS can optimize the interplay between speech recognition or predictive texting, application logic, and “artificial intelligence” to understand intent and deliver results. As Greg Sterling points out in this post on Internet2Go, Apple’s iOS-based experience has a bit of catching up to do vis-à-vis Google’s Android-based devices.

As I noted in this post in August 2010, Google’s “Voice Actions” conditioned both users and application developers to expect spoken utterances to be one of the input modalities across all applications. A month later I described Google’s “home field advantage” when it introduced the many ways that a set of widgets could be used in Android that, in essence, made speech processing “native” to the operating system and therefore consistently available, starting with the Home Screen and spanning all applications (like search) and utilities (like texting or dictation). Indeed, at Google I/O, Vlingo is showing off the latest version of its “Virtual Assistant” for Android-based phones. On a Samsung Galaxy 2, Vlingo is demonstrating speech-based access to, and control of, a multiplicity of functions directly from the home page, but [contrary to what I may have implied here before] the Vlingo app connects directly to ASR resources and applications in Vlingo’s cloud.

Apple has been signaling its intent to meet and exceed Google’s speech-based offerings for a number of years now. In doing so, it has formed a broad (but not highly publicized) relationship with Nuance as provider of speech processing and predictive input for a broad spectrum of products and services. Followers of Siri know that roughly a month before its formal product launch in February 2010, it switched from its long-time speech recognition vendor to Nuance. At the time, the switch was thought to be a way of avoiding a lawsuit.

From the mobile user’s point of view, there is no difference between native and captive. Google may have been first to market with speech-enabled services that smooth over the speed bumps between siloed applications. Apple, with an assist from “native” implementations of Nuance-based technology mated with “captive” Siri’s formidable combination of application logic and dynamic, e-commerce-oriented data flows, will try to meet and exceed Google’s efforts to provide the most pleasing user experience for goal-oriented mobile subscribers. This approach, which has been underway for more than a year now, obviates the need for Apple to spend billions of dollars to buy Nuance, but it will require a long-term relationship akin to the three-year joint development agreement between Nuance and IBM.

Categories: Articles