Speech Enabled Mobile Services Start Removing Obstacles to Adoption

Over on Internet2go.com, Greg Sterling posted some thoughts about Google Voice Search. He observes that iPhone users are less than one-third as likely as Android users to use spoken words to initiate Google searches. Greg’s speculation is that “Google has more deeply integrated voice into the Android platform and may have trained people to use it more frequently accordingly.”

My take is a little different. Last month I wrote this post about Google’s “Home Field Advantage”: it provides a number of widgets that invoke its apps with the touch of a single button on the “home screen”. This is not about how “deeply integrated” voice is with the core platform or mobile operating system. It is, instead, a matter of limiting the number of clicks it takes to invoke speech-enabled applications (or “Speechable Moments”, as I like to call them).

I’m not sure what the attrition rate is for phone users as they click through an icon on the home screen to invoke an application or feature. I remember that Web-based e-commerce experts used to say that you lose one-half of your customers each time they have to click on a link to get to check-out. That was Amazon.com’s key motivation for launching “one-click check-out”.

The 3:1 ratio of speech-enabled search on Droids compared with iPhones conforms to that rule. Users can’t invoke Voice Search unless and until they launch the Google app and then tilt the phone to its proper position. For Bing, voice search begins only after opening an app and pressing a button. These steps don’t take a lot of effort, but they pose sufficient barriers to have a measurable effect. A number of speech-enabled services (Greg mentions a few) are deeply integrated with the call flows and APIs that support a range of services, including search, text entry, Twitter and more. They would definitely get more use if they could be easily or automatically invoked from the home screen or idle screen.
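
To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch (in Python) of how a per-step drop-off compounds. The 50% drop-off rate and the assumption of roughly two extra steps to reach voice search are illustrative guesses on my part, not measured figures from Greg’s post or from Google.

```python
# Back-of-the-envelope attrition model: if a fraction of users drops off
# at each extra step, usage falls geometrically with the number of steps.

def retained(drop_off_rate: float, extra_steps: int) -> float:
    """Fraction of users still engaged after `extra_steps` additional taps or gestures."""
    return (1.0 - drop_off_rate) ** extra_steps

# Illustrative numbers only: a one-touch home-screen widget vs. a flow that
# needs roughly two extra steps (open the app, then tilt or press a button).
home_screen = retained(drop_off_rate=0.5, extra_steps=0)       # 1.00
app_then_gesture = retained(drop_off_rate=0.5, extra_steps=2)  # 0.25

print(f"Relative usage: {home_screen / app_then_gesture:.0f}:1")  # prints "Relative usage: 4:1"
```

With those assumed numbers the gap works out to about 4:1, close enough to the observed 3:1 to suggest that invocation friction alone could plausibly account for the difference.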

Speech apps work best when they appear to be the product of serendipity. I was recently given a new Plantronics M1100 “Savor” Bluetooth headset to try, and I was pleasantly surprised when I received the first incoming call. A synthesized voice (or stored recording) announced an “incoming call” and prompted me to the effect that I could say “answer” or “ignore”. I didn’t have to set up the feature or “train” the unit to recognize my instructions. When I press the command button on the earpiece, the unit also calls “Bing 411” (Tellme’s directory assistance) in response to the command “Call information.”

By contrast, I’ve been told that the unit also supports a number of other speech-enabled services through Plantronics’ proprietary, hosted service called “Vocalyst”. Registered users can configure the system to support a reminder service (based on Evernote); e-mail review, reply and origination (supporting Gmail, Yahoo! and AOL); Twitter; and review and origination of text messages (on BlackBerry and Android-based phones). One of these days I will configure Vocalyst but, in the meantime, I’m very happy with the information I can get from Bing 411 and in the normal course of using my wireless phone.
