Google is starting to put some distance between the Voice Search app for Android-based smartphones and the same application as offered on other platforms. The difference will be the ability of phones running Android 2.2 to support a new way of building the models that recognize their owners’ utterances. Rather than matching what users say against a huge database of spoken words from other Google Voice Search users, Google will begin collecting utterances in a new way, so that they can be associated with a specific user. This promises greater accuracy, which translates (so to speak) into a better ability to recognize proper names more quickly.
The service is described in greater detail in this blog post by Amir Mane, the product manager, along with Glen Shires of the Google Voice technical staff. Google has made personalized search an “opt-in” feature of the new Google Voice Search app. Recognizing that there is a fine line between personalization and invasion of privacy, it has taken the further step of providing a mechanism for personalized voice profiles to be “disassociated” from the information in your Google account (which, for many people, could span Gmail, shared documents, contact lists and other sensitive information).
The app is available from the Android Market, making personalized voice search a few clicks away. The allusion to “improved name recognition and speed” is almost nostalgic. For many years, Amir Mane’s name was synonymous with automated Directory Assistance, an enclave of the speech processing and information processing world that long ago began to tackle the challenge of rapid recognition of, and response to, the most difficult utterances – names of cities, towns, streets and people.
Google is about to find out how many people expect to see sufficient benefits from “Personalized Voice Search” to justify their decision to “opt in.” My suspicion is that the numbers will be fairly small at first, because people generally have to be given an incentive (preferably financial, but often merely gratifying) to opt in to just about anything. The promise of better speech recognition may not fill the bill. Regardless of the percentage, Google has a large enough sample of subscribers to learn who takes the step into Personalized Voice Search.
After overcoming the opt-in hurdle, Google will learn which of its users are sophisticated enough to go deep into the administrative layers of Google Voice to “disassociate” their voice profiles from the rest of their Google account. My suspicion is that the number will be pretty low. After all, people routinely share their location, their check-ins and other activity streams. The idea that some “bad actor” might benefit from associating audio files (or the metadata derived from them) with other publicly available information seems pretty remote. But I’m always surprised at what self-described privacy advocates decide to address as communications, search and transaction processing technologies move forward.
My suggestion: if you have an Android phone running Froyo (version 2.2) or above, it will be worthwhile to upgrade to Personalized Voice Search. As Amir’s post notes, the improvements will be subtle at first, but they will be beneficial. Thanks to advancements in microphone technologies (like putting multiple microphones on devices to identify and cancel out background noise), as well as in acoustic modeling and filtering, mobile devices are getting much better at supporting person-to-machine conversations. Personalized Voice Search moves along a different vector, aiming to recognize spoken commands or search terms more accurately and consistently.