Vlingo is starting to take on a serious resemblance to Siri with its latest release on Android-based phones. As with its iPhone-based offering, there is voice-activated “deep linking” into the apps that support frequent transactions, like restaurant, theater and travel reservations. But the latest version for Android is much more multimodal because it encourages users to enter commands in text form through a newly designed Action Bar.
There’s a video demonstration embedded in this blog post on Vlingo’s site. As its author explains, the Action Bar uses the same “predictive intelligence” for both voice recognition and typed commands. This allows users “to initiate actions when it’s not as convenient to speak to their phones – like on a noisy bus or in a quiet conference room.”
Well said, and much needed. As voice-enabled applications mature, developers have come to recognize that all mobile commerce apps must support multimodal input. This positioning is especially important on Android-based devices where, as we noted back in August, Google has invested significantly in rapid, seamless access to speech recognition, both for command and control of the multitude of apps, features and functions on Android and for dictation into search boxes and messaging utilities.
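For a sense of how lightweight that access is, here is a minimal sketch of the kind of code a third-party Android app could use to tap the platform recognizer through the stock RecognizerIntent API. The request code, prompt string and handleCommand hook are illustrative assumptions for this sketch, not Vlingo’s actual implementation.

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class VoiceCommandActivity extends Activity {
    // Arbitrary request code for matching the result callback (illustrative).
    private static final int REQ_SPEECH = 1001;

    // Launch the speech recognizer that Google ships with the platform.
    private void startVoiceInput() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak your command");
        startActivityForResult(intent, REQ_SPEECH);
    }

    // Receive the recognizer's candidate transcriptions, best match first.
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQ_SPEECH && resultCode == RESULT_OK && data != null) {
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                handleCommand(matches.get(0)); // hypothetical app-side dispatch
            }
        }
    }

    // Hypothetical hook: route the transcription to the same command handler
    // a typed entry would use -- the essence of multimodal input.
    private void handleCommand(String utterance) {
        // App-specific parsing and dispatch would go here.
    }
}
```

Because the transcription arrives as plain text, an app can feed voice and typed input through one command handler, which is exactly the multimodal pattern the Action Bar embodies.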
Mobile subscribers are the beneficiaries of the efforts by Google, Vlingo, Apple (which owns Siri), Nuance and a handful of other firms that are goading one another on to improve the mobile user interface. Improving recognition accuracy, as well as predictivity (which I know is not a word), is the task at hand, and these technology companies are continually raising the bar in what is much more than “hands-free to hands-free” combat.