Nuance Promotes Natural Interfaces at Lackluster CES

This year’s CES lacks the introduction of any breakthrough product or gadget. As Wall Street Journal tech reporter Don Clark explained in his annual coverage of the show, “Entirely new product categories seem scarce as America’s largest gadget show convenes here this week. But there is plenty of action in making today’s products smarter.” Clark’s coverage then focuses on new chips and processors that support sensors, making devices more “context aware.” Sensing and monitoring the activity around a device is what will give it the ability to anticipate its owner’s identity and intent. At a minimum, then, we can expect improvements in predictive recognition of a user’s intent, leading to faster task completion.

Meanwhile, Nuance Communications took a three-pronged promotional strategy to CES. Recognizing that conversations between individuals and the connected world span mobile phones and computers as well as electronic devices in the living room, kitchen, and car, the company’s “natural interface” merges conversational input by voice as well as keyboard; natural language understanding across multiple input modes; “crowdsourcing” to understand new words and grow grammars; and partnerships with device makers and chip manufacturers to optimize performance.

Already known to be providing voice recognition and synthesis to support Apple’s Siri, Nuance is now collaborating with ZTE, the giant Chinese ODM (original design manufacturer), to bring its natural user interface to Android-based devices. The new mobile devices from ZTE will support conversational interactions in 25 languages and will also feature a “Car Mode” that ZTE is showcasing at CES 2013.

CES is the launching pad for several automobile-based initiatives, which will be fleshed out further at more mobile-focused shows like Mobile World Congress in Barcelona, Spain, next month and CTIA 2013 in Las Vegas in May. Nuance has established its credibility in this space by announcing the integration of Dragon Drive, its natural language understanding platform for the connected car, into select models from Hyundai. It will support voice dialing, text entry for messaging, and single-statement entry of destination information for navigation systems.

Hyundai is also supporting Nuance’s “Welcome Mode,” which helps the car behave like a personal assistant. It uses speech synthesis to greet the driver by name and also “remembers” the driver’s personal playlist for music and stores common destinations on the navigation system.

Dragon Drive is also being integrated into selected models from Chrysler Group. It will be part of Chrysler’s UConnect service, which supports voice-based, natural language entry of search terms for local businesses, as well as the ability to listen to SMS/text messages and dictate responses – hands-free. UConnect is available initially on the 2013 Ram 1500 and the SRT Viper.

For owners of Ford models equipped with SYNC technology, Glympse now offers a voice-activated version of its location-sharing service. According to Ford, the service is available immediately to over 1 million drivers, who can share their location in response to the spoken question, “Where are you?” The service is made possible through Ford’s AppLink technology, which we described here in April after talking with John Ellis, the automaker’s Global Technologist for Connected Services and Solutions.

It’s not exciting by the customarily hyped-up standards of the consumer electronics industry; nonetheless, speech processing technologies, coupled with natural language understanding and context awareness, are proving their value in efforts to support truly personal services both at home and on the road.


