Opus Research returns for VBC London 2014 (November 18-19, The Tower Hotel) to showcase a broad array of technologies that support simple, secure and trusted communications over the web, mobile phones and tablets.
Dubbed “The Voice Security & Authentication Conference,” VBC London 2014 brings together executives and decision-makers who are integrating voice into multi-layered and multi-factor communications for authentication and security fabrics.
These are exciting times for voice biometrics solutions providers, with Barclays rolling out “passive” voice-based authentication to its entire customer base, and rapid-fire introduction of services that combine voice with other biometrics on mobile devices.
VBC London 2014 will provide a single venue for attendees to take stock of solutions designed to strengthen the bonds of trust between enterprises, government agencies and their customers and constituents.
This leading global industry event features case studies and real-world implementations that use voice biometrics, in conjunction with other technologies — including risk analytics, facial recognition, phone-printing and other device signatures — to lower the incidence and impact of phone-based fraud while making the world safe for mobile commerce, transaction authentication and cloud computing.
It’s where enterprise decision-makers convene:
- To learn how your company, department, or agency can establish trusted lines of communication over the phone or Web
- To identify secure, yet convenient ways to prevent fraudulent access to enterprise data or private networks
- To determine how partners or competitors employ the latest multi-factor authentication technologies to pursue their mobility strategies
- To see the latest products and services from a growing array of solutions providers
Attendees from past events include representatives from: Accenture, Barclays, Google, Citi, Wells Fargo, Standard Chartered, Australian Tax Office, Vodafone, Intel Corporation, Banco Santander, ING, HSBC, BBVA, Lockheed Martin, JP Morgan Chase, T-Mobile, RSA, UBS, DBS, OCBC, Raiffeisen Bank, Microsoft, Fidelity, Atos, MasterCard, Visa, American Express, IBM, Betfair, France Telecom, Standard Life, RBS, Unisys, Investec, Vanguard, L-1, and many more.
When registering, be sure to take advantage of the “Super Early-Bird” rate (£349.00) to save 50% off the full conference price (ends August 15th).
On July 15, Apple and IBM forged a partnership whereby Apple appointed IBM as its exclusive partner for bringing iOS- and MacOS-based apps and services into the enterprise marketplace. For IBM, the deal is about promoting its MobileFirst strategy for enterprise applications that run on Apple’s popular smartphones, tablets and desktop systems. For Apple, a relationship with Big Blue proves to the world that its devices are enterprise hardened and its APIs are open enough to integrate with the backend systems that support the enterprise apps (especially ERP, analytics, calendaring and security) that are the bread-and-butter of IBM’s cadre of professional services personnel.
Consensus among both analysts and journalists is that this is a “good deal” for both companies and their customers. On the one hand, Apple will find that IBM is a very effective, and lucrative, reseller of its products, services and support. On the other hand, IBM is fulfilling a recognized need to re-invigorate its eighteen-month-old MobileFirst initiatives by achieving “native” support of apps and software infrastructure running over Apple’s iOS-based devices. Promising “app integration and management” as a major part of its managed services offering, IBM will make sure that there is not so much as a hiccup when iPhone owners press the dreaded “upgrade now” button to initiate the installation of the latest rev of the operating system. Surprisingly, little or no mention has been made of the devices’ end-users, meaning enterprise employees and customers.
Amidst all the talk of transformation of enterprise processes powered by analytics and improving the mobile customer experience, it is a shock that neither Cognitive Computing (aka Watson) nor mobile virtual assistance (aka Siri) was brought up in company-issued publicity (or, for that matter, during the conference call held to provide details on the deal and answer analysts’ questions). IBM, by making this a MobileFirst, rather than a Cognitive Computing, initiative made it clear that its intent is to leverage 100+ industry-specific apps for iOS, delivering “native” instantiations of the types of apps that support employees of large-scale firms in banking, insurance, telco, retail, government, travel/transportation and healthcare. Next is tight coupling of back-office (ERP) systems and iPads, iPhones and Macs, leveraging “advanced APIs” into the “systems of record” that keep track of transaction history, as well as databases of ecommerce records and application data. Finally, the MobileFirst for iOS approach brings “app integration and management,” which is where we imagine IBM will really make its money by supporting “end-to-end lifecycle management for iOS apps,” including OS downloads and updates. This is a known headache for IT managers in the age of BYOD (Bring Your Own Device).
As impressive as this managed services and reseller offering seems, it is unbelievable that not a single mention was made of improving the User Interface (UI) or User Experience (UX) by incorporating an Intelligent Assistant function into the “application integration and management” offering. There would be no better way to leapfrog the inevitable competition from fellow enterprise software infrastructure providers (especially Oracle and SAP) than to invoke the assistance of a Siri-like Intelligent Assistant to help employees navigate their options, get questions answered and manage their every day activities.
It should not yet be filed in the category of “missed opportunities.” In the analyst conference call, IBM spokespeople Fred Balboni and Marie Wieck made a point of saying that a huge developer community is already taking advantage of its access to a library of open Apple APIs as well as IBM’s iOS-optimized platform called “ERP for iOS.” By the rules of serendipity, they are destined to stumble upon the Siri API or leverage the goodies that are made available through the Watson developer site (https://developer.ibm.com/watson/).
But the die is cast. The general perception is that long-time rivals for desktop dominance during the PC age have identified fertile areas for cooperation in the post-PC era. Intelligent Assistants are not (yet) part of the mix and this is definitely not a mobileCX (customer experience) play.
In its effort to maintain leadership in the home monitoring and security field, ADT has taken a giant step by offering highly secure home automation services as well. The newly launched ADT Pulse® Voice App features two-factor user authentication that seamlessly combines voice biometrics with a customer-selected passphrase to provide highly-secure, smartphone-based control of a home’s security system as well as other “smart home” functions, including lighting systems, home entertainment units, garage doors and even door locks.
It’s all illustrated in this video:
To be clear, this is not a conversational intelligent assistant. Instead it is a voice-based interface to the mobile app that ADT introduced in 2010 to make it more convenient for homeowners to check the status of, and take control of, their home security units, as well as the myriad other functions on which ADT anticipates competing with other smart home technology providers (be it Google, Honeywell, Microsoft, Comcast, GE or any of the variety of companies vying for a share of this emerging industry).
Lo and behold, it takes a security-oriented company like ADT to bring voice biometrics into the mix by enabling users to initiate the service by speaking the passphrase of their choice. ADT also placed a premium on the smartphone’s ability to carry out its assigned tasks quickly and without relying on resources in the cloud. For that reason, both the automated speech recognition and the voice biometric authentication resources rely on “embedded” resources – no “cloud” necessary.
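The two-factor flow described above can be sketched in a few lines: the same utterance has to clear both a passphrase check (speech recognition) and a voiceprint check (speaker verification). This is a minimal illustration, not ADT’s implementation; the function names, threshold, and the idea of passing the recognition engines in as callables are all our assumptions. In the real product, both engines run embedded on the handset.

```python
# Hypothetical sketch of two-factor voice authentication: the spoken words
# must match the enrolled passphrase AND the voice must match the enrolled
# speaker model. `transcribe` and `score_speaker` stand in for embedded
# speech-recognition and speaker-verification engines (names are invented).

def authenticate(audio, enrolled_passphrase, enrolled_voiceprint,
                 transcribe, score_speaker, threshold=0.8):
    # Factor 1: the utterance must be the user's chosen passphrase.
    if transcribe(audio).strip().lower() != enrolled_passphrase.lower():
        return False
    # Factor 2: the voice itself must match the enrolled voiceprint.
    return score_speaker(audio, enrolled_voiceprint) >= threshold
```

Rejecting on either factor alone is what distinguishes this from a simple spoken password, since a recording of someone else saying the right words fails the voiceprint check.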
ADT Pulse made its debut in 2010 to give smartphone owners a convenient way to control their home security and automation systems. ADT Pulse Voice was demonstrated at the 2014 Consumer Electronics Show (CES) in January and was featured in a “soft roll-out,” making it available to protected home owners in the U.S. and Canada.
At the FinTech Innovation Lab Demo Day in New York City, SRI International launched its latest spin-off targeting the market for branded Intelligent Assistants in the enterprise space, starting with banks and other members of the financial services industry. The new company, dubbed Kasisto, Inc., benefits from the decades of research and development by SRI, coupled with fruits of product development performed over the past two years in conjunction with Spain’s BBVA.
As I noted here in June 2012, when BBVA and SRI introduced Lola to the world, “In the long run, it is designed to know BBVA customers, what they want to do and then – in a way that is different from others – knows how to do those things and does them. That’s what BBVA means when it says that it is making its services ‘customer-centric’.” We closed the post by noting that, like so many efforts that rely on natural language understanding and machine learning, “it will improve over time.”
Time has passed and both companies have deemed the core technology ready for formal introduction through the spin-off. Both SRI and BBVA have assigned intellectual property to the venture and are shareholders. Zor Gorelov, founder and former CEO of the cloud-based contact center automation company SpeechCycle, is the CEO. He tells me that the firm has entered the market ready to take on the major pain points of every financial institution’s mobile strategy: “a better UX, rich in features, easy to discover, easy to navigate.”
As Gorelov observes, both Apple and Google are addressing these issues for mobile devices. However, in his words, in enterprise settings “simple Q&A will not be enough.” Intelligent Agents must be “conversational.” Gorelov goes on to describe Kasisto’s core concept as “Conversation as a Service.” It includes support of a white-label mobile assistant, designed to support mobile banking on a smartphone or tablet by embedding a “floating microphone” on the user’s screen. Just as important, it is designed to enable personnel in the banks’ IT departments to leverage skills and infrastructure elements that are already familiar to them, like Java, HTML5 and XHTML. Kasisto also offers access to the platform through RESTful interfaces and APIs.
Kasisto enters the market offering a “comprehensive technology stack including speech recognition, natural language understanding and generation, and artificial intelligence reasoning.” In effect, it recognizes the intent of the banking customer. It does not attempt to replace the existing capabilities of a bank’s online or mobile offerings. Instead it aims to make them “richer” by adding the ability to do such things as keep track of context and “normalize” results in order to provide the best answer in the context of a human-like interaction. As an example, when a banking customer asks the assistant to display “all $4 transactions over the past month,” he or she will be shown those that fall within a reasonable range: $3.95…$4.20… etc. (The cost of a latte plus tax in various locales).
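The “normalize” step in the $4 example above amounts to matching a spoken amount against a tolerance band rather than an exact figure. The sketch below illustrates that idea only; the function names, data shape and 10% tolerance rule are our assumptions, not Kasisto’s actual logic.

```python
# Illustrative sketch of amount "normalization": a request for "$4
# transactions" returns anything within a reasonable band of $4.00,
# so $3.95 and $4.20 lattes both match. Tolerance rule is hypothetical.

def match_transactions(transactions, spoken_amount, tolerance=0.10):
    """Return transactions within +/- `tolerance` (10%) of the spoken amount."""
    low = spoken_amount * (1 - tolerance)
    high = spoken_amount * (1 + tolerance)
    return [t for t in transactions if low <= t["amount"] <= high]

ledger = [
    {"desc": "Cafe latte", "amount": 3.95},
    {"desc": "Latte + tax", "amount": 4.20},
    {"desc": "Groceries", "amount": 42.17},
]

# "Show me all $4 transactions" matches the two coffee purchases,
# but not the $42.17 grocery run.
print(match_transactions(ledger, 4.00))
```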
The user interface is multi-modal, meaning that it lets bank customers access information and perform simple or complex tasks using their voice or keyboards on smart devices. As SRI Venture’s Norman Winarsky explains, “Virtual personal assistant technology has revolutionized consumer interaction with mobile devices…. Now consumers expect a more human-like experience when interacting online. Kasisto represents a new user experience—one that is context aware, personalized, and more effective.”
Winarsky will be one of the featured speakers at Opus Research’s Intelligent Assistants Conference, which will convene in San Francisco on September 16, 2014.
Twilio, in conjunction with Google’s Enterprise division, is blowing the dust off of the old concept of a “Contact-Center-in-a-Box.” With the introduction of Twilio CX, Chromebooks, the inexpensive laptops running ChromeOS and the Chrome browser, are transformed into agent workstations complete with pre-installed software and connectivity to support voice-, chat- and SMS text-based conversations. The ease of start-up is illustrated in this video:
Each box contains a Chromebook with a Plantronics headset, but it comes as a bundle that includes 7,500 minutes per month on Twilio’s network and the necessary service and support from Google Enterprise. All of this is made available for a fixed, monthly, per-seat fee.
LiveOps, which has built a considerable reputation as a cloud-based contact center solutions provider in its own right, is the first go-to-market partner for the package. Its multi-channel agent desktop has “native” or transparent integration of the Twilio Client. That means that, in the ideal, an agent merely fires up the Chromebook and is ready to take or initiate calls. LiveOps Engage or its close cousin LiveOps for Salesforce will serve as the user interface with hooks into the company’s CRM system. Both companies have long been supporters of WebRTC-based management of audio and video streams, so very little customization is required.
Enterprise customers will contract for the service from LiveOps, which in turn will buy seat licenses for TwilioCX, whose provisioning partner will arrange for delivery of the Chromebook and headset. LiveOps will charge a single subscription price per seat (said to be around $90) that includes the aforementioned 7,500 minutes of time on Twilio’s network as well as monthly payments for the Chromebook.
Contact Center in a Box is an old idea that brings a new twist to the “No Capex” promise of the Conversational Cloud. Given its long-ago acquisition of GrandCentral and the evolution of Google Voice, I had thought that Google itself would have provided the routing intelligence for this type of service. Nonetheless, Google Enterprise has apparently weighed its alternatives and found Twilio to be the communications platform best suited for supporting communications between companies and their customers via Chromebooks and WebRTC.
West Interactive and Interactive Intelligence Partner to Broaden Cloud-based and Hybrid Contact Center Offerings
Last week, West Interactive and Interactive Intelligence formed a partnership to sell, service and support each other’s portfolios. Details of the deal are still evolving and more will follow. Meanwhile, we expect to see subtle but impactful changes in the roadmaps of both companies.
Each approaches the relationship from a position of strength — especially when it comes to cloud-based and hybrid implementations. West has historically focused on the larger enterprise market, while Interactive Intelligence addresses the mid-market. I clearly see how this partnership enables each of these firms to expand its market presence, whereby West now has a compelling offer for the mid-market and Interactive Intelligence gains access to enterprise-level customers.
The announced partnership comes on the heels of both companies making compelling recent announcements that increase the breadth of their product offerings and expand into adjacent markets: West recently acquired SchoolMessenger and Interactive Intelligence unveiled its PureCloud suite of offerings.
Each of these firms is entering into this arrangement with their eyes wide open. They have worked together over the years and know quite well the benefits such an arrangement can have.
What’s in it for each of them?
I see West leveraging Interactive Intelligence’s PureCloud benefits, such as customer-agent matching and legacy migration. This, incidentally, is something of a win for Amazon Web Services (AWS), whose cloud plays host to PureCloud. Additionally, Interactive Intelligence knows how to sell to and support medium-sized businesses and call centers. This is a market West has not focused on but is now able to address. On the other hand, Interactive Intelligence is getting a great partner that is a leader in the hosted and cloud market segment. West has an outstanding portfolio of large customers and the professional services chops to deliver enhanced services and support.
With the addition of AWS and Interactive Intelligence’s own platform, West’s salespeople and sales engineers are more like personal shoppers for its clients, with at least five hosted platforms from which to choose, adding HollyConnects, Genesys and West’s home-grown legacy resource. The two companies will ultimately employ the platform that speeds the process of adding new capabilities efficiently and cost-effectively. What advice would you give them?
The ascent of the smartphone has propelled interactive voice response (IVR) technology well beyond the roles of simple call deflection or agent avoidance. IVRs are entering a new world of choice and customer empowerment. Far from forcing its last gasp, the smartphone has breathed new life into each enterprise’s IVR and voice app infrastructure, augmenting resources that bring both visual and voice resources into each customer’s critical path.
Featured Research Reports are available to registered users only.
For more information on becoming an Opus Research client or purchasing the report, please contact Pete Headrick (firstname.lastname@example.org).
Taking advantage of high-powered computers and servers in The Cloud, long-time speech and text analytics specialist Nexidia is expanding the footprint of its core product well beyond the confines of the contact center. Touting the power of Neural Phonetic Speech Analytics™ to support Large Vocabulary Continuous Speech Recognition (LVCSR), Release 11.0 of its Interaction Analytics Platform delivers levels of precision in word accuracy that enable the sort of “deeper recognition” of meaning that business decision-makers seek when looking for insights that can shape company strategic planning and marketing decisions.
The new architecture and approach promises to scale massively. One of its communications clients, for instance, is already capturing and analyzing 70,000 to 100,000 hours of audio each day and storing it for over a year for future analysis. As Nexidia explains, using LVCSR “underneath” neural networks enables business decision-makers to make discoveries at the business unit level across 100% of the captured audio in order to support “metrics-based management.” The new release also moves toward real-time analytics, with support for up to 44 languages, assuming the purchase of additional language packs. The approach also enables business managers to merge in text input from social media posts and tweets.
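To put the volumes above in perspective, a quick back-of-the-envelope calculation shows what storing 100,000 hours of audio per day for a year implies. The hours/day figure comes from the text; the 16 kbit/s bitrate is our assumption for compressed telephony-grade audio, not a figure Nexidia has published.

```python
# Back-of-the-envelope storage estimate for the audio volumes cited above.
# 100,000 hours/day is from the text; 16 kbit/s is an assumed compressed
# telephony bitrate, not Nexidia's actual figure.

HOURS_PER_DAY = 100_000
BITRATE_BPS = 16_000                      # assumed codec bitrate (bits/sec)

seconds_per_day = HOURS_PER_DAY * 3600
bytes_per_day = seconds_per_day * BITRATE_BPS / 8
tb_per_day = bytes_per_day / 1e12
tb_per_year = tb_per_day * 365

print(f"~{tb_per_day:.2f} TB/day, ~{tb_per_year:.0f} TB retained per year")
# → ~0.72 TB/day, ~263 TB retained per year
```

Even under this conservative bitrate assumption, a year of retained audio runs to hundreds of terabytes, which is why the cloud-scale architecture matters.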
The benefit for large companies with high volumes of interactions is clear. Nexidia tells us that smaller companies can also benefit when the deployment is made in a multi-tenanted architecture. Medium-sized businesses will be able to perform the same sorts of queries in real time at competitive prices.
Amid talk of earbuds with cords that don’t tangle and a universal product scanner called Firefly, the inclusion of spontaneously invoked tech support called MayDay may have been glossed over at the launch party for Amazon’s “Fire” mobile phone. You’ll recall that the video assistance service launched as a much-touted feature of the Kindle Fire HDX last September. The 24/7/365 service agent can be contacted through a pull-down menu from any page or app running on the phone. Amazon continues to promise a “less than 15 second” response time before a phone user can engage with a video assistant.
At the launch event, Amazon CEO Jeff Bezos positioned MayDay as an alternative to the dreaded “Droid Forums” – referring to the support communities that Google expects Android users to turn to for technical support. Given that the Fire phone is positioned as the ultimate electronic catalog and order entry device for the broad range of products and electronic content that Amazon can deliver, the real question is “how long will it take for MayDay to morph into the ultimate intelligent assistant and electronic personal shopper?”
If the MayDay “rep” continues to be a live agent, it will be a matter of training, workforce optimization and knowledge management. Yet I still maintain that, for MayDay to be done right, it is just a matter of time before it turns into a fully automated intelligent assistant, in which case it will depend on advances in life-like automated speech, natural language processing (NLP), artificial intelligence (AI) and machine learning (ML). The pace of technological development in each of these disciplines has been dramatic. Amazon has a formidable set of intellectual property and demonstrable technical prowess and personnel thanks to a series of acquisitions, including Yap (speech recognition/transcription), Ivona (text-to-speech rendering) and Evi (formerly TrueKnowledge, for NLP, AI and ML).
Given all the other exciting features announced at the product launch, the short shrift allocated to MayDay is understandable; but my guess is that it will be one of the most important features of the new line of Amazon mobile phones.
Baseball may be “the national pastime,” but ordering pizza (and related beverages and side dishes) has to be a very close second. For decades the national chains, like Pizza Hut, Papa John’s and Little Caesars, have engaged in very aggressive advertising and marketing competition to capture share of the approximately $10 billion that U.S. households spend on pizza delivery each year. The major chains and franchises have also invested millions in Web- and phone-based technologies to make order entry simple, speedy and conducive to delivering a piping hot pie in minutes.
Enter Dom, the natural language intelligent assistant integrated with Domino’s mobile app and available for iOS and Android-based smartphones. It is part of the latest release of the Domino’s Pizza app, which is officially called Domino’s Pizza USA version 2.1.0 in the App Store and Google Play. The Domino’s app is a highly-efficient order-capture engine, enabling each smartphone user to designate whether they want delivery or pick up and then presenting them with simple ways to view offers (behind a button called “Coupons”) or to view a full menu that includes pastas, sandwiches, drinks, sides and desserts, in addition to pizzas.
The home screen now includes a red icon in the lower left corner. It looks like a microphone with a hat on and carries the “BETA” imprimatur. Tapping the icon wakes up Dom, though he doesn’t identify himself. He simply says “What’ll you have?” or “What can I get you?” in a cheery baritone. He’s pretty open to options. The smartphone’s screen illustrates the menu of options and shoppers can use their own words to “build a pizza.”
It’s very much a “mixed initiative” experience, meaning that saying “I want a large pizza with pepperoni and onion” is very much like touching the radio buttons on the “menu” page to make selections. In other instances, like when a user wants to see the Coupons, Dom will say, “For now, you need to make your selection by tapping it. We can talk more later.”
Dom is the product of work that Domino’s has been doing with Nuance Communications, and is a branded version of Nina Mobile, which made its debut roughly two years ago. Nuance, along with a cadre of companies that includes Amazon, Next IT, IBM, IntelliResponse, Interactions, Oracle, Linguasys and a handful of others around the globe, is making inroads into banking, insurance, retailing, medicine, higher education, and travel and hospitality. Executives from many of these companies will be sharing their ideas and showcasing their solutions at Opus Research’s Intelligent Assistants Conference, in San Francisco in September. Dom’s appearance on the scene signals an acceleration in the introduction of new use cases and implementations. Come to San Francisco in September to learn more and meet the people who are making intelligent assistance more ubiquitous.