“Markets are conversations.”
These are among the first words in the bible of open source, “The Cluetrain Manifesto,” by Doc Searls, David Weinberger and Chris Locke. My own derivative of that theorem is that “conversations are markets,” a tenet I hold near and dear as I establish the foundation of the Conversational Access Technologies Program at Opus Research.
The word “conversation” evokes the image of contemporaneous, voice-based communications. Yet we live in a world where both public and private discourse is carried on independent of time and modality, through text messaging, e-mail, “blog-and-respond” or voice mail.
Add to the mix Web-based chat, voice-based FAQs and “automated agents,” and you begin to perceive the full spectrum of Conversational Access Technologies – meaning the hardware, software and services that enhance the ability of people to reach and interact with other people, or with resources that reside in message stores and corporate databases.
Callers are not making their inbound contact in order to carry on a conversation for its own sake. Nonetheless, simple rules surrounding “turn-taking” and allowing interruptions are tacitly accepted, and they can make the difference between what is regarded as a pleasant experience and a frustrating one. Expectations were first honed in the early 1980s, when black phones and “800” numbers were the primary mechanism for reaching sales or technical support agents. The mission was to promote “friction-free” (or at least “toll-free”) access over the telephone to people who could answer questions, fix a billing problem, book a flight or close a purchase.
We Call It “Sequential Virtualization”
Opus Research estimates that, in 2004, business enterprises around the world could spend as much as $21 billion on conversational access technologies, broadly defined. The list of expenditures includes voice response units, enhanced routing software, “gateways” into corporate IT resources, high-speed IP-based networks, telecommunications services and all the “wetware” (meaning professional services as well as live agent payroll) required to carry on effective communications with customers, employees and partners.
This number may look unfamiliar because such expenditures are normally disaggregated into very different accounting categories. IVRs and their attendant phone switches have historically been purchased by call center managers or “the Telecom Staff.” Hardware and software for the corporate Web site is purchased by the IT department with the advice of executives from different functional departments (Marketing, HR, Finance). Meanwhile, the purchase of optical fiber and high-speed routers to support corporate Wide Area Networks follows a formulaic approach that is more a function of equipment depreciation schedules and historic purchasing cycles than of application requirements.
Getting There is Only Half The Fun
Front-end systems are being virtualized. By 2009, both voice and Web-based conversations will take place over the Internet. That means that today’s switching systems (PBXs, ACDs and key telephone systems) will be replaced by software running on telephony or call processing servers associated with WAN gateways. Precursors of today’s telephony servers are the T-Server offerings from Genesys and the Intelligent Contact Management (ICM) software from Cisco. On a smaller scale, the telephony interface managers (TIMs) offered by Intel, Intervoice and others in association with the Microsoft Speech Server are examples of call processing software/hardware combinations designed to marry routing instructions to speech processing and conversational applications.
Significant changes have already been made to the front end of these conversations. Agents don’t answer the phone right away because voice response systems are used to identify and authenticate the caller (“Please enter your account number and PIN”) while determining the purpose of a call and how best to direct it (“1” for account information, “2” for sales).
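To make that front-end choreography concrete, here is a minimal sketch in Python of the menu-routing step, assuming a hypothetical set of destination queues; it illustrates the logic, not any vendor’s API.

```python
# Minimal sketch of the menu-routing step described above: map the caller's
# stated purpose (a single DTMF digit) to a destination, falling back to a
# live agent when the choice is unrecognized. All queue names are hypothetical.

ROUTES = {
    "1": "account_information_ivr",   # self-service voice response application
    "2": "sales_queue",               # live sales agents
}

def route_call(menu_choice: str) -> str:
    """Return the queue or application that should receive this call."""
    return ROUTES.get(menu_choice, "general_agent_queue")

print(route_call("2"))   # -> sales_queue
print(route_call("9"))   # -> general_agent_queue (unrecognized choice)
```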
A slew of other firms are poised to simplify intelligent routing of inbound calls to the proper enterprise resource. These include all of the major telephony interface manufacturers – Aculab, Brooktrout, Audiocodes, NMS, Eicon – as well as lesser-known CTI specialists like Apropos, Eon and Upstream Works Software Ltd.
You Can Have Them at “Hello”
Today, the typical inbound contact center needs to identify a caller and the purpose of a call “upfront.” It employs “ANI” (pronounced Annie), or automatic number identification, to associate a call with its originating telephone number. That’s quick, but imprecise, largely because there is not always a one-to-one association between a caller and his or her telephone number.
Almost all automated scripts then ask the caller to input an account number and PIN to narrow down the possibilities. This protocol adds precision, but it can take a significant amount of time. In the case of simple interactions, like checking a bank balance, the process of authenticating a user can account for as much as one-third of the duration of a call.
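Put together, the ANI lookup and the account-number/PIN prompt amount to a two-step identification flow. The sketch below illustrates it with hypothetical data structures and function names: ANI alone suffices when the originating number maps to exactly one account, and the slower PIN dialogue kicks in when it does not.

```python
# Illustrative two-step identification: quick ANI lookup first, then an
# account-number/PIN dialogue when the ANI match is missing or ambiguous.
# The index, PINs and function names are hypothetical.

ANI_INDEX = {
    "+14155550100": ["ACCT-001"],               # one-to-one match
    "+14155550101": ["ACCT-002", "ACCT-003"],   # shared household line
}

PINS = {"ACCT-001": "1111", "ACCT-002": "2222", "ACCT-003": "3333"}

def identify_by_ani(calling_number: str) -> list[str]:
    """Return the candidate accounts associated with the originating number."""
    return ANI_INDEX.get(calling_number, [])

def authenticate(account: str, pin: str) -> bool:
    return PINS.get(account) == pin

def identify_caller(calling_number: str, entered_account: str, entered_pin: str) -> str | None:
    candidates = identify_by_ani(calling_number)
    if len(candidates) == 1:
        return candidates[0]          # quick, but only as precise as the ANI mapping
    # Fall back to the slower account-number/PIN dialogue.
    if not candidates or entered_account in candidates:
        if authenticate(entered_account, entered_pin):
            return entered_account
    return None

print(identify_caller("+14155550100", "", ""))              # ACCT-001 via ANI alone
print(identify_caller("+14155550101", "ACCT-003", "3333"))  # PIN disambiguates
```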
IBM, Nuance, ScanSoft, Vocera and others aim to shorten the process of phone-based identification and authorization by using speaker verification. Speaker verification (sometimes called “voiceprint”) technologies can be used to make the process of authenticating the caller (as opposed to just the telephone line) much more conversational and speedy. IBM has made conversational authentication a major component of demonstrations of WebSphere Voice Anywhere Access (WVAA). Nuance Verifier is arguably the leader in this category, with installations at several financial institutions, communications companies and transportation companies.
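Stripped to its essentials, a speaker-verification decision compares features extracted from the live utterance against a voiceprint captured at enrollment and accepts the claimed identity only when the match clears a threshold. The toy sketch below shows that decision with invented feature vectors and a simple similarity score; commercial engines such as Nuance Verifier rely on far richer statistical models.

```python
import math

# Greatly simplified stand-in for a speaker-verification ("voiceprint") check:
# accept the claimed identity if the live utterance's features are similar
# enough to the template enrolled earlier. Vectors and threshold are invented.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def verify_speaker(enrolled_voiceprint: list[float],
                   live_features: list[float],
                   threshold: float = 0.85) -> bool:
    """Accept only if the live utterance is close enough to the enrolled voiceprint."""
    return cosine_similarity(enrolled_voiceprint, live_features) >= threshold

enrolled = [0.12, 0.80, 0.31, 0.45]     # hypothetical enrollment features
live_ok  = [0.10, 0.78, 0.35, 0.44]     # same speaker, slightly different utterance
impostor = [0.90, 0.05, 0.60, 0.10]

print(verify_speaker(enrolled, live_ok))    # True
print(verify_speaker(enrolled, impostor))   # False
```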
Shortening the amount of time a customer spends off-hook is a win-win. From the enterprise perspective, shortening time off-hook or in queue saves money on network connections. At the same time, it shows respect for the caller’s time and leads to higher levels of satisfaction.
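A back-of-the-envelope calculation makes the point. Every input below is hypothetical except the “one-third of the call” figure cited above:

```python
# Back-of-the-envelope illustration of the network saving. The call volume,
# replacement authentication time and per-minute cost are hypothetical; the
# "one-third of the call" share comes from the text.

calls_per_year = 10_000_000                 # hypothetical inbound call volume
avg_call_seconds = 90                       # hypothetical balance-inquiry call length
auth_seconds_today = avg_call_seconds / 3   # authentication takes up to one-third of the call
auth_seconds_with_verification = 10         # hypothetical time with speaker verification
cost_per_minute_usd = 0.05                  # hypothetical toll-free network cost

seconds_saved_per_call = auth_seconds_today - auth_seconds_with_verification
minutes_saved_per_year = calls_per_year * seconds_saved_per_call / 60

print(f"Seconds saved per call: {seconds_saved_per_call:.0f}")
print(f"Network minutes saved per year: {minutes_saved_per_year:,.0f}")
print(f"Network cost avoided per year: ${minutes_saved_per_year * cost_per_minute_usd:,.0f}")
```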
Meeting Web-based Expectations
Voice response units, speaker verification software and call control routines are the key components of front-end conversational access. On the “back-end” are systems that equip agents, voice response units or Web sites with rapid access to all the information needed to provide high-quality service. That information resides in database servers and application servers that house the product files, customer files, transaction histories and all the administration and monitoring systems that strive to provide highly responsive (dare I say “conversational”) service.
This brings us to a different kind of conversation – interchanges between front-end systems and enterprise data repositories and business processes. Machines can have conversations too. You have to be living under a pretty big rock not to be clued in to Web services as the lingua franca for these process conversations going forward. From a telco perspective, Parlay/OSA serves as a similar conversation-starter for cross-network application interoperability, as wireless and fixed-line operators need to hand off application-relevant context such as location, payment and presence to other operators and third-party application providers.
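To picture such a machine-to-machine conversation, the snippet below assembles the kind of structured context document one operator might hand to another over a Web-services interface. The field names, endpoint and payload are invented for illustration and are not drawn from the actual Parlay/OSA specifications.

```python
import json

# One system handing application-relevant context (location, presence, payment
# capability) to another as a structured web-services payload. All field names
# and values are hypothetical.

handoff = {
    "subscriber": "tel:+14155550100",
    "context": {
        "location": {"cell_id": "US-CA-SF-0421"},
        "presence": "available",
        "payment": {"method": "carrier_billing", "approved_limit_usd": 25},
    },
}

request_body = json.dumps(handoff, indent=2)
print(request_body)   # this document would be POSTed to the partner's service endpoint
```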
The clarity we seek in future CAT Scans is not so much about protocols or even markup languages, but about the architecture and the new functionality unlocked by these many-layered conversations. From this perspective, developers are yesterday’s heroes – make way for architects as the new sources of value in an IT and network environment that is increasingly aligned around a Service Oriented Architecture (SOA). This is ‘on demand’ territory, and ‘always available’ answers are what speech has always been about.