Beyond Compliance: GDPR, ePrivacy to Drive Demand for Intelligent Assistance

Content marketing mills around the world have gone into overdrive in their treatment of the General Data Protection Regulation (GDPR), the regulatory framework set to go into effect in late May with the objective of strengthening protection of personal privacy. E-mail campaigns from system integrators and accounting houses appeal to IT and finance executives, showing ways to harden their infrastructure with end-to-end encryption and strong authentication to protect against hacks that expose personal data. At the same time, self-described legal and subject matter experts from the advertising, publishing and digital retailing sectors have pored through the letter of the law to find the loopholes that global brands and businesses are bound to cite as they make minimal changes to the routine practices that define AdTech and MarketingTech.

The common thread among these subject matter experts foments fear, uncertainty and doubt surrounding “compliance” with the strictures of GDPR and the closely related set of regulations surrounding ePrivacy. Yet, with the deadline looming and amid the gnashing of teeth over whether storage resources are hardened against unwanted hacks and whether customer records are sheltered from unwanted mining by third-party aggregators, I strongly believe it’s time to look beyond mere compliance and address the obvious need for brands and enterprises to provide individuals with the tools they need to take charge of the digital exhaust they create through both online and real-world activities.

An Obvious Role for an Intelligent Assistant

When implemented correctly, Intelligent Assistants (in the form of digital personal assistants, chatbots or virtual agents) are poised to play important roles in the shift of power from advertisers, publishers, search specialists and social networks directly to individuals. At last year’s Intelligent Assistants Conference in London, Andy Tobin from Evernym was very clear in his assessment of the predicament of today’s major brands: by tracking the activity of individuals as they search, browse the Web or otherwise make contact with their favorite businesses, those brands aggregate and store personal information at their own peril. Under the piercing light of GDPR and ePrivacy laws, the information they have gathered looks like a “toxic asset” whose mishandling can lead to huge fines and destroy careers.

The CEOs, CIOs and CSOs of major brands have reputations to protect and a financial incentive to outsource the management of personal data. Who better to outsource it to than the individuals themselves, through a decentralized, self-sovereign identity and personal information management framework?
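
To make that idea concrete, here is a minimal sketch (in Python) of what handing control back to the individual could look like. None of the class or field names come from any actual vendor or standard; they simply illustrate a personal data store in which the person holds the record and a brand receives only the attributes covered by an explicit, revocable consent grant.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class ConsentGrant:
    """A revocable grant allowing one relying party to see specific attributes."""
    relying_party: str
    attributes: Set[str]
    revoked: bool = False

@dataclass
class PersonalDataStore:
    """Held and managed by the individual, not by the brand."""
    attributes: Dict[str, str] = field(default_factory=dict)
    grants: Dict[str, ConsentGrant] = field(default_factory=dict)

    def grant(self, relying_party: str, attributes: Set[str]) -> None:
        self.grants[relying_party] = ConsentGrant(relying_party, attributes)

    def revoke(self, relying_party: str) -> None:
        if relying_party in self.grants:
            self.grants[relying_party].revoked = True

    def disclose(self, relying_party: str) -> Dict[str, str]:
        """Return only the attributes the individual has consented to share."""
        grant = self.grants.get(relying_party)
        if grant is None or grant.revoked:
            return {}
        return {k: v for k, v in self.attributes.items() if k in grant.attributes}

# The brand never holds the full record, only what the current grant allows.
me = PersonalDataStore(attributes={"email": "me@example.com", "city": "London", "dob": "1970-01-01"})
me.grant("favorite-retailer.example", {"email", "city"})
print(me.disclose("favorite-retailer.example"))   # {'email': ..., 'city': ...}
me.revoke("favorite-retailer.example")
print(me.disclose("favorite-retailer.example"))   # {}
```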

The rub has been that identity management, along with establishing the rules that govern when personal information is shared, is very complicated. It takes work, and over the years we’ve learned that individuals just don’t do it. They don’t keep profile information up to date, and they don’t go to the privacy controls on Google.com or Facebook to make sure those settings reflect their current “state.” And everyone has a bad habit of posting way too much information for interpretation by the analytic resources that brands maintain in order to target the placement of advertising or craft what they think is a relevant email marketing piece.

“Intelligent Assistants” – a term that Opus Research uses to define a range of personal digital assistants spanning chatbots and digital virtual agents – can act as AI-infused resources that understand the individuals they are interacting with and appreciate their current “state.” They can recognize intent and share just the information needed to fulfill that intent. The ability of an Intelligent Assistant to “understand” what an individual wants and provide what it takes to get something done is on display every time he or she says “Hey Google, Good Morning” to the Google Home “smart speaker” in the bedroom. At that point, if the individual uses Google Calendar, Gmail and Google Maps, the Google Assistant takes over. It recites the news and weather, describes the first appointment of the day and calculates what time to leave the house in order to get to that appointment on time.
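
As a rough illustration of that “share just what is needed” behavior (the intent labels, keyword matching and profile fields below are hypothetical, not Google’s implementation), an assistant can map each recognized intent to the narrowest slice of profile data required to fulfill it:

```python
# Minimal sketch: map a recognized intent to the minimum data needed to act on it.
# Intent names, keyword matching and profile fields are illustrative only.

PROFILE = {
    "home_address": "10 Example Road",
    "first_meeting": "09:30 at client office",
    "news_topics": ["technology", "markets"],
    "email_inbox": "...",          # never disclosed by any intent below
}

INTENT_DATA_NEEDS = {
    "morning_briefing": ["first_meeting", "news_topics"],
    "commute_estimate": ["home_address", "first_meeting"],
}

def recognize_intent(utterance: str) -> str:
    """Toy intent classifier; a real assistant would use a trained NLU model."""
    text = utterance.lower()
    if "good morning" in text:
        return "morning_briefing"
    if "leave" in text or "traffic" in text:
        return "commute_estimate"
    return "unknown"

def fulfill(utterance: str) -> dict:
    intent = recognize_intent(utterance)
    needed = INTENT_DATA_NEEDS.get(intent, [])
    # Only the fields required by this intent ever leave the profile.
    return {name: PROFILE[name] for name in needed}

print(fulfill("Hey Google, good morning"))
# -> first_meeting and news_topics only; home_address and email stay private
```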

“Informed Consent” in this context is a possibility and should be a necessity. More likely it is an acquired taste. Google invests considerable effort in sending emails to Assistant users suggesting how to make the overall experience of interacting with it better. Over time, each user engages in a process of co-creating an individualized (or personalized) experience. Just as my search results in Google’s mobile app will differ from my wife’s or yours, the actions that Google Assistant undertakes on my behalf are different from what it would do for any other person. It is informed by what it knows about me. Some of that information I supply knowingly and explicitly, but much of it is the result of reading my Gmail, capturing my search terms and taking stock of my browsing habits. In this respect, Google doesn’t need tracking cookies – those small files that Web sites place in my browser to record where I came from and where I’m going, as well as what I did while I was there. By starting in Google and staying in Google, I’ve made Google privy to my activity.

Google comes closest to being a trusted entity that already knows a heckuva lot about each individual, but it may (or should) be disqualified because its profitability has long been predicated on delivering advertising to the pages you visit based on “AdWords” – the search terms it tracks on advertisers’ behalf. Google Assistant may feel like your personal agent, but Google at this point in time makes its money by treating you and your business as the product that is delivered to advertisers for a fee.

Amazon Alexa suffers from a similar conflict of interest. You may benefit from how the information Amazon has amassed on your buying habits, preferences, frequency and loyalty is analyzed and employed to support relevant recommendations and rapid fulfillment of new orders. Yet, once again, you are the product or captive audience of a single vendor who “gets you” but uses its knowledge of you to enhance its profit.

There’s an Intelligent Assistant in Your Future

We entered 2018 with acute awareness that there is a need for easy-to-use tools that enable individuals to take control of their digitally enhanced lives by using their own words. Siri, Bixby, Alexa, Google Assistant and even Cortana demonstrate constant improvement in the ability to understand words, recognize intent and even sentiment in order to provide services. Thousands of messenger bots – on WeChat, Facebook Messenger, Slack and elsewhere – are fully capable of recognizing and fulfilling the intent of individuals who communicate through text. Incidentally, this makes buttons, “carousels” and emojis part of a robust Intelligent Assistant’s vocabulary. Both awareness and use of these resources are on the rise. For instance, a Conversational Commerce Survey conducted by Cap Gemini in November 2017 shows that 40% of respondents indicated they “would use a voice assistant, instead of a mobile app or Web site” three years from now. That’s up from 24% today.

My opinion is that the percentage of regular users will be much higher. Trust in the service provider and confidence that the Intelligent Assistant can succeed in understanding an individual’s intent will be the prime determinants of popularity. At Conversational Commerce Conference-London (May 8-9), in panel discussions and related Deep Dive sessions, we will tackle major questions surrounding whether Intelligent Assistants will emerge as trustworthy agents by providing natural language-based tools that protect an individual’s privacy, the “right to be forgotten,” portability of personal information and self-sovereign identity management.



Categories: Conversational Intelligence, Intelligent Assistants
