Like Billy Pilgrim, Generative AI (GenAI) is unstuck in time. Or maybe we’re the ones who are unstuck as we engage with plans to modernize CX and Collaboration technologies. Think about it. March Madness has extended well into April. While watching, we see more than a few “sixth-year seniors” running up and down the court, even at this late point in the playoffs. Calendar time has become an abstraction.
My colleague Amy Stapleton provides some very useful perspectives on vendor efforts to incorporate GenAI into enterprise IT infrastructure in this post: “Insights and Highlights from Enterprise Connect 2024”. My take-away from Amy’s piece is that there is “a discernible gap between the promises of AI and the practical realities businesses face when deploying these technologies.” In other words, vendors are offering solutions that enterprise buyers are not yet ready to assimilate.
The Product of Technologies Past
The term “GenAI” refers specifically to a type of Artificial Intelligence (AI) that is capable of generating new content, such as text, images, audio, or even video, rather than simply analyzing or processing existing data. It traces its roots to, and is powered by, Machine Learning and Neural Network technologies that date back to the 1960s. It has been turbocharged in recent years by the introduction of faster processing resources capable of supporting newer approaches, like Deep Learning, Generative Adversarial Networks, and Variational Autoencoders. But we don’t need to know these terms. We only need to observe that such systems can generate content in a very human-like way.
In the world of customer care and CX, the people responsible for speech-enabling interactive voice response systems (IVRs) had a preview of what it takes to train machines to understand natural language input accurately and to fine-tune responses so that they comply with company or industry requirements, consistently and at scale. Today’s solutions pretty much eliminate the need to tag natural language input based on “intent.” The “P” in “GPT” stands for “Pre-trained,” and that means companies can enjoy significant savings in time and labor costs by leveraging the power of pre-training to accurately understand natural language inputs.
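To make that concrete, here is a minimal sketch of how a pre-trained model can map a caller’s utterance to an intent without any hand-tagged training data. It assumes the Hugging Face transformers library and the facebook/bart-large-mnli checkpoint; the utterance and the intent labels are illustrative, not drawn from any particular deployment.

```python
# Minimal sketch: infer caller intent with a pre-trained model, no
# hand-labeled intent data required. Assumes the Hugging Face
# `transformers` library is installed; labels below are illustrative.
from transformers import pipeline

# Zero-shot classification rides on a model pre-trained for natural
# language inference, so no upfront intent-tagging pass is needed.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

utterance = "I was double-billed last month and need that charge reversed."
candidate_intents = ["billing dispute", "technical support", "cancel service", "general question"]

result = classifier(utterance, candidate_labels=candidate_intents)
print(result["labels"][0], round(result["scores"][0], 3))  # top-ranked intent and its score
```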
The “T” in “GPT” stands for “Transformer,” and it refers to an architecture for neural networks that was introduced in 2017. Rather than relying on old-fashioned word-spotting or pattern detection, it learns dependencies between elements in a sequence. You’ve probably heard it referred to as “auto-complete on steroids” or something like that, because at its core it is predicting what the next word in a sentence is going to be. This enables a system to generate convincing, human-like output that, by design, is not grounded in the real world. So-called “hallucinations” are therefore more of a feature than a bug of this approach.
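For the curious, here is a minimal sketch of that next-word machinery in action, using the small, openly available GPT-2 model. It assumes the Hugging Face transformers library and PyTorch are installed; the prompt is illustrative.

```python
# Minimal sketch of "auto-complete on steroids": asking a small
# pre-trained Transformer (GPT-2) which token it expects to come next.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Thank you for calling, how can I"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The logits at the final position rank every token in the vocabulary by how
# likely it is to appear next: learned dependencies, not grounded facts.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))
```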
Building Up to the Gap Year
GenAI is the precocious home-schooled student that has demonstrated an uncanny ability to learn basic facts, math skills, and programming acumen, and then “test well” when exam time comes around. These skills propel the prodigy to an early graduation and a wide world of opportunities, as well as high expectations for success and riches. Given the student’s penchant for attracting attention on every social medium and in the popular press, the world is, effectively, monitoring its progress, as well as its pratfalls.
All the attention has made matriculation to the so-called “next level” difficult. Expectations are high, but real-world implementations are modest. Demos promise automated personal assistants that can book travel, schedule meetings, write promotional copy and build compelling PowerPoint presentations. Reality brings us meeting transcriptions, call summaries, and glorified FAQs. It has become increasingly clear that GenAI will benefit from a “gap year” to discover more about its potential, achieve maturity and gain acceptance in the real world.
Whose Gap Year Is It?
GenAI needs time to find itself. Machine Learning is not a thing; it is a long, ongoing process, one that has been accelerated by the advent of new models, architectures, and hardware. Giving GenAI its gap year provides “us humans” with additional time to define our own roles and responsibilities vis-à-vis the new tech. At a minimum, these include determining:
- How to orchestrate integration of GenAI with existing workflows (both for customers and employees),
- What domain-specific or company-specific information and knowledge should be used for training the GenAI or for augmenting its reference data, à la RAG (Retrieval Augmented Generation), and
- Where and how to review and place guardrails on the content of responses that a GenAI-informed bot will provide to both customers and employees (a minimal sketch of RAG plus a guardrail check follows this list).
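To make the last two items a bit more concrete, here is a minimal sketch of how retrieval augmentation and a guardrail check might fit together. Everything in it (the tiny knowledge base, the call_llm stub, the blocked-topic list) is hypothetical and illustrative; it is not any vendor’s actual API.

```python
# Minimal sketch, assuming nothing about any specific vendor product:
# retrieval-augmented generation (RAG) pulls company-specific passages into
# the prompt, and a simple guardrail decides whether a draft reply can be
# sent to a customer. `call_llm`, the knowledge base, and the blocked-topic
# list are hypothetical placeholders.
from typing import List

KNOWLEDGE_BASE = [
    "Refunds are issued to the original payment method within 5-7 business days.",
    "Premium support is available 24/7 for enterprise accounts.",
]

BLOCKED_TOPICS = ["legal advice", "medical advice"]  # illustrative guardrail policy


def retrieve(question: str, documents: List[str], top_k: int = 1) -> List[str]:
    """Naive keyword-overlap retrieval standing in for a real vector search."""
    words = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]


def call_llm(prompt: str) -> str:
    """Stub for a call to whatever foundation or domain-specific model is in use."""
    # A real implementation would send the prompt to a model endpoint.
    return "Refunds are issued to the original payment method within 5-7 business days."


def answer(question: str) -> str:
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    prompt = f"Answer using only this reference material:\n{context}\n\nQuestion: {question}"
    draft = call_llm(prompt)
    # Guardrail: escalate flagged drafts to a human rather than the customer.
    if any(topic in draft.lower() for topic in BLOCKED_TOPICS):
        return "This request has been escalated to a human agent."
    return draft


print(answer("How long do refunds take?"))
```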
Because the LLMs that fuel GenAI are constantly learning, and new models are introduced in rapid-fire fashion, we humans feel pressured to move quickly in our efforts to assimilate them. Failure equates to… well… failure. Giving GenAI a gap year to find itself and refine its world view gives the humans who have to define its role in the enterprise additional time to address all levels of change management required to ensure success. It also provides time to address the other gap that plagues enterprises in the time of GenAI: the one between the solutions that leading CCaaS and UCaaS infrastructure providers offer and those that companies feel comfortable putting into practice.
One thing is for certain: human intervention is a prerequisite. We have a responsibility to oversee the training process (curating training data), and we have to be there to review output for the foreseeable future. In the long run, GenAI will have positive impacts on profitability, employee efficiency, and customer experience. Instead of predicting yet another decline into AI winter, we have a chance to define realizable goals for a well-trained GenAI, as well as for the live customers and employees who stand to benefit from its maturation.
Postscript: Reality Check
My idea of a gap year is a thought experiment rather than a prescription. AI thought leaders have already divided into two camps: “Believers” and “Skeptics.” Members of the former advocate “full speed ahead” on a path toward autonomous AI assistants. Those in the latter will continue to raise issues about ethics, transparency, privacy, bias, and compliance. Solution providers fall squarely in the first camp, and they will continue to introduce new features, functions, and services built on the latest releases of foundation LLMs or their smaller cousins, domain-specific language models (DSLMs). Enterprise decision makers, along with many journalists and analysts, have joined the Skeptics, collectively chronicling failures and predicting an impending AI Winter.
Recognizing that GenAI’s development will not come to a halt during the gap year, I encourage a third way: Believers and Skeptics working together to ensure that GenAI comes of age complete with the proper guardrails and opportunities for human intervention. Both need to recognize that the precocious student is destined to accomplish great things when it finally lives up to its potential. We, the prospective beneficiaries, should monitor GenAI’s progress and influence its development while helping it find gainful employment.