Knowbl, a Conversational AI specialist whose products and services dramatically reduce the time and effort it takes for a company to build and customize a branded bot, has received seed-round funding from a group led by Vestigo Ventures, an early-stage venture capital firm headquartered in Cambridge, MA. The business value of Knowbl’s solutions is not lost on Ian Sheridan, cofounder of Vestigo and member of Knowbl’s board. In the press release he observed that “Knowbl is poised to accelerate enterprise adoption of the digital concierge experience,” citing early success and traction in highly regulated industries like financial services.
Terms of the deal are not publicly disclosed, but all investments in Conversational AI (particularly those of a generative nature) are dwarfed by Microsoft’s $10 billion stake in OpenAI, whose ChatGPT has provided the sort of technological breakthrough exemplified by the iPhone (for smartphone users), Netscape (for Web browsers) or perhaps TikTok (for user-generated content). After its introduction in November 2022, ChatGPT reached 1 million users in roughly five days. That was significantly faster than the 74 days it took Apple to propagate the iPhone, two and a half months for Instagram, five months for Spotify or 10 months for Facebook. For the record, it now exceeds 100 million monthly active users, having eclipsed that milestone in a mere two months, compared to TikTok’s nine months and Instagram’s 30.
It is seldom productive to treat new technologies as participants in a horse race because they are not on the same track in the same time frame. ChatGPT came on the scene to satisfy pent-up demand for a “natural” user interface to Large Language Models (LLMs), which are pre-trained to provide elegant responses to prompts that, for nearly a decade, only developers or well-credentialed professionals could craft. IBM, Google, Meta, Amazon, Baidu, Oracle and a few others have invested in the hardware and software to amass tremendous amounts of data. They have also invested in technologies like deep neural networks (DNNs) and “cognitive” computing that enable computing platforms to understand natural language input and generate responses that are impressively human-like.
ChatGPT is Not New
As Yann LeCun, chief AI scientist at Meta, pointed out in January, “In terms of underlying techniques, ChatGPT is not particularly innovative… It’s nothing revolutionary, although that’s the way it’s perceived in the public.” His statement is even more true when thinking of precursors in the LLM realm like Google’s DeepMind or OpenAI’s own GPTs (GPT-1, GPT-2 and GPT-3 so far, with GPT-4 already in use on a limited basis). Here it is important to recognize what “GPT” stands for: “Generative Pre-trained Transformer”. I like to think of that in reverse order.
- “Transformers” are not the autonomous robots of moviedom; rather, they are a type of artificial neural network architecture that transforms input sequences to support such things as natural language processing, machine translation or summarization.
- “Pre-trained” refers to language models that are subjected to very broad pattern recognition to support the prediction of the next word that underpins both natural language processing and generative AI.
- Finally, “Generative” models are able to produce text output that is indistinguishable from prose written by humans. It is this aspect of ChatGPT’s performance that seems to have most impressed users.
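The “predict the next word” objective behind pre-training, and the generative loop that repeatedly applies it, can be illustrated with a deliberately tiny sketch. This is a toy bigram model on a few words (nothing like OpenAI’s actual architecture or scale), offered only to make the mechanics concrete:

```python
# Toy illustration of next-word prediction (NOT how GPT is actually built):
# a bigram model trained on a tiny corpus. LLM pre-training scales the same
# "predict the next token" objective up to billions of parameters.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Pre-training" step: count which word follows which in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

def generate(start: str, length: int = 4) -> str:
    """'Generative' step: repeatedly append the predicted next word."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the"
```

Real transformers replace the bigram table with learned attention over long contexts, but the loop is the same: predict, append, repeat.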
ChatGPT is a triumph in packaging Conversational AI that, in the process of fostering mass adoption, also exposed multiple deficiencies and shortcomings in the current “state of the art”. The hundred million users acting as beta testers were not shy about sharing their experience on social media, and many paid analysts and reporters were more than happy to report on the negative ones. People would do the equivalent of Googling themselves by prompting ChatGPT to “provide a 200-word biographical sketch of [your name here].” The results were always written in a confident tone but often departed from fact. For example, it would “make up” educational milestones that were never reached, awards never won and non-existent siblings or other relatives.
Longer conversations were fraught with another major fault, which scientists at Google, among others, referred to as “hallucinations.” The term refers to a tendency for generative AI in association with an LLM to give a totally convincing, but totally made-up, answer. It’s not a bug; it is a baked-in behavior that, for the foreseeable future, creates a permanent role for so-called “humans-in-the-loop” to monitor output and ensure veracity.
Opportunities Abound in Making Conversational AI Enterprise-Ready
In the context of enterprise applications for Conversational AI, hallucinations may be the least of the underlying problems. They can be dealt with by subject matter experts, who can be employed either to engineer prompts that optimize results or to monitor and correct output they determine to be incorrect. The major challenges confronting enterprise executives who want to reap the value of LLMs to support specific marketing or customer care objectives are a lack of currency, transparency and branding. The content (training corpora) of LLMs is not current. In other words, it won’t provide travelers with current airfares or flight schedules. It is not aware of the current state of a company’s inventory of kitchen appliances. Most important, it can’t respond to frequently asked questions like “where’s my package?” or “what’s my credit card balance?”
That’s where Knowbl steps in with a set of capabilities and an engagement model that leverages pre-trained LLMs, Transformers and generative AI to enable businesses to launch branded, purpose-built bots that can provide up-to-date info with little or no training. The capability was demonstrated in a live demo at Opus Research’s Conversational Cloud Conference 2022 in New Orleans. Leveraging LLMs by adding currency and branded information is an emerging opportunity area that is destined to be fast-growing.
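One common pattern for layering current, branded information onto a pre-trained model is to retrieve fresh business data at query time and inject it into the prompt, so the model answers from live facts rather than its stale training corpus. The sketch below is a hypothetical, minimal illustration of that retrieve-then-prompt idea (the knowledge base, function names and naive keyword matching are all assumptions for illustration, not Knowbl’s actual implementation):

```python
# Hypothetical retrieve-then-prompt sketch (illustrative only; not Knowbl's
# product). Current business facts are looked up at query time and injected
# into the prompt instead of relying on the LLM's out-of-date training data.

# Hypothetical up-to-date knowledge base, refreshed from company systems.
knowledge_base = {
    "order status": "Order #1234 shipped on March 3 and arrives March 7.",
    "card balance": "Your current credit card balance is $231.88.",
    "store hours": "Stores are open 9am-6pm, Monday through Saturday.",
}

def retrieve(question: str) -> str:
    """Naive keyword retrieval: pick the entry whose key shares the most
    words with the question (real systems use embeddings/vector search)."""
    q_words = set(question.lower().replace("?", "").split())
    best = max(knowledge_base, key=lambda k: len(q_words & set(k.split())))
    return knowledge_base[best]

def build_prompt(question: str) -> str:
    """Ground the model: the fresh fact goes into the prompt so the answer
    comes from current, branded data rather than the training corpus."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

print(build_prompt("What is my card balance?"))
```

In production the lookup would hit live inventory, order or account systems, which is what lets a branded bot answer “where’s my package?” when the underlying LLM cannot.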
Hyperscalers like Microsoft, Google and Meta have created a rich foundational technology, LLMs, on which opportunistic entrepreneurial firms like Knowbl can create solutions that meet the needs of businesses and their customers. The near-term impact of ChatGPT, Bard and other conversational front ends is to raise end-users’ expectations of what is possible (and even anticipated), while exposing opportunities for us, mere humans, to continue to bring value: initially as prompt engineers or output editors, but in the long term as “heat seekers” who detect where the foundational technologies need improvement and bring in innovative technologies to solve CX and employee productivity challenges.
As Vestigo’s investment in Knowbl signals, innovation options abound and expectations are huge where the giants are investing billions.
Categories: Conversational Intelligence, Intelligent Assistants, Articles