Open Source Grok – Should We Care?

For those of us in the world of customer experience and contact centers, large language models (LLMs) are nothing new. We may not have reached Gartner’s Trough of Disillusionment yet, but it’s possible we have passed the Peak of Inflated Expectations. Or have we? Either way, things in the world of LLMs remain hot. Elon Musk, never one to shy away from stirring the pot, created more news by delivering on his promise to open-source Grok, the home-grown LLM from his AI company xAI.

Should those of us in customer experience care? We may not need to care too deeply, but it never hurts to stay informed. Let’s face it: we all know by now that language models perform remarkably well at many useful tasks, from call summarization to classification to answer generation. Most companies and CCaaS providers still rely on proprietary frontier models such as OpenAI’s GPT-3.5 and GPT-4 for natural language processing tasks. But these models come at a steep cost, with usage fees that can quickly add up. So the release of Grok, a powerful language model that could democratize access to cutting-edge AI capabilities, is certainly newsworthy.

Freeing Grok is Significant for Several Reasons

Open for Business: Released under the permissive Apache 2.0 license, Grok is free to use for any purpose, including commercial applications. This removes a major barrier to entry, as many openly released models have come with licensing restrictions or usage limits. The availability of a potent language model at no licensing cost could usher in a new era where companies can leverage AI technologies without paying per-request API fees (though hosting and running the model is far from free).

A Blank Canvas: Grok is provided as a pre-trained base model, meaning it hasn’t undergone any instruction tuning or safety alignment. While this “raw” state means the model can’t be deployed out-of-the-box, it also presents an opportunity. Companies can fine-tune Grok using their own data and guidelines, potentially achieving better performance than with models whose behavior has already been shaped by someone else’s filtering choices.
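
To make that concrete, here is a minimal sketch of what fine-tuning an open-weights base model on your own contact center data could look like, using the Hugging Face ecosystem with LoRA adapters. The model id and dataset file below are placeholders rather than official xAI artifacts, and anything at Grok’s scale would need a serious multi-GPU setup behind it.

```python
# Minimal LoRA fine-tuning sketch for an open-weights base model.
# Assumptions: a transformers-compatible checkpoint exists at BASE_MODEL
# (placeholder id) and DATA_PATH is your own JSONL file of {"text": ...} rows.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "your-org/grok-1-base"              # placeholder, not an official repo
DATA_PATH = "contact_center_transcripts.jsonl"   # placeholder for your own data

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# LoRA trains small adapter matrices instead of the full set of base weights.
lora = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "v_proj"])  # module names vary by model
model = get_peft_model(model, lora)

dataset = load_dataset("json", data_files=DATA_PATH)["train"]
dataset = dataset.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="grok-finetuned",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The point is less the specific libraries than the workflow: your data and your guidelines go in, and the resulting model reflects your business rather than someone else’s alignment decisions.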

A Mixture of Mastery: Grok employs a “mixture of experts” architecture, which routes each token to a small subset of specialized expert sub-networks, so only a fraction of the model’s parameters are active on any given input. This approach can deliver the capability of a very large model at a lower compute cost per token, which helps particularly on complex tasks.
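
As a rough illustration (a toy sketch, not xAI’s implementation), the routing idea looks like this: a small router scores the available experts for each token, only the top-scoring few are run, and most of the layer’s parameters sit idle on any single forward pass.

```python
# Toy top-k "mixture of experts" layer: a router picks which experts run per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                              # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                 # run only the chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

print(ToyMoELayer()(torch.randn(10, 64)).shape)        # torch.Size([10, 64])
```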

Size Matters: With a staggering 314 billion parameters, Grok is among the largest language models ever released with open weights. Larger models generally exhibit stronger reasoning capabilities, so Grok’s size suggests impressive potential once it is fine-tuned for specific use cases.

Big Model, Big Footprint: Of course, such immense power comes with hardware demands. Grok’s released weights take up roughly 300GB of storage, and running the model at usable speed realistically calls for a multi-GPU server rather than a single workstation. While this might be out of reach for smaller organizations, larger companies and cloud providers should have the resources to harness Grok’s capabilities.
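
For a rough sense of scale, the back-of-the-envelope arithmetic below shows why the checkpoint is so large and why serving it lands in multi-GPU territory. The bytes-per-parameter figures are standard storage sizes for 8-bit and 16-bit weights, not numbers published by xAI.

```python
# Rough memory math for a 314B-parameter model.
# Assumption: ~1 byte per parameter for an 8-bit checkpoint, 2 bytes for fp16/bf16.
PARAMS = 314e9

gb_8bit = PARAMS * 1 / 1e9    # ~314 GB, in line with the ~300GB checkpoint figure
gb_16bit = PARAMS * 2 / 1e9   # ~628 GB if the weights are loaded in half precision

print(f"8-bit weights on disk : ~{gb_8bit:,.0f} GB")
print(f"fp16 weights in memory: ~{gb_16bit:,.0f} GB")
print(f"80GB GPUs needed just to hold fp16 weights: {gb_16bit / 80:.1f}")  # ~7.9
```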

Another Evolution in the AI Landscape

Even those of us in customer experience and adjacent fields would do well to stay apprised of these rapid advancements. The release of such a powerful open source model could presage an era of highly capable, very cost-effective AI tools for enterprises and service providers alike.

The dominance of a single foundation model provider, OpenAI, and its ability to set the pricing regime for the entire market may be on the wane. In the same vein, the practice of filtering and constraining how models behave before they reach the market (Google’s Gemini being a recent example) may also be changing. The democratization of AI seems to be upon us, and companies that understand how to leverage open source models stand to gain a competitive edge.


