AI in customer experience (CX) has evolved from basic rule-based bots to advanced conversational LLMs that can understand and respond to complex inquiries with a natural flow. Now a new breed of AI model, the reasoning LLM, is emerging, promising deeper analytical capabilities. DeepSeek R1 burst onto the scene and caused a stir, not only because of claims that it was built with relatively little investment, but also because of the fascination surrounding its chain-of-thought reasoning, which lets it work through complex questions in a structured way. But do these models fit into the CX landscape, or are they better suited to niche problem-solving tasks?
How Reasoning LLMs Differ
Traditional conversational LLMs, such as GPT-4o, Claude 3.5 Sonnet, and Llama 3.2, excel at understanding intent, retrieving relevant information, and responding fluently. They power chatbots, summarize call transcripts, assist human agents, and even analyze sentiment. Their value is clear: they enhance efficiency and improve customer interactions without excessive computational costs.
Reasoning LLMs, on the other hand, go beyond conversational fluency. Models such as DeepSeek R1 and OpenAI’s o3 are designed to break down complex problems, weigh multiple variables, and develop structured solutions. A recent example from Rasa, a leading open generative conversational AI platform, showed a reasoning model analyzing a task-based prompt and suggesting improvements, demonstrating its potential for optimizing processes. Yet many experts point out that running these models is costly and slow, and businesses are still searching for compelling use cases that justify the investment.
The Challenge of Applying Reasoning LLMs to CX
In customer service, most needs are met by conversational LLMs combined with retrieval-augmented generation (RAG) for accessing knowledge bases. These models efficiently handle FAQs, troubleshooting, and sentiment analysis without requiring the deep analytical capabilities of a reasoning LLM. Many of the most impactful AI-driven CX improvements — such as analyzing call transcripts, identifying trending issues, and enhancing chatbot accuracy — rely on well-optimized, relatively lightweight AI models.
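To make that concrete, here is a minimal sketch of the conversational-LLM-plus-RAG pattern: retrieve the most relevant knowledge-base entries, then pass them to the model as grounding context. The tiny in-memory knowledge base and the call_llm() stub are illustrative placeholders, not any particular vendor's API.

```python
# Minimal sketch of the conversational-LLM + RAG pattern described above.
# The knowledge base, the naive retrieval scoring, and call_llm() are
# illustrative placeholders, not a specific product's implementation.

KNOWLEDGE_BASE = [
    "To reset your router, hold the reset button for 10 seconds.",
    "Refunds are processed within 5-7 business days.",
    "You can update billing details under Account > Payment Methods.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank knowledge-base entries by naive word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a conversational LLM call (e.g. a chat-completions API)."""
    return f"[model response based on a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    """Compose a grounded prompt from retrieved context and ask the model."""
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer the customer using only the context below.\n"
        f"Context:\n{context}\n\nCustomer: {query}"
    )
    return call_llm(prompt)

print(answer("How long does a refund take?"))
```

In production, the word-overlap scoring would be replaced by embeddings and a vector store, but the overall shape of the pipeline stays the same: retrieve, ground, respond.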
Given the high cost and latency of reasoning models, their role in CX remains unclear. Companies may struggle to justify using them for standard interactions when existing solutions already perform well. While they can break down complex decisions, most customer service interactions do not require the kind of deep multi-step reasoning that these models provide.
Where Reasoning LLMs Could Make a Difference
Despite these limitations, reasoning models could find meaningful applications in CX, particularly for complex problem-solving. In industries with intricate troubleshooting needs—such as telecommunications or technical support—a reasoning LLM could analyze patterns across multiple customer interactions to identify root causes and recommend long-term fixes.
Another potential use is in customer journey orchestration. Instead of merely responding to individual queries, a reasoning model could map out optimal multi-step resolutions, considering various contextual factors. For example, a financial services AI assistant could guide customers through debt consolidation, taking into account loan terms, interest rates, and credit scores to suggest the best course of action.
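As a rough illustration of what that orchestration could look like, the sketch below packages a customer's loan terms, credit score, and budget into one structured prompt for a reasoning model to work through step by step. The CustomerContext fields and the call_reasoning_llm() stub are hypothetical, included only to show the shape of the approach.

```python
# Hypothetical sketch of handing structured customer context to a reasoning
# model for a multi-step plan, as in the debt-consolidation example above.
# Field names and call_reasoning_llm() are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class CustomerContext:
    loans: list[dict]      # e.g. [{"balance": 12000, "rate": 0.19}, ...]
    credit_score: int
    monthly_budget: float

def call_reasoning_llm(prompt: str) -> str:
    """Placeholder for a reasoning-model call (e.g. DeepSeek R1 or o3)."""
    return "[step-by-step consolidation plan]"

def plan_consolidation(ctx: CustomerContext) -> str:
    """Ask the model to reason over all contextual factors at once."""
    prompt = (
        "Given the loans, credit score, and monthly budget below, work "
        "through the options step by step and recommend a consolidation "
        "plan, including the total interest saved.\n"
        f"Loans: {ctx.loans}\n"
        f"Credit score: {ctx.credit_score}\n"
        f"Monthly budget: {ctx.monthly_budget}"
    )
    return call_reasoning_llm(prompt)

example = CustomerContext(
    loans=[{"balance": 12000, "rate": 0.19}, {"balance": 4500, "rate": 0.24}],
    credit_score=690,
    monthly_budget=450.0,
)
print(plan_consolidation(example))
```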
Additionally, CX leaders could leverage reasoning LLMs for strategic decision-making. By synthesizing vast amounts of unstructured feedback—customer reviews, support tickets, and survey responses—these models could help businesses identify underlying trends and develop more customer-centric policies.
The Future of Reasoning LLMs in CX
For reasoning LLMs to become practical in CX, they need to become faster and more cost-effective. Hybrid approaches, where traditional conversational LLMs handle routine queries while reasoning models step in for specific, high-value cases, may be the most viable path forward. Additionally, seamless integration into existing CX workflows will be crucial to unlocking their potential.
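A hybrid setup of this kind can be sketched as a simple router: routine queries go to a fast conversational model, while queries that trip a complexity check are escalated to the slower, more expensive reasoning model. The heuristic and the two model stubs below are assumptions for illustration; a real deployment would more likely use a small classifier or the conversational model's own confidence signal to decide when to escalate.

```python
# Minimal sketch of hybrid routing: cheap conversational model by default,
# reasoning model only for complex, high-value cases. The complexity
# heuristic and both model stubs are illustrative assumptions.

COMPLEX_SIGNALS = ("why does", "compare", "trade-off", "multiple accounts", "dispute")

def is_complex(query: str) -> bool:
    """Cheap heuristic stand-in for a classifier or confidence threshold."""
    q = query.lower()
    return len(q.split()) > 40 or any(signal in q for signal in COMPLEX_SIGNALS)

def conversational_model(query: str) -> str:
    return "[fast conversational answer]"      # placeholder

def reasoning_model(query: str) -> str:
    return "[slower, step-by-step analysis]"   # placeholder

def route(query: str) -> str:
    """Send the query to whichever model the complexity check selects."""
    handler = reasoning_model if is_complex(query) else conversational_model
    return handler(query)

print(route("What are your support hours?"))
print(route("Compare the trade-off between closing my old account and keeping it open."))
```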
While today’s customer service functions run efficiently on GenAI-powered (and legacy) conversational AI, reasoning models represent an intriguing frontier. They may not yet fit neatly into most CX applications, but as AI continues to evolve, they could reshape how businesses tackle complex customer interactions, offering new levels of insight and problem-solving capabilities.