Day One of Microsoft Inspire 2023, an annual event for the software giant’s myriad partners, included a formal introduction to the packaging and pricing of two Generative AI-informed products:
- Bing Chat Enterprise: A refined version of the GPT-informed chatbot introduced to users of the Bing search engine in February
- Microsoft 365 Copilot: A premium service that integrates a conversational personal assistant (co-pilot) to respond to natural language queries or instructions based on “business data in Microsoft Graph – that’s all your emails, calendar, chats, documents and more.”
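For readers who want a concrete picture of the “business data in Microsoft Graph” that Copilot draws on, the sketch below pulls a user’s recent mail and upcoming calendar items from two standard Microsoft Graph REST endpoints. It is illustrative only: the access token is a placeholder, and nothing here represents Copilot’s actual retrieval pipeline, which Microsoft has not published.

```python
# Illustrative sketch of the kind of "business data in Microsoft Graph"
# Copilot grounds on: recent mail and calendar items for the signed-in user.
# GRAPH_TOKEN is a placeholder; acquiring it (e.g., via Azure AD) is out of scope.
import requests

GRAPH_TOKEN = "<access-token-for-the-signed-in-user>"
HEADERS = {"Authorization": f"Bearer {GRAPH_TOKEN}"}
BASE = "https://graph.microsoft.com/v1.0"

# Five most recent email messages, trimmed to a few fields
messages = requests.get(
    f"{BASE}/me/messages?$top=5&$select=subject,from,receivedDateTime",
    headers=HEADERS,
).json()

# Five upcoming calendar events
events = requests.get(
    f"{BASE}/me/events?$top=5&$select=subject,start,end",
    headers=HEADERS,
).json()

for msg in messages.get("value", []):
    print(msg["subject"], msg["receivedDateTime"])
```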
Both products are offered in response to growing demand from businesses to enable employees to take maximum advantage of the Large Language Models (LLMs) developed by OpenAI in conjunction with Microsoft. Their joint debut is evidence that Microsoft will be very detail-oriented in its efforts to amortize the US$12 billion or so it has invested in OpenAI, starting with US$1 billion in 2019. Since that time, the two companies have deployed OpenAI’s models across a number of products targeting individual users in both “consumer” and “enterprise” markets, including GitHub Copilot, DALL-E 2 and ChatGPT. Today the user count approaches (and may exceed) 200 million, and those users are already figuring out how to use a chat-based interface to answer questions, draft emails, prepare presentations and the like.
This Is Not a Test (Beta or Otherwise)
Bing Chat Enterprise is officially available “in preview” starting now. It is included at no additional cost in Microsoft 365 E3, E5, Business Standard, and Business Premium. In the future, it will be available as a standalone offering for US$5/user/month in most geographic areas.
Microsoft 365 Copilot has been in service with about 600 Early Access Program customers since May and is already being put to work at KPMG, Lumen and Emirates NBD. By formalizing the US$30/user/month pricing, Microsoft signals that General Availability (GA) is not far away. Microsoft employees are also putting the service through its paces, discovering and defining how Copilot supports their everyday business needs.
The View from Product Marketing
Microsoft’s product managers have been watching user adoption closely to gain insights into the messaging and training that will be required to overcome end users’ concerns and objections. Protecting enterprise data appears to be at the top of the list. In an analyst briefing, TJ Devine, Director of Product Marketing for Microsoft 365, emphasized that these products protect all business data by making sure it doesn’t leave the enterprise. Microsoft has “no eyes on the data” and it is not saved anywhere. In addition, every new instantiation “inherits your security, compliance and privacy policies.” Issues like the right to be forgotten are baked into the approach.
At the analyst briefing, Devine was joined by Jared Andersen, Director of Product Marketing – Search and AI. Together, they described how businesses in general are “both excited and terrified” about the impact of Generative AI on their operations. They are excited to see how employees are discovering, defining and refining how Generative AI fits into their personal workflows. They are “terrified” thanks to known issues surrounding “hallucinations” and non-explainable responses, as well as concern over how enterprise and personal data may be used for training purposes.
Highlighting Differences Between “Training” and “Grounding”
Microsoft 365 Copilot is “grounded in your data,” Devine explained. It has access to content such as the emails and other documents that may be needed to prepare a presentation or draft a blog post. Neither the prompts drafted to elicit those responses nor the corpus of data embedded in them becomes training material for the foundation LLM, in this case the Azure-hosted version of GPT-3.5 or GPT-4. In drawing the distinction between training and grounding, Devine explained that “the model does not train on your input… We don’t need to… It is just a pattern recognizer” based on something like a trillion parameters.
By providing this vivid image, Microsoft should help allay the concerns that large enterprises may have about the trustworthiness of LLMs and about the need to protect customer and employee privacy.
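To make the training-versus-grounding distinction concrete, here is a minimal sketch of the general retrieval-style pattern Devine describes: documents are fetched at query time and placed into the prompt, while the model’s weights are never updated. The helper functions are hypothetical stand-ins, not Microsoft or OpenAI APIs.

```python
# Minimal sketch of "grounding" vs. "training" (illustrative only; the helpers
# below are stand-ins, not actual Microsoft Graph or Azure OpenAI calls).

def search_graph_documents(query: str, top: int = 5) -> list[dict]:
    """Stand-in for a tenant-scoped search over emails, chats and files."""
    return [{"snippet": f"Excerpt from a document matching '{query}'"}][:top]

def call_llm(prompt: str) -> str:
    """Stand-in for a call to a hosted foundation model whose weights are fixed."""
    return f"(model response to a {len(prompt)}-character grounded prompt)"

def answer_with_grounding(user_prompt: str) -> str:
    # 1. Retrieve relevant business documents at query time ("grounding").
    documents = search_graph_documents(query=user_prompt, top=5)

    # 2. Place the retrieved text into the prompt context.
    context = "\n\n".join(doc["snippet"] for doc in documents)
    grounded_prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_prompt}"
    )

    # 3. Call the model. Its weights never change: neither the prompt nor
    #    the retrieved documents become training data.
    return call_llm(grounded_prompt)

print(answer_with_grounding("Summarize yesterday's project emails"))
```

The point of the sketch is the data flow: grounding changes only what the model sees for a single response, whereas training would change the model itself, and nothing in this loop does the latter.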