Salesforce Introduces Einstein GPT Trust Layer

Generative AI offers significant productivity improvements for many areas of business. Salesforce has aggressively forged ahead with incorporating Generative AI into its suite of CRM and business productivity solutions. However, Generative AI is still immature and poses risks. As Salesforce puts it in a recent blog post: “Company leaders want to embrace generative AI, but are wary of the risks – hallucinations, toxicity, privacy, bias, and data governance concerns are creating a trust gap.”

The Einstein GPT Trust Layer, recently introduced by Salesforce, addresses these risks with a set of trust and protection services. The layer acts as a safeguard, curbing unwanted model behavior while protecting data privacy and security. By building the trust layer into the platform developers use to create functionality on top of large language models (LLMs), Salesforce aims to bridge the “trust gap” that company leaders often face when considering the adoption of Generative AI.

The Einstein GPT Trust Layer comprises several key services, each serving a specific purpose:

  • Secure data retrieval: This service protects the data the Generative AI models draw on. Encryption and access controls ensure that records are retrieved only on behalf of users permitted to see them, shielding sensitive information from unauthorized access and potential breaches.
  • Dynamic grounding: Dynamic grounding enriches each prompt with relevant, current business context at request time, so the model's output reflects the records actually in question rather than its own guesses. This improves the accuracy and relevance of AI-generated responses and reduces irrelevant or misleading output (the first sketch after this list illustrates the idea).
  • Toxicity detection: To address the risk of harmful or inappropriate content, the trust layer screens generated text for potentially offensive or harmful language before it reaches the user, mitigating the risk of inadvertently surfacing toxic output (see the second sketch below).
  • Data masking: Data masking is a crucial component of data privacy. The trust layer masks personally identifiable information (PII) and other sensitive data that may be present in prompts or in messages returned from the Generative AI models. By obfuscating sensitive data, this service supports compliance with privacy regulations and safeguards user information (see the third sketch below).
  • Zero retention: As an additional privacy measure, the trust layer adopts a zero retention policy: prompts sent to the Generative AI models are not stored or retained once a response is produced, reducing the potential for unauthorized access or data leakage.
  • Auditing: The auditing service provides transparency and accountability by logging and monitoring Generative AI usage. It gives organizations a record of the interactions between users and the models, supporting regulatory compliance and helping surface potential issues or biases (the final sketch below pairs zero retention with an audit trail).
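
To make dynamic grounding concrete, here is a minimal sketch in Python. It assumes nothing about Salesforce's actual implementation: the fetch_case() helper, the field names, and the template are all hypothetical, standing in for whatever record lookup and prompt template a real platform would provide.

```python
# Minimal illustration of dynamic grounding: the prompt template is resolved
# against live CRM data just before the LLM call, so the model works from the
# record's actual field values instead of guessing them.
PROMPT_TEMPLATE = (
    "You are a support assistant. Using only the case details below, "
    "draft a reply to the customer.\n"
    "Subject: {subject}\n"
    "Description: {description}\n"
    "Status: {status}\n"
)

def fetch_case(case_id: str) -> dict:
    """Stand-in for a CRM lookup; a real implementation would query the
    platform API and enforce the requesting user's record-level permissions
    (the secure-data-retrieval piece)."""
    return {
        "subject": "Login failure",
        "description": "User cannot sign in after password reset.",
        "status": "Open",
    }

def ground_prompt(case_id: str) -> str:
    """Merge current record data into the template at request time."""
    case = fetch_case(case_id)
    return PROMPT_TEMPLATE.format(**case)

print(ground_prompt("00001026"))
```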
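Salesforce has not published how its toxicity detection works; a common pattern is to score each draft with a moderation classifier and withhold anything above a threshold. The sketch below illustrates that gate with a deliberately trivial placeholder scorer, where a real system would call a trained classifier or moderation API:

```python
# Sketch of a toxicity gate: score the model's draft and withhold it above a
# threshold. score_toxicity() is a placeholder keyword heuristic only.
TOXICITY_THRESHOLD = 0.7

def score_toxicity(text: str) -> float:
    """Return a toxicity score in [0, 1] (placeholder heuristic)."""
    blocklist = ("idiot", "stupid", "hate")
    hits = sum(word in text.lower() for word in blocklist)
    return min(1.0, hits / 2)

def gate_response(draft: str) -> str:
    """Pass the draft through, or replace it if it scores as toxic."""
    if score_toxicity(draft) >= TOXICITY_THRESHOLD:
        return "[Response withheld: flagged by toxicity filter.]"
    return draft

print(gate_response("Happy to help with your order today."))   # passes
print(gate_response("I hate dealing with stupid requests."))   # withheld
```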
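Data masking is often implemented by substituting placeholder tokens for detected PII before the prompt leaves the trusted boundary, then re-inserting the original values into the model's response. The regex-based sketch below is purely illustrative; production systems typically rely on trained entity recognizers rather than simple patterns:

```python
import re

# Illustrative PII masking: replace detected values with tokens and keep a
# mapping so the originals can be restored in the model's response.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    """Swap PII for placeholder tokens; return masked text plus the mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values into the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask_pii("Email jane.doe@example.com or call 555-867-5309.")
print(masked)  # "Email <EMAIL_0> or call <PHONE_0>."
```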
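Finally, zero retention and auditing are two sides of the same request flow: the audit trail records that a call happened, by whom, and when, while the prompt and response themselves are passed through without being persisted. The sketch below shows one way to combine the two; llm_complete() and the logged fields are hypothetical:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Audit trail with zero retention: log who called the model, when, and a
# one-way hash of the prompt; never persist the prompt or response text.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("trust_layer.audit")

def llm_complete(prompt: str) -> str:
    """Stand-in for the actual LLM request."""
    return "Draft reply: thanks for reaching out..."

def call_llm_with_audit(user_id: str, prompt: str) -> str:
    """Send the prompt, log an audit entry, and keep no copy of the text."""
    response = llm_complete(prompt)
    audit_log.info(json.dumps({
        "user": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_chars": len(response),
    }))
    return response  # nothing about the prompt body is retained

print(call_llm_with_audit("u-042", "Summarize case 00001026 for the customer."))
```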

Credit should be given to Salesforce for proactively integrating services that enhance the reliability, trustworthiness, and ethical usage of Generative AI solutions. The Cloud CRM Giant is not alone in these efforts. Indeed, checking these boxes should be table stakes for all solution providers helping customer care organizations evaluate and experiment with potential use cases for Generative AI. The short list of use cases with immediate business impact includes call summarization, intent modeling, and sentiment analysis. Salesforce’s Einstein GPT Trust Layer represents a model for other solution providers in their quest to leverage the benefits of AI while minimizing potential pitfalls.


