The AWS Summit – New York City served as the perfect venue for the cloud-computing giant to bring its public up to speed on the many advances it is fostering in machine learning and generative AI. Chief among them was the “preview” of Agents for Amazon Bedrock, which is designed to provide soup-to-nuts orchestration of a “Bring Your Own LLM” approach to building Enterprise Intelligent Assistants, among other things. Its logic is summarized in an anonymous July 26 blog post:
“Amazon Bedrock now supports agents, a new, fully managed capability, that enables generative AI applications to complete tasks in just a few clicks – based on organization data and user input without any manual code. Agents for Amazon Bedrock orchestrate interactions between FMs [foundation models], data sources, software applications, and users and automatically execute APIs. Developers can easily integrate the agents and accelerate delivery of generative AI applications saving weeks of development effort.”
Leveraging More Than A Decade of Experience with Generative AI
By way of background, Amazon Bedrock was introduced in April 2023 as a managed service designed to accelerate development of generative AI applications. It enables enterprise developers to choose the most appropriate foundation model from a suite that includes Amazon’s own Titan, Anthropic’s Claude 2, the multilingual Jurassic-2 from AI21 Labs, Command and Embed from Cohere, and Stable Diffusion from Stability AI for image generation. Note the absence of OpenAI’s GPT-4 or Google’s Bard.
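To make the model choice concrete: from a developer’s perspective, swapping foundation models in Bedrock largely comes down to changing a model identifier and request payload behind one runtime API. The following is a minimal sketch using the boto3 SDK; the region, prompt, and model ID are illustrative and should be checked against current AWS documentation:

```python
import json
import boto3

# Bedrock exposes each foundation model behind a single runtime API;
# switching models mostly means changing the modelId and request schema.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude-style request body (illustrative prompt).
body = json.dumps({
    "prompt": "\n\nHuman: Summarize our Q2 support-call themes.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read())["completion"])
```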
AWS manages all the infrastructure required to access these LLMs securely and privately. Most importantly, it enables companies to “privately customize FMs” using their own data, including what Opus Research calls “Conversational Intelligence,” i.e., the transcriptions of actual conversations between a company’s agents and its customers. Amazon Bedrock’s tools enable developers to use this “incredibly valuable IP” in a way that is secure and private; how it is used and shared is completely under their control. We recently wrote about IBM’s Watsonx.ai, which offers a similar set of capabilities to enable an organization to fine-tune an LLM using its own data.
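In practice, that private customization takes the form of a fine-tuning job that reads training data from the customer’s own S3 bucket and produces a private copy of the model. A hedged sketch using boto3 follows; the bucket names, role ARN, base-model identifier, and hyperparameters below are placeholders, not a prescription:

```python
import boto3

# Sketch of Bedrock's "private customization" flow: training data stays in
# the customer's own S3 bucket, and the job fine-tunes a private copy of
# the base model. All names, ARNs, and hyperparameters are placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="support-transcripts-tuning",
    customModelName="acme-titan-support",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://acme-private-data/transcripts.jsonl"},
    outputDataConfig={"s3Uri": "s3://acme-private-data/custom-model-output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)
```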
Agents Make Orchestration Easier
An “agent” is a component of a computer program that acts as an intermediary between the user and a set of tools or resources. Agents connected to an LLM can understand user input in natural language and make decisions, based on that input, about what actions to perform next. Agents gain their power by accessing tools or APIs, such as Google Search, weather APIs, and many more. Depending on the user’s query, an agent determines which tools to utilize to provide the most relevant and accurate response, or to initiate a transaction. Essentially, an agent helps users by taking their requests, understanding them, and then carrying out the necessary actions on their behalf using the available tools.
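A toy sketch makes the pattern concrete. Nothing here is a real AWS or LangChain API; the tool implementations and the llm_choose_tool callable are hypothetical stand-ins for the LLM’s tool-selection step:

```python
# Toy illustration of the agent pattern described above: the LLM decides
# which tool fits the request, and the agent executes it on the user's
# behalf. All names here are hypothetical stand-ins.
TOOLS = {
    "weather": lambda city: f"72F and sunny in {city}",    # stand-in weather API
    "search":  lambda query: f"Top result for '{query}'",  # stand-in web search
}

def run_agent(user_request: str, llm_choose_tool) -> str:
    # 1. The LLM interprets the request and names a tool plus its argument.
    tool_name, argument = llm_choose_tool(user_request, list(TOOLS))
    # 2. The agent carries out the chosen action using the available tools.
    return TOOLS[tool_name](argument)
```

The point of the pattern is the division of labor: the model handles language understanding and decision-making, while deterministic tools handle the actual work.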
LangChain, the open-source framework for building applications using LLMs, is also built around the concept of agents that can be instructed to access specific tools to carry out tasks based on a user’s requests. Amazon Bedrock now offers agents within the framework of its own developer offerings.
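For comparison, here is roughly what that pattern looked like in LangChain’s classic agent API as of mid-2023, wired to a Bedrock-hosted model. This is a sketch; exact imports and agent identifiers vary by LangChain version, and AWS credentials are assumed to be configured:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import Bedrock

# LangChain's agent abstraction, pointed at a Bedrock-hosted model.
llm = Bedrock(model_id="anthropic.claude-v2")

# "llm-math" is one of LangChain's built-in tools (a calculator).
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is 15% of 2,480?")
```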
The CX/CCaaS Connection
For use cases, AWS anticipates some of the most popular applications for generative AI. These include generic “text generation,” meaning the creation of pieces of “original content” like social media posts or webpage copy. Chatbots are another anticipated use case, as is conversational search to answer questions or find information from large databases. Yet, as James Stephen points out in this article in CX Today, an AWS spokesperson explained that Agents for Amazon Bedrock do “not deliver self-service experiences for customer service use-case(s).” Rather, they serve as tools for a programmer/developer to build services that perform customer service and other multi-step tasks in response to natural language prompts.
AWS reports that its preview customers are already excited about the capability to customize generic LLMs to perform domain-specific functions that are important for differentiating their business. For some, this is a simple concept, but AWS emphasizes that it performs some heavy lifting to replicate foundation models while leaving control over how enterprise data is encrypted and used in the enterprise’s hands. As the introductory blog post explains, “Amazon Bedrock makes a separate copy of the base foundational model and trains this private copy of the model.” Enterprise data are not used to train a publicly available LLM or exposed to the global Internet.
Agents for Amazon Bedrock is a new, fully managed capability for generative AI applications. These agents enable tasks to be completed based on user input and organizational data without requiring a developer to manually code complex sets of actions. By integrating agents, developers can accelerate the delivery of generative AI applications and customize functionality while maintaining data privacy and security. Given that the same AWS infrastructure also powers Amazon Connect, the cloud giant’s contact center offering, it’s easy to see how Agents for Amazon Bedrock can serve as an important foundation for creating self-service chatbots or voicebots.
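For developers, the payoff is a single runtime call: once an agent has been configured, an application hands it a natural-language request and streams back the result. A hedged sketch using the boto3 bedrock-agent-runtime client follows; the agent, alias, and session IDs are placeholders:

```python
import boto3

# Sketch of invoking a configured Bedrock agent from application code.
# The agent and alias IDs below are placeholders, not real identifiers.
runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = runtime.invoke_agent(
    agentId="AGENT1234",       # placeholder: ID of the configured agent
    agentAliasId="ALIAS5678",  # placeholder: deployed alias of that agent
    sessionId="customer-session-001",
    inputText="Rebook my flight to the 6pm departure and email the receipt.",
)

# The agent streams its answer back as a sequence of byte chunks.
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"), end="")
```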