DARPA’s Latest Take on AI Illuminates the Course of Intelligent Assistance


Much of the core technology that fuels digital commerce, including the Internet itself, was conceived and initiated at the United States’ Defense Advanced Research Projects Agency (DARPA). Intelligent Assistance is no exception. DARPA recently released a video presentation offering its perspective on Artificial Intelligence. In the video, John Launchbury, Director of DARPA’s Information Innovation Office (I2O), provides an overview of what DARPA regards as the three waves of AI.

Interestingly, the three waves described by Launchbury map well to technical trends in the world of intelligent assistants. Before we look at each of the three waves, it’s helpful to note the four key capabilities that DARPA uses to gauge the maturity level of an AI system. The four capabilities are briefly described below:

  • Perceiving – gleaning information from the external environment
  • Learning – autonomously improving core functions
  • Abstracting – autonomously adapting to new situations and understanding context
  • Reasoning – making correct decisions about best answers based on available knowledge

First Wave – Handcrafted Knowledge
The first wave of AI systems is based on handcrafted knowledge. These systems, built by domain experts, contain rules that describe the core processes and knowledge sets of specific domains. Such first wave systems can be very effective at automating repetitive processes or at carrying out functions that require rapid analysis of many possible options. Examples that DARPA offers of effective first wave systems are automated logistics scheduling programs, chess-playing algorithms, and rule-based software such as TurboTax.

Many of the most effective customer self-service platforms in use today rely on rule-based systems. Knowledge of industry verticals is encoded in rule sets that aid the virtual agent in quickly identifying the best response to a customer inquiry. While virtual agents obviously don’t have any underlying comprehension of the question or answer, they exhibit strong “reasoning” skills, in that they can expertly navigate the rule-based scaffolding to surface the best answer.
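To make the mechanics concrete, here is a minimal sketch of a first wave, rule-based virtual agent. It is purely illustrative and not drawn from any particular product; the keyword rules and canned answers are hypothetical stand-ins for the much richer rule sets that real deployments encode.

```python
# A purely illustrative, first-wave style rule set: handcrafted keywords
# mapped to canned answers. Real systems encode far richer domain rules.
RULES = [
    ({"reset", "password"}, "You can reset your password from the Account Settings page."),
    ({"refund", "return"}, "Refunds are processed within 5-7 business days of receiving your return."),
    ({"hours", "open"}, "Our support line is open 8am-8pm EST, Monday through Friday."),
]

FALLBACK = "I'm sorry, I didn't quite get that. Let me connect you with a human agent."


def answer(inquiry: str) -> str:
    """Pick the rule whose keywords overlap most with the inquiry."""
    words = set(inquiry.lower().replace("?", "").split())
    keywords, response = max(RULES, key=lambda rule: len(rule[0] & words))
    return response if keywords & words else FALLBACK


print(answer("How do I reset my password?"))   # -> the password reset answer
print(answer("Do you ship to Canada?"))        # -> no rule matches, so the fallback
```

The agent has no comprehension of the question. Its “reasoning” is nothing more than a search over handcrafted rules, which is exactly why such systems work well inside a narrow domain and fail outside of it.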

While first wave AI systems are adept at reasoning, their success is restricted to narrowly defined domains. As Launchbury states in the video, first wave AI is ill-equipped to handle uncertainty. Another downside is that building a truly effective first wave system requires a great deal of time and expertise.

Second Wave – Statistical Learning
Second wave AI systems are those built using machine learning techniques such as neural networks. In constructing these systems, engineers create statistical models that characterize a specific domain. They then feed the model large volumes of data to “train” the algorithm, honing its ability to predict the correct result for a given input. Machine learning systems are widely used for voice and facial recognition.
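As a toy illustration of what “training” means here, the sketch below fits a tiny neural network to the XOR function using plain NumPy and gradient descent. It is a miniature of my own construction, not anything from the DARPA presentation; production second wave systems do the same thing at vastly larger scale with purpose-built frameworks.

```python
# A toy second-wave sketch: a small neural network trained by gradient
# descent to reproduce XOR. The "knowledge" ends up in the weights.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR labels

# One hidden layer of 8 sigmoid units, one sigmoid output unit.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass: the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error through both layers.
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ dp)
    b2 -= lr * dp.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ dh)
    b1 -= lr * dh.sum(axis=0, keepdims=True)

print(np.round(p.ravel(), 3))  # predictions approach [0, 1, 1, 0]
```

After enough passes over the data, the network outputs the right labels, but all of its “knowledge” lives in the learned weights. There is no account of why an answer is correct, which is precisely the limitation Launchbury highlights.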

We’ve seen intelligent assistant solutions appear on the market that utilize neural networks to help automate learning. Virtual agents powered by neural networks learn from historical customer call and chat transcripts, as well as from ongoing customer support interactions. The goal is to produce a virtual agent that is capable of learning new answers on its own.

As Launchbury points out, however, neural networks are essentially spreadsheets on steroids. When properly trained, they are extremely effective at classifying data and making predictions. However, second wave AI is limited both in its ability to understand the context of data and in its ability to apply reasoning.

First and Second Wave Systems Co-Exist
It’s important to note that first and second wave systems co-exist today. There are powerful intelligent assistant solutions based on robust rule-based systems and equally adept solutions that leverage elements of machine learning and neural networks. While second wave systems, as the name suggests, are an evolution of the technology, they are not necessarily better for all use cases. As Launchbury points out, first wave systems are actually better at reasoning than second wave AI (at least according to Launchbury’s definition of reasoning).

Third Wave – Contextual Adaptation
Launchbury describes the third wave of AI as systems capable of contextual adaptation. These are systems that “construct explanatory models for classes of real world phenomena.” What this means is that third wave systems actually exhibit an ability to comprehend what they are doing and why they make certain decisions. They are neither the blind predictive algorithms of second wave systems nor the rigid, rule-following systems of the first wave.

Launchbury provides an example. A third wave system, he suggests, will be able to cogently explain why it has identified an object in a photo as a cat. It won’t respond by saying “because that’s what my predictive algorithms have chosen as the most probable response.” Instead, the third wave AI will say something like: “because the object in the image has two ears, fur, and whiskers, and is the size of a typical house cat.”

AI systems with contextual understanding will need far fewer examples to be trained. Instead, they will be trained using contextual models. For example, a system designed to read handwriting will be trained on models of how the human hand moves over a page when writing.

Once the system has learned the model and been given a few examples, it will use the model to reason and to make decisions about what it perceives. Ideally, third wave AI will be able to use these models to abstract, giving it the ability to deal with situations for which it hasn’t been specifically trained. The handwriting reader won’t have been trained on handwriting similar to that of a certain eccentric 18th century poet, but it will be able to use its models and past experience to read the poet’s handwriting with near perfect accuracy. It will even be able to explain how it came to its conclusions.

Is Third Wave AI Achievable?
Intelligent assistants, voice assistants, chatbots and the like have all achieved remarkable results over the past couple of years. Automated speech recognition has purportedly surpassed human accuracy, and natural language understanding is maturing and being rapidly embedded into many customer-facing systems. But so far, none of this technology is truly “smart.” Today’s AI follows rules or blind algorithms and doesn’t comprehend why or how it arrives at its answers.

Is the contextually aware and adaptable third wave technology described by DARPA achievable? This may be as much a philosophical question as a technical one. In his newly released book, From Bacteria to Bach and Back: The Evolution of Minds, Daniel Dennett, the American philosopher and cognitive scientist, draws a fascinating distinction between organisms that are competent and those that have comprehension.

Ant colonies build remarkable constructs, Dennett writes. Even though ants are amazingly competent, they don’t have comprehension. Their building is guided by encoded mechanisms, not by willful, intelligent reflection or design. Dennett postulates that, just as ants and many other organisms are competent without having comprehension, AI will never achieve comprehension, no matter how incredibly competent it becomes.

As we continue to watch AI technologies evolve, will we see signs that systems are beginning to understand how to apply reasoning? What DARPA calls contextual adaptation is a tall order, but if it’s ever achieved, the implications will be enormous.



