IBM Research & MIT Roundtable: Solving AI’s Big Challenges Requires a Hybrid Approach

By Larry Greenemeier

At IBM Research’s recent “The Path to More Flexible AI” virtual roundtable, a panel of MIT and IBM experts discussed some of the biggest obstacles they face in developing artificial intelligence that can perform optimally in real-world situations.

The solution, they agreed during the July 8 panel, is to embrace an integrated AI paradigm that amplifies the strengths and compensates for the weaknesses found in different approaches, including symbolic programming and deep learning.

Watch the IBM Research & MIT roundtable.

AI and automation are largely synonymous when you talk about industrial uses, said panelist David Cox, IBM Director of the MIT-IBM Watson AI Lab. “A lot of what people mean when they talk about AI today is automation,” he added. “But automation is incredibly labor-intensive today, in a way that really just doesn’t work for the problems we want to solve.”

To leverage tools like machine learning and deep learning, “you need to have huge amounts of carefully curated and bias-balanced data to be able to use them well,” Cox said. “And for the vast majority of the problems we face, actually, we don’t have those giant rivers of data. Most of the hard problems we have in the world that we’d love to solve with automation, with AI, we don’t really have the right tools for that.”

Machine learning is good at problems that require the interpretation of signals—such as image recognition—but the training process requires a lot of data and computing power, agreed panelist Leslie Kaelbling, an MIT Professor of Computer Science and Engineering.

“For years people tried to directly solve problems such as finding faces in images, and directly engineering those solutions didn’t work at all,” Kaelbling said. “Instead, it turns out we’re much better at engineering algorithms that can take that data, and from the data derive a solution. For some problems, however, we don’t have the formulations yet that would let us learn from the amount of data we have available. So we really have to focus on learning from smaller amounts of data.”

Neuro-Symbolic and Other Hybrid Approaches

One way to find value in smaller data sets is to leverage a combination of AI approaches, the panelists agreed. Neuro-symbolic AI is one such hybrid method. Symbols were the original approach to AI, where programmers would codify knowledge, said panelist Josh Tenenbaum, an MIT Professor of Computational Cognitive Science. But that approach did not scale, he said, nor are end-to-end neural networks the answer, given the amount of data and computing power they would require.

In one common approach to neuro-symbolic, “you take a problem where your basic knowledge is expressed in symbolic terms, but you actually find a way to train a neural network to learn to guide your search through that space,” Tenenbaum said. “I wouldn’t think of it as extending deep learning but rather using deep learning—which is good at functional approximation and pattern recognition—and realizing that these hard search problems in a sense can be turned into pattern recognition and function approximation problems.”
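Tenenbaum's idea of a neural network guiding search through a symbolic space can be illustrated with a toy sketch. Here a classical best-first search explores sequences of symbolic operations, steered by a scoring function; in a real neuro-symbolic system that scorer would be a trained neural network, but a hand-written heuristic stands in for it here (the operations, target, and scorer are invented for illustration, not taken from the talk):

```python
import heapq

def score(value, target):
    """Stand-in for a learned model: lower means more promising."""
    return abs(target - value)

def guided_search(start, target, max_steps=10000):
    """Best-first search for a sequence of symbolic ops (+3, *2)
    transforming start into target, guided by score()."""
    frontier = [(score(start, target), start, [])]
    seen = {start}
    while frontier and max_steps > 0:
        max_steps -= 1
        _, value, path = heapq.heappop(frontier)
        if value == target:
            return path
        for name, result in (("+3", value + 3), ("*2", value * 2)):
            if result not in seen and result <= 4 * target:
                seen.add(result)
                heapq.heappush(
                    frontier,
                    (score(result, target), result, path + [name]))
    return None

print(guided_search(1, 22))  # a short sequence of ops reaching 22
```

The search itself is purely symbolic; only the ranking of candidates is learned, which is what turns a hard search problem into a pattern-recognition problem, as Tenenbaum describes.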

Code that explicitly represents your knowledge of a domain supports logical reasoning more powerfully than machine learning alone, Tenenbaum said. But inferences are not always strictly true or false, yes or no. In those cases, an AI must weigh the probability that an answer is one or the other, without being entirely certain.

Probabilistic reasoning over symbolic code has been another important development in the recent history of AI, Tenenbaum said. But what if you can’t write down the knowledge that supports reasoning in code? Then it could be learned using neural networks or neuro-symbolic methods, he said. One of the biggest benefits of neuro-symbolic systems is that they learn using much less data than neural networks alone require. When businesses lack large amounts of data, these systems can be trained to do one-shot learning, using symbolic knowledge and probabilistic reasoning to fill in the gaps of the data.
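A minimal sketch of what probabilistic reasoning over symbolic rules looks like: rather than returning a hard true/false answer, the system enumerates possible worlds allowed by a symbolic rule and weights them by prior probabilities. The rule and the numbers below are invented purely for illustration:

```python
from itertools import product

# Symbolic rule: rain OR a sprinkler makes the grass wet.
# Probabilistic knowledge: uncertain priors on each cause.
P_RAIN, P_SPRINKLER = 0.2, 0.4

def posterior_rain_given_wet():
    """P(rain | grass is wet), by exhaustive enumeration of worlds."""
    num = den = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        p = ((P_RAIN if rain else 1 - P_RAIN)
             * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER))
        wet = rain or sprinkler  # the symbolic rule
        if wet:
            den += p             # worlds consistent with the evidence
            if rain:
                num += p         # ... in which it also rained
    return num / den

print(round(posterior_rain_given_wet(), 3))
```

The answer is neither yes nor no but a degree of belief, which is exactly the kind of inference Tenenbaum describes symbolic code alone struggling with.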

Tenenbaum also pointed out that probabilistic programming—the synthesis of probabilistic inference and symbolic representation—is increasingly being combined with neural networks for a hybrid approach to AI. In other cases, “knowledge can be written down in code, but it’s not a human who writes it down,” he said. “There’s a field called program synthesis where you have algorithms that write little chunks of code.”

He cited work from Armando Solar-Lezama, Associate Director and COO of the MIT Computer Science & Artificial Intelligence Laboratory, who has been working to combine machine learning and probabilistic inference tools with programs that write programs. “You put all of that together, and you have a much more powerful, broad toolset that can take the power of symbolic knowledge and make it much more scalable and usable in the real world,” Tenenbaum said.
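The "programs that write programs" idea can be sketched with a tiny enumerative synthesizer: given input-output examples, it searches a small domain-specific language for the shortest program consistent with all of them. Real synthesis systems are far more sophisticated; the operations and examples here are invented for illustration:

```python
from itertools import product

# A tiny DSL of unary integer operations.
OPS = {
    "add1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_len=3):
    """Return the shortest op sequence matching all (input, output)
    example pairs, searching programs of increasing length."""
    for length in range(1, max_len + 1):
        for prog in product(OPS, repeat=length):
            def run(x, prog=prog):
                for op in prog:
                    x = OPS[op](x)
                return x
            if all(run(i) == o for i, o in examples):
                return list(prog)
    return None

# Finds a small program consistent with 2 -> 9 and 3 -> 16.
print(synthesize([(2, 9), (3, 16)]))
```

Brute-force enumeration only scales to toy problems, which is why the work Tenenbaum cites pairs synthesis with machine learning and probabilistic inference to prune the search.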

Virtual Blended AI Demonstrations

The virtual roundtable, moderated by David Schubmehl, IDC Research Director of Cognitive and AI Systems, also featured demonstrations of both MIT and MIT-IBM Watson AI Lab projects. In one demo, Kaelbling discussed how she and her team enabled a robotic arm to perform new tasks through a combination of programming and machine learning.

The robot had been programmed, for example, to pick up and put down objects, and it used machine learning to learn how to pour liquids. From that programming and learning, the arm was able to determine the conditions under which pouring would work effectively. In one case, that meant pushing a bowl from one spot on the table to another so that it was in range for the pour, something it had not been programmed or taught to do.
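The interplay Kaelbling describes, between programmed skills and learned conditions for when they work, can be sketched in a few lines. Here a stand-in for a learned model says pouring only succeeds within arm's reach, and a simple planner inserts a programmed "push" skill when the precondition fails (skill names, reach distance, and positions are all invented for illustration):

```python
REACH = 0.6  # metres; assumed arm reach for this sketch

def pour_precondition(bowl_pos):
    """Stand-in for a learned model of when pouring succeeds."""
    return abs(bowl_pos) <= REACH

def plan_pour(bowl_pos):
    """Compose known skills so the learned precondition holds."""
    plan = []
    if not pour_precondition(bowl_pos):
        # The robot was never taught to push before pouring; the
        # planner derives it from the precondition plus a known skill.
        plan.append(("push_bowl_to", REACH))
        bowl_pos = REACH
    plan.append(("pour", bowl_pos))
    return plan

print(plan_pour(1.0))  # out of range: push first, then pour
print(plan_pour(0.3))  # already in range: pour directly
```

The push is never explicitly programmed as a prelude to pouring; it emerges from combining a programmed skill with a learned condition, which is the flexibility the demo highlighted.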

Such flexibility could have real-world implications. “If a robot came into your kitchen, you’d already want it to know quite a lot,” Kaelbling said. “But you’d also like to be able to teach it some new skill that it might not have known before. You would need to take that skill and integrate it with what the robot already knows quite quickly.”

In another demo, Cox presented the new IBM Research Verifiably Safe Reinforcement Learning experiment, which combines reinforcement learning with formal symbolic verification.

“One of the things we’re working on,” Cox said, “is, are there ways to use these formal symbolic software verification methods in combination with reinforcement learning to build systems that can be verifiably safe?” The experiment’s use case was a delivery agent or drone navigating a customer’s yard to deliver a package. The point of the demo was to show how formal software verification could be combined with reinforcement learning to enable the drones to operate safely, without a lot of trial and error that would endanger people in their path.
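One common pattern for combining verification with reinforcement learning is a "shield": a symbolic safety check filters the learner's proposed actions before execution, so the agent never takes a step the checker cannot verify as safe. The grid, obstacle set, and random stand-in policy below are invented for illustration; IBM's actual experiment uses formal verification methods rather than this toy reachability check:

```python
import random

OBSTACLES = {(1, 1), (2, 3)}  # yard cells the drone must avoid
GRID = 4                      # 4x4 yard

def is_safe(state):
    """Symbolic safety check: in bounds and not over an obstacle."""
    x, y = state
    return 0 <= x < GRID and 0 <= y < GRID and state not in OBSTACLES

def shielded_step(state, policy_action):
    """Execute the policy's action only if the result is safe;
    otherwise fall back to any verifiably safe alternative."""
    moves = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
    dx, dy = moves[policy_action]
    nxt = (state[0] + dx, state[1] + dy)
    if is_safe(nxt):
        return nxt
    safe = [(state[0] + ddx, state[1] + ddy)
            for ddx, ddy in moves.values()
            if is_safe((state[0] + ddx, state[1] + ddy))]
    return random.choice(safe) if safe else state

# A random "policy" stands in for a learner; the shield guarantees
# the safety invariant holds at every step regardless.
state = (0, 0)
for _ in range(20):
    state = shielded_step(state, random.choice("NSEW"))
    assert is_safe(state)
```

Because safety is enforced outside the learner, the policy can explore freely during training without the unsafe trial and error Cox describes.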

The COVID-19 Factor

The panelists were asked how the ongoing COVID-19 pandemic has impacted AI research. In general, the pandemic introduced a lot of unforeseen challenges that have “broken a lot of models,” Cox said.

An AI system that, for example, might have been designed prior to the pandemic to better understand whether people who eat at fancy restaurants also shop at fancy grocery stores would have been upended. For a while, very few people were going to restaurants of any type. The same would be true of an algorithm designed last year to predict demand for N95 face masks in 2020. The pandemic’s unexpected and often unpredictable impact on society highlights the need for resilience in AI systems.

The pandemic shows a need for a more robust approach to understanding the world when it comes to creating AI, Tenenbaum said. That requires model building, not just large amounts of data that may or may not be available.

The pandemic has also taught the AI research community the value of virtual conferences, something rarely considered before the current travel restrictions. Even if conferences return to being large physical gatherings, the researchers agreed, virtual conferences will not go away: they have made it much easier for people around the world to access and contribute to important discussions, a change with lasting benefits for the field.