Our Research
Artificial intelligence is the study of intelligent behavior through a sharp computational lens: how can a machine do what humans do? This raises a number of very interesting questions that can be cast in mathematical language and addressed computationally. One of the key questions in AI and Machine Learning today concerns the integration of learning and reasoning. In AI, we have incredibly powerful deep learning systems that have revolutionized the field and opened the door to a number of applications, but they are not reliable. This is because they rely exclusively on data; they are deep learners but shallow understanders. In AI, we also have a number of reasoning systems or solvers that excel at reasoning and planning, but which rely on models crafted by hand. This dichotomy between learning from data and reasoning with models shows up in many contexts. Why don’t we have self-driving cars on our streets? Because the systems that learn from data are not reliable, and the models needed to make sense of unanticipated situations, like a child chasing a balloon near the road, cannot be specified by hand. A key step toward integrating learning and reasoning is thus learning, from data, the models that are needed for reasoning.
This state of affairs in AI has a lot to do with what Nobel laureate Daniel Kahneman describes as System 1 and System 2 “thinking” in his book “Thinking, Fast and Slow”. In the book, System 1 refers to “intuitive intelligence”: fast, reactive, and effortless; System 2 refers to “analytical intelligence”: slow, deliberative, and effortful. There is indeed a strong analogy between Kahneman’s Systems 1 and 2, on the one hand, and learners and reasoners in AI, on the other (Video, Paper). A crucial difference, though, is that our Systems 1 and 2 are tightly integrated, while learners and reasoners hardly talk to each other in AI. A central challenge in AI is to get learners and reasoners to inform, enhance, and complement each other. This overall goal guides the current research of the lab, which unfolds in the context of goal-directed behavior and planning (Video, Paper). The research can contribute to making AI systems that are more reliable, more transparent, and better suited to human interaction.
Some Concrete Challenges
Data-based learners, like those based on deep learning, are popular because there is plenty of data available, yet they produce black boxes that lack the flexibility, transparency, and modularity of model-based systems. Building models by hand, on the other hand, is not feasible in general. By showing how to learn meaningful, symbolic models from raw perceptions alone, the research aims to combine the benefits of data-based learners and model-based solvers in the context of planning, where representations play a key role in expressing, communicating, achieving, and recognizing goals.
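To make concrete what a symbolic planning model looks like, here is a minimal sketch, in Python, of a STRIPS-style representation: states as sets of ground atoms and actions as schemas with preconditions and effects. The Blocksworld domain and all names below are illustrative assumptions, not part of the lab's systems; learning such a model from images means recovering both the atoms and the action schemas from raw data.

```python
# A minimal, illustrative STRIPS-style model: states are sets of ground atoms,
# and actions have preconditions, add effects, and delete effects.
# The Blocksworld domain and all names here are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # atoms that must hold in the current state
    add_effects: frozenset    # atoms made true by the action
    del_effects: frozenset    # atoms made false by the action

def applicable(state, action):
    """An action is applicable when all its preconditions hold in the state."""
    return action.preconditions <= state

def apply(state, action):
    """Successor state: drop the deleted atoms, add the new ones."""
    return (state - action.del_effects) | action.add_effects

# Ground instance: pick up block A from the table.
pickup_A = Action(
    name="pickup(A)",
    preconditions=frozenset({"clear(A)", "ontable(A)", "handempty"}),
    add_effects=frozenset({"holding(A)"}),
    del_effects=frozenset({"clear(A)", "ontable(A)", "handempty"}),
)

state = frozenset({"clear(A)", "ontable(A)", "clear(B)", "ontable(B)", "handempty"})
if applicable(state, pickup_A):
    state = apply(state, pickup_A)
    print(sorted(state))  # ['clear(B)', 'holding(A)', 'ontable(B)']
```

Representations of this kind are what make goals easy to express and plans easy to compute and explain; the research question is how to obtain them from raw perceptions rather than by hand.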
The problem of representation learning for planning is largely unsolved. Deep reinforcement learning, one of the main approaches for learning how to act, has two defining characteristics: it can deal with high-dimensional perceptual spaces without requiring any prior knowledge, but it cannot reuse what it has learned in a flexible, goal-directed manner. The construction of reusable knowledge (transfer learning) is a central concern in (deep) reinforcement learning, but the semantic and conceptual gap between the low-level techniques that are used and the high-level representations that are required is too large.
Addressing this challenge requires new ideas and methods that build on planning, deep learning and reinforcement learning, logic and knowledge representation, and combinatorial optimization. Our approach is a form of top-down representation learning based on a clear separation and characterization of what is to be learned from how. The research methodology that is common in deep learning, which focuses mainly on relative performance, is not good enough here, as it does not yield a crisp understanding. At the same time, we cannot afford not to use deep learning, that is, the optimization of parametric functions (neural nets) by means of stochastic gradient descent, because it represents a very useful, versatile, and effective class of solvers.
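As a reminder of what is meant here by deep learning as a class of solvers, the following is a minimal sketch of optimizing a parametric function by stochastic gradient descent; the data and parameters are invented for illustration, and a real instance would use a neural network and a deep learning library rather than plain NumPy.

```python
# Minimal sketch of "optimizing a parametric function by stochastic gradient descent":
# fit y = w*x + b to synthetic data by following single-sample gradient estimates.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x - 0.5 + rng.normal(scale=0.1, size=200)  # hypothetical data

w, b, lr = 0.0, 0.0, 0.1
for step in range(2000):
    i = rng.integers(len(x))         # one random sample -> "stochastic"
    err = (w * x[i] + b) - y[i]      # residual of the squared loss
    w -= lr * err * x[i]             # gradient step on w
    b -= lr * err                    # gradient step on b

print(f"w ~ {w:.2f}, b ~ {b:.2f}")   # close to the true 3.0 and -0.5
```

The same recipe, scaled up to networks with millions of parameters, is the versatile solver that the learning problems listed below are meant to exploit.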
Some of the concrete open challenges in learning representations to act and plan are:
- Learning symbolic representations and their interpretation from raw perceptions (e.g., images)
- Learning hierarchical symbolic representations to enable planning at different levels of abstraction
- Learning representations of policies that generalize, i.e., that are not tied to particular state and action spaces (a hand-written example of such a generalized policy is sketched after this list)
- Learning skills that can be reused and composed in a goal-directed manner
- Learning these various representations using combinatorial and gradient-based methods (deep learning)
- Understanding the theoretical properties of the representations required
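To illustrate the generalization challenge above, here is a hedged sketch of a generalized policy: two hand-written rules for the hypothetical Blocksworld goal of placing all blocks on the table, which apply unchanged to instances with any number of blocks. In the research, such rules are to be learned rather than written by hand.

```python
# Illustrative generalized policy for the hypothetical goal "all blocks on the table":
# the same two rules apply to any Blocksworld instance, regardless of the number of blocks.
def policy(state):
    """state: a set of ground atoms such as 'on(A,B)', 'holding(A)', 'clear(A)', 'handempty'."""
    for atom in state:
        if atom.startswith("holding("):        # rule 1: if holding a block, put it on the table
            x = atom[len("holding("):-1]
            return f"putdown({x})"
    for atom in state:
        if atom.startswith("on("):             # rule 2: unstack any clear block sitting on another
            x, y = atom[len("on("):-1].split(",")
            if f"clear({x})" in state and "handempty" in state:
                return f"unstack({x},{y})"
    return None                                # no action needed: every block is on the table

print(policy({"on(A,B)", "ontable(B)", "clear(A)", "handempty"}))  # -> unstack(A,B)
```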
More details about the ERC Project Representation Learning for Acting and Planning (2020-2025) can be found in the short and long versions of the project proposal and in this AAAI 2022 paper, which serves as a one-year progress report.