Uncertainty in Robotics
This course addresses the core challenges of mobile robotics, with a focus on acting under uncertainty. We begin with the fundamentals of probability theory, which are essential for state estimation and mapping. Students then study localization techniques, including Markov and Monte Carlo localization, and tackle Simultaneous Localization and Mapping (SLAM). Finally, the course turns to decision making under uncertainty, covering Markov Decision Processes (MDPs), Partially Observable Markov Decision Processes (POMDPs), and reasoning about actions in unpredictable environments.
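To give a concrete flavour of one of the techniques listed below, the following is a minimal sketch of Monte Carlo localization (a particle filter) for a hypothetical 1D robot with a single known landmark. The corridor length, landmark position, noise models, and all function names are illustrative assumptions, not course material.

# Minimal Monte Carlo localization sketch for a hypothetical 1D robot.
# All parameters (corridor length, landmark position, noise levels) are
# illustrative assumptions chosen only for this example.
import numpy as np

rng = np.random.default_rng(0)

N = 500                     # number of particles
world_len = 10.0            # corridor length in metres
landmark = 7.0              # known landmark position (assumed map)

particles = rng.uniform(0.0, world_len, N)   # initial belief: uniform
weights = np.full(N, 1.0 / N)

def motion_update(particles, u, noise_std=0.1):
    # Propagate each particle by the commanded motion u plus Gaussian noise.
    moved = particles + u + rng.normal(0.0, noise_std, particles.size)
    return np.clip(moved, 0.0, world_len)

def measurement_update(particles, weights, z, noise_std=0.2):
    # Reweight particles by the likelihood of the range measurement z.
    expected = np.abs(landmark - particles)
    likelihood = np.exp(-0.5 * ((z - expected) / noise_std) ** 2)
    weights = weights * likelihood + 1e-300   # guard against all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    # Draw a new particle set with probability proportional to the weights.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# One filter step: move 1 m, then observe the landmark at roughly 4 m range.
particles = motion_update(particles, u=1.0)
weights = measurement_update(particles, weights, z=4.0)
particles, weights = resample(particles, weights)
print(f"estimated position: {np.average(particles, weights=weights):.2f} m")

The same predict, reweight, resample loop generalizes to planar robot poses and richer sensor models.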
Content (preliminary)
- Introduction to Mobile Robotics
- Basics of Probability Theory
- State Estimation
- Mapping
- Markov Localization
- Monte Carlo Localization
- Simultaneous Localization and Mapping (SLAM)
- Markov Decision Processes (MDPs)
- Partially Observable Markov Decision Processes (POMDPs)
- Reasoning about action under uncertainty
Recommended prior knowledge
Basic knowledge of probability theory.
Recommended reading
S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.
Additional resources
Questions?
If you have further questions, please contact Till Hofmann.