Uncertainty in Robotics

This course covers the core challenges of mobile robotics, with a focus on handling uncertainty in sensing and acting. It begins with the fundamentals of probability theory needed for state estimation and mapping. Students then study localization techniques, including Markov and Monte Carlo localization, and Simultaneous Localization and Mapping (SLAM). The course concludes with decision-making under uncertainty, covering Markov Decision Processes (MDPs), Partially Observable Markov Decision Processes (POMDPs), and reasoning about action in unpredictable environments.
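To make the localization theme concrete, the following is a minimal sketch of a single Monte Carlo localization (particle filter) update for a robot moving along a line. The Gaussian motion and sensor models, the known landmark, and all numerical parameters are illustrative assumptions, not material taken from the course.

```python
# Minimal sketch of one Monte Carlo localization (particle filter) update.
# Assumptions: a 1-D robot, Gaussian motion and range-measurement noise,
# and a single landmark at a known position. All values are illustrative.
import math
import random

NUM_PARTICLES = 1000
MOTION_NOISE = 0.1       # std. dev. of the motion model (assumed)
MEASUREMENT_NOISE = 0.2  # std. dev. of the range sensor (assumed)
LANDMARK = 5.0           # known landmark position (assumed)


def gaussian(x, mu, sigma):
    """Normal density, used as the measurement likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))


def mcl_step(particles, control, measurement):
    """One predict-weight-resample cycle of the particle filter."""
    # Prediction: propagate each particle through the noisy motion model.
    moved = [p + control + random.gauss(0.0, MOTION_NOISE) for p in particles]
    # Correction: weight each particle by the likelihood of the observed range.
    weights = [gaussian(measurement, abs(LANDMARK - p), MEASUREMENT_NOISE) for p in moved]
    # Resampling: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))


# Initial belief: uniformly somewhere left of the landmark (assumed).
particles = [random.uniform(0.0, 5.0) for _ in range(NUM_PARTICLES)]
# Process one motion command (+1.0) and one range measurement (3.0).
particles = mcl_step(particles, control=1.0, measurement=3.0)
print("estimated position:", sum(particles) / len(particles))
```

The printed mean of the particle set is only a crude point estimate; the full particle set represents the robot's belief over its position.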

Content (preliminary)

  1. Introduction to Mobile Robotics
  2. Basics of Probability Theory
  3. State Estimation
  4. Mapping
  5. Markov Localization
  6. Monte Carlo Localization
  7. Simultaneous Localization and Mapping (SLAM)
  8. Markov Decision Processes (MDPs)
  9. Partially Observable Markov Decision Processes (POMDPs)
  10. Reasoning about action under uncertainty

Prerequisites

Basic knowledge of probability theory.

Literature

S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.

Questions?

If you have further questions, please contact Till Hofmann.