Overview

This course provides an introduction to robotic systems from a computational perspective. A robot is regarded as an intelligent computer that can use sensors and act on the world. We will consider the definitional problems in robotics and look at how they are being solved in practice and by the research community. The emphasis is on algorithms, probabilistic reasoning, optimization, inference mechanisms, and behavior strategies, as opposed to electromechanical systems design. This course aims to help students improve their probabilistic modeling skills and instill the idea that a robot that explicitly accounts for its uncertainty works better than one that does not. In particular, we consider how robots can move and interact, and what various body designs look like. Topics include robotic sensors; kinematics and inverse kinematics; sensor data interpretation and sensor fusion; path planning; configuration spaces; position estimation; intelligent systems; spatial mapping; multi-agent systems; and applications.

Teaching Staff

Instructor: Professor Gregory Dudek
MC 417
Office Hours: Tuesday and Thursday 3-3:45pm, other times by appointment
Teaching Assistant: Sandeep Manjanna (contact info TBA)

Course Description

This course will broadly cover the following areas:

  • State-space representations of the robot and its environment.
  • Path planning. How to get from one place to another using deterministic and probabilistic methods, in low and high dimensional spaces.
  • Kinematics and Dynamics: how can we model robotic systems using approximate physical models that enable us to make predictions about how robots move in response to given commands?
  • Feedback Control and Planning: how can we compute the state-(in)dependent commands that can bring a robotic system from its current state to a desired state?
  • Mapping: how can we combine noisy measurements from sensors with the robot’s pose to build a map of the environment?
  • State Estimation: the state of the robot is not always directly measurable/observable. How can we determine the relative weights of multiple sensor measurements in order to form an accurate estimate of the (hidden) state?
  • Intro to the Geometry of Computer Vision: how can modeling pixel projections on an RGB camera help us infer the 3D structure of the world? How can we triangulate points seen from two cameras? How can we estimate the camera’s pose (and therefore the robot’s) while it is moving in the environment?
  • Intro to Learning for robots: how can we learn the parameters of a robot controller? How can we directly map sensor data to actions?

Syllabus

Note that the lecture timing and sequence may drift slightly as the term progresses, as a function of student interests, emerging issues, and other factors.
Each entry below gives the lecture date (where scheduled), topics, tutorial, references, and slides.
Sept 5 Introduction
Motivation, logistics, scope of the field and the course, sense-plan-act paradigm.
Quiz 0 (Introduction, Background, Expectations)
Dudek & Jenkin Ch. 1
Aggressive UAV flight
Slides: intro (part 1 of 2), history (part 2 of 2)
Sept 7 Intro to Planning
Properties, definitions, configuration space, deterministic methods.
Lavalle Ch. 4
Dudek & Jenkin Ch. 6
Planning part 1
Sept 12 Deterministic planning methods. Lavalle Ch. 8.4
Dudek & Jenkin 6.3.4
Optional: Howie Choset's notes
Planning part 1 (continued)
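
For concreteness, here is a minimal sketch of one deterministic planner (A* search on a 4-connected grid). It is an illustrative example only, not course-provided code; the grid, start, and goal are made up.

import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    frontier = [(h(start), 0, start, None)]                   # (f, g, cell, parent)
    parent, best_g = {}, {start: 0}
    while frontier:
        _, g, cur, par = heapq.heappop(frontier)
        if cur in parent:
            continue                        # already expanded with an optimal cost
        parent[cur] = par
        if cur == goal:                     # reconstruct the path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                             # no path exists

# 3x4 grid with a small wall; plan from the top-left to the bottom-right corner.
print(astar([[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 0, 0]], (0, 0), (2, 3)))
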
Sept 14 Planning
Implementation issues, navigation functions. Rapidly-exploring Random Trees (RRT), Probabilistic Roadmaps (PRM), artificial potential fields, and obstacle avoidance.
Lavalle 5.5, 5.6
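
The sampling-based planners above can be illustrated with a short sketch. The following minimal RRT in a 2D workspace uses an arbitrary step size, goal bias, and circular obstacle chosen for the example; it is not the course's reference implementation.

import math, random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5, lo=0.0, hi=10.0):
    """Grow a tree from start toward random samples; return a path if the goal is reached."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # Sample a point in the square workspace, with a small goal bias.
        q = goal if random.random() < 0.05 else (random.uniform(lo, hi), random.uniform(lo, hi))
        # Extend the nearest node one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        near, d = nodes[i], math.dist(nodes[i], q)
        new = q if d <= step else (near[0] + step * (q[0] - near[0]) / d,
                                   near[1] + step * (q[1] - near[1]) / d)
        if not is_free(new):
            continue                        # discard extensions that land in an obstacle
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= goal_tol:
            path, k = [new], i              # walk parent pointers back to the root
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# 10x10 workspace with one circular obstacle of radius 1.5 centered at (5, 5).
path = rrt((1.0, 1.0), (9.0, 9.0), lambda p: math.dist(p, (5.0, 5.0)) > 1.5)
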
Sept 19 (guest lecture) Vehicle design, kinematics, and coordinate systems
Frames of reference. Rotation representations. Homogeneous coordinates and transformations. Rigid body motion.
Special topics: emerging research guest presentations.
Tutorial: Intro to ROS. Paul Furgale: robot pose
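
As a small illustration of homogeneous coordinates and rigid-body transformations, the sketch below builds a planar (SE(2)) transform with numpy and applies it to a point; the pose and point values are arbitrary examples, not course material.

import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a planar rigid-body pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# A point at (1, 0) in the robot frame, with the robot at (2, 3) and rotated 90 degrees,
# maps to roughly (2, 4) in the world frame.
T_world_robot = se2(2.0, 3.0, np.pi / 2)
p_robot = np.array([1.0, 0.0, 1.0])    # homogeneous coordinates
p_world = T_world_robot @ p_robot
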
Sept 21 (guest lecture) Intro to dynamics
Dynamical systems and control. Examples: Dubins car, differential drive car, unicycle, pendulum, cartpole, quadcopter. Holonomic vs. non-holonomic systems.
Special topics: emerging research guest presentations.
Lavalle 13.1
Dudek & Jenkin 3.1.5,6
Kumar TED talk (again!)
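
A minimal sketch of one of the example systems above (the unicycle model), integrated with Euler steps; the velocity, turn rate, and time step are arbitrary illustrative values.

import numpy as np

def unicycle_step(state, v, omega, dt=0.05):
    """One Euler-integration step of the unicycle model; state is (x, y, heading)."""
    x, y, theta = state
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + omega * dt])

# Drive forward at 1 m/s while turning at 0.5 rad/s for 2 seconds (40 steps of 0.05 s).
state = np.zeros(3)
for _ in range(40):
    state = unicycle_step(state, v=1.0, omega=0.5)
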
Introduction to Control
Tuning, PID, advantages and drawbacks.
Quiz 1 (Planning and PID)
Tutorial: Linear algebra refresher. Optional: Astrom and Hagglund, Ch. 2
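
A textbook PID controller fits in a few lines. The gains and the toy first-order plant below are hypothetical and untuned, intended only to show the structure of the P, I, and D terms.

class PID:
    """Textbook PID controller; the gains used below are hypothetical, not tuned values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # accumulate for the I term
        derivative = (error - self.prev_error) / self.dt    # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulate a toy first-order system toward a setpoint of 1.0.
controller, x = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01), 0.0
for _ in range(500):
    x += controller.update(1.0, x) * 0.01
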
Sensors and Actuators
Observation models for the following sensors: cameras, lasers, tactile, IMU, depth, GPS, Hall-effect, encoders, RGBD. Pulse-Width Modulation.
Dudek & Jenkin 3.1.1,4, 3.2-3, 4.1-8, 4.10, 5.1.1
Optional: Mike Langer's notes
State estimation
How to compute one's position. Worst-case analysis, practical methods, multi-sensor methods. Learning-based methods.
Pieter Abbeel's notes
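
One simple multi-sensor method is inverse-variance weighting of independent measurements. The sketch below fuses two scalar readings; the sensor variances and values are chosen only for illustration.

def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two independent scalar measurements of the same state
    (the maximum-likelihood estimate under independent Gaussian noise)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)     # the fused estimate is more certain than either sensor
    return estimate, variance

# A GPS-like fix (variance 4.0) and an odometry estimate (variance 1.0), values made up.
x_hat, x_var = fuse(10.3, 4.0, 9.8, 1.0)   # pulled toward the more certain reading
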
Map Representations and Map Alignment
Occupancy grids, Octrees, Voronoi Graph, Homotopy Classes. Map alignment with known or unknown correspondences. Iterative Closest Point (ICP).
Pieter Abbeel's notes
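
As an illustration of map alignment with known correspondences, the sketch below computes the closed-form rigid transform between two point sets (the SVD step performed inside each ICP iteration). The point sets and the true transform are synthetic.

import numpy as np

def fit_rigid_transform(P, Q):
    """Closed-form best-fit rotation R and translation t with R @ P[i] + t ≈ Q[i]."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)      # cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t

# A synthetic 2D "scan" and a copy rotated 30 degrees and shifted; recover the transform.
P = np.random.rand(20, 2)
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
Q = P @ R_true.T + np.array([1.0, 2.0])
R_est, t_est = fit_rigid_transform(P, Q)
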
Occupancy Grid Mapping With Known Robot Poses
Log-odds ratio, Probabilistic dynamics and measurement models, Bayesian estimation.
Tutorial: Intro to numpy. Pieter Abbeel's notes
Probabilistic Robotics Ch. 2 and Ch. 9
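
A minimal sketch of the log-odds occupancy update, assuming a fixed inverse sensor model with hit/miss probabilities of 0.7/0.3 (arbitrary illustrative values, not from the course notes):

import numpy as np

def logodds_update(grid, cell, occupied,
                   l_occ=np.log(0.7 / 0.3), l_free=np.log(0.3 / 0.7)):
    """Bayesian occupancy update in log-odds form: add the measurement's log-odds."""
    grid[cell] += l_occ if occupied else l_free

def to_probability(grid):
    """Convert a log-odds grid back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

grid = np.zeros((50, 50))                        # log-odds 0 == probability 0.5 (unknown)
logodds_update(grid, (10, 12), occupied=True)    # the cell where a range beam ended
logodds_update(grid, (10, 11), occupied=False)   # a cell the beam passed through
p_map = to_probability(grid)                     # per-cell occupancy probabilities
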
Maximum Likelihood, Least Squares Estimation, Maximum A Posteriori Estimation
Least squares as a special case of maximum likelihood estimation on Gaussian models.
Quiz 2 (Potential fields, maps and sensor types)
Optional: Simon Prince Ch.2 and Ch. 4
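
To make the connection concrete: fitting a line by least squares gives the maximum-likelihood estimate under independent Gaussian noise. A short sketch on synthetic data (the true slope and intercept are made up for the example):

import numpy as np

# Fit y = a*x + b to synthetic noisy data (true a = 2, b = 1 are made up for the example).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.shape)

A = np.column_stack([x, np.ones_like(x)])        # design matrix
params, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimizes the sum of squared residuals
a_hat, b_hat = params                            # recovers roughly 2.0 and 1.0
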
SLAM: mapping and location estimation.
GraphSLAM. Expectation and Covariance. Geometric interpretation of the covariance matrix. Nonlinear Least Squares formulation of the Simultaneous Localization And Mapping (SLAM) problem.
Udacity Lesson 6
Probabilistic Robotics Ch. 11
Midterm
Midterm review session
The Kalman Filter
Bayes' rule on Gaussian distributions. Example of 1D Kalman Filter.
Udacity Lesson 2
Kalman Filter, Illustrated
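
A one-dimensional Kalman filter cycle fits in a few lines; the prior, measurement, and noise variances below are arbitrary illustrative numbers, not values from the course.

def kalman_1d(mu, var, z, meas_var, u=0.0, motion_var=0.0):
    """One predict-then-update cycle of a 1D Kalman filter."""
    mu, var = mu + u, var + motion_var     # predict: shift by the control, add motion noise
    k = var / (var + meas_var)             # Kalman gain (Bayes' rule for Gaussians)
    mu = mu + k * (z - mu)                 # update: move toward the measurement
    var = (1.0 - k) * var                  # the posterior is more certain than the prior
    return mu, var

# Start very uncertain (variance 100), then incorporate a measurement z = 5.0 (variance 1.0).
mu, var = kalman_1d(0.0, 100.0, z=5.0, meas_var=1.0)
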
Bayes' Filter and Kalman Filter
Kalman Filter as a special case of Bayes' Filter. Examples of 2D and 4D Kalman Filter. General prediction and update equations.
Probabilistic Robotics Ch. 2,3
Extended Kalman Filter (EKF)
Bayes' Filter and nonlinear transformations. Monte Carlo sampling vs. Linearization. EKF prediction and update equations. Examples: EKF Localization and EKF SLAM.
Cyrill Stachniss' intro to EKF
Cyrill Stachniss' intro to EKF-SLAM
Probabilistic Robotics Ch. 2,3
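
A sketch of the EKF prediction step for a unicycle-like motion model, propagating the covariance through the motion Jacobian; the noise covariance and controls are illustrative, and the measurement update is omitted for brevity.

import numpy as np

def ekf_predict(mu, Sigma, v, omega, dt, Q):
    """EKF prediction: push the mean through the nonlinear motion model
    and the covariance through its Jacobian."""
    x, y, theta = mu
    mu_pred = np.array([x + v * np.cos(theta) * dt,
                        y + v * np.sin(theta) * dt,
                        theta + omega * dt])
    G = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],    # Jacobian of the motion model
                  [0.0, 1.0,  v * np.cos(theta) * dt],    # with respect to the state,
                  [0.0, 0.0,  1.0]])                      # evaluated at the current mean
    return mu_pred, G @ Sigma @ G.T + Q

mu, Sigma = np.zeros(3), np.eye(3) * 0.01
mu, Sigma = ekf_predict(mu, Sigma, v=1.0, omega=0.2, dt=0.1, Q=np.eye(3) * 1e-3)
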
Particle Filter
Representing multimodal distributions. Particle propagation and resampling. Pathologies of particle filter.
Quiz 3 (SLAM, the KF and Bayes' rule)
Udacity Lesson 3
Particle Filter
Importance Sampling. Examples: Markov localization in a known map. FastSLAM.
Optional: Thrun's paper on PF
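
A common resampling scheme is low-variance (systematic) resampling. The sketch below applies it to a toy 1D particle set whose weights come from a made-up Gaussian likelihood.

import numpy as np

def low_variance_resample(particles, weights):
    """Systematic (low-variance) resampling: one random offset, then evenly spaced
    picks through the cumulative normalized weights."""
    n = len(particles)
    positions = (np.random.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights) / np.sum(weights)
    idx = np.minimum(np.searchsorted(cumulative, positions), n - 1)  # guard rounding
    return particles[idx]

# 100 particles in 1D weighted by a (made-up) Gaussian likelihood around z = 2.0.
particles = np.random.uniform(0.0, 10.0, size=100)
weights = np.exp(-0.5 * (particles - 2.0) ** 2)
particles = low_variance_resample(particles, weights)
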
Camera Optics and Multi-view Geometry
Pinhole cameras, lenses, perspective projection. Aperture, focal length, exposure time, depth-of-field. Structure from Motion.
Optional: James Tompkin's notes
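
The pinhole model amounts to dividing by depth and applying the intrinsics. A short sketch with hypothetical intrinsics for a 640x480 camera (the values are placeholders, not calibrated parameters):

import numpy as np

def project(points_cam, fx, fy, cx, cy):
    """Perspective projection of 3D points in the camera frame through a pinhole model."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = fx * x / z + cx        # divide by depth, then scale/shift by the intrinsics
    v = fy * y / z + cy
    return np.column_stack([u, v])

# Hypothetical intrinsics; a point 2 m ahead of the camera lands near the image center.
pixels = project(np.array([[0.1, 0.0, 2.0]]), fx=500.0, fy=500.0, cx=320.0, cy=240.0)
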
Learning-based action
Using learned perceptual models to explore or collect data.

Visual odometry and Visual SLAM
Epipolar constraints. Depth from stereo disparity for parallel cameras. Triangulation as a least-squares problem. Scale issues in visual odometry with a single camera. Visual SLAM vs. structure from motion.
Quiz 4 (Particle filtering and vision)
Optional: James Tompkin's notes on stereo and SfM.
Sanja Fidler's notes on depth from stereo
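
For parallel (rectified) cameras, depth follows directly from disparity: depth = f * B / d, with f the focal length in pixels and B the baseline. A tiny sketch with hypothetical rig parameters:

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point seen by two parallel cameras, from its pixel disparity."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700-pixel focal length, 12 cm baseline, 20-pixel disparity -> 4.2 m.
z = depth_from_disparity(20.0, 700.0, 0.12)
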
Learning robot controllers
Function Approximation. Intro to Reinforcement Learning
Model-free RL: policy gradient estimation and the cross-entropy method.
Tutorial: Markov Localization. Optional: Pieter Abbeel's policy optimization notes
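
One of the model-free approaches mentioned above, the cross-entropy method, can be sketched compactly: sample controller parameters from a Gaussian, keep the best-performing samples, and refit. The objective and its optimum below are toy placeholders, not a real robot controller.

import numpy as np

def cross_entropy_method(reward_fn, dim, iters=30, pop=50, n_elite=10):
    """Cross-entropy method: sample parameters from a Gaussian, keep the highest-reward
    samples, and refit the Gaussian to that elite set."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = np.random.randn(pop, dim) * sigma + mu
        rewards = np.array([reward_fn(s) for s in samples])
        elite = samples[np.argsort(rewards)[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Toy objective whose (made-up) optimum is at [1, -2]; CEM should converge near it.
best_params = cross_entropy_method(lambda s: -np.sum((s - np.array([1.0, -2.0])) ** 2), dim=2)
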
Intro to Reinforcement Learning (invited talk by Juan Camilo Gamboa Higuera)
Research highlights (non-examinable material). Model-based reinforcement learning. Learning to swim on the Aqua robot.
pdf, pptx
Human-Robot Interaction
Modeling human trust, and trust-aware control.
Tutorial: Assignment 4 discussion. pdf, pptx
Self-driving cars and other emerging applications
Algorithms, legal and social issues.
pdf, pptx
Review session for final exam

Assignments

The following are the expected assignment topics, outlined at the start of term as a guideline. Final assignment specifications may change.
  • Finding your way out of a maze, implemented using ROS.
  • Deterministic and probabilistic planning using variants of A* and RRTs.
  • Mapping and localization (SLAM).
  • Autonomous driving: vision-based car-like navigation using simulated cameras.

Marking scheme

  • 4 assignments worth 10% each = 40%
  • 4 quizzes worth 1.25% each = 5%
  • 1 midterm exam worth 15%
  • 1 final exam worth 40%
  • The final exam grade can replace the midterm grade if it improves the student's final mark.

Textbook

Computational Principles of Mobile Robotics, by Dudek and Jenkin. Cambridge University Press, 2010.
Selected readings from the research literature, to be distributed in class.

Supplementary reference material

  • Jorge Angeles, Fundamentals of Robotic Mechanical Systems: Theory, Methods, and Algorithms, New York: Springer, 2003.
  • Lung-Wen Tsai, Robot Analysis: The Mechanics of Serial and Parallel Manipulators, New York: Wiley, 1999.
  • J.P. Merlet, Parallel Robots, Boston, MA: Kluwer Academic Publishers, 2000.
  • Probabilistic Robotics, by Thrun, Fox, and Burgard, The MIT Press, 2005.
  • Planning Algorithms, by Steven Lavalle, Cambridge University Press, 2006.
  • Robotics, Vision, and Control, by Corke
  • Introduction to Autonomous Mobile Robots, by Siegwart, Nourbakhsh, Scaramuzza
  • (Chapters 2 and 4 from) Computer Vision: Models, Learning, and Inference, by Prince

Evaluation

The details of the course evaluation scheme and the format of some classes will depend on the enrollment, and hence will not be fixed until after the first lecture (based on attendance and the student mix in the first lecture). Evaluation will be based on three types of activity: class participation, independent work (homework/project), and a possible in-class formal presentation. Given the substantial enrollment in 2017, the in-class presentations may not be possible.

The evaluation for the course is based on a combination of assignments, the midterm exam, the final exam, and other elements as discussed in class and as posted.

Technicalities to note

Senate on January 29, 2003 approved the following resolution on academic integrity, which requires that a reminder to students be printed on every course outline:

Whereas, McGill University values academic integrity; Whereas, every term, there are new students who register for the first time at McGill and who need to be informed about academic integrity; Whereas, it is beneficial to remind returning students about academic integrity;

Be it resolved that instructors include the following statement on all course outlines:

McGill University values academic integrity. Therefore all students must understand the meaning and consequences of cheating, plagiarism and other academic offences under the Code of Student Conduct and Disciplinary Procedures (see www.mcgill.ca/integrity for more information).

Be it further resolved that failure by an instructor to include a statement about academic integrity on a course outline shall not constitute an excuse by a student for violating the Code of Student Conduct and Disciplinary Procedures.

dudek@cim.mcgill.ca