Overview

This course provides an introduction to robotic systems from a computational perspective. A robot is regarded as an intelligent computer that can use sensors and act on the world. We will consider the definitional problems in robotics and look at how they are being solved in practice and by the research community. The emphasis is on algorithms, probabilistic reasoning, optimization, inference mechanisms, and behavior strategies, as opposed to electromechanical systems design. This course aims to help students improve their probabilistic modeling skills and instill the idea that a robot that explicitly accounts for its uncertainty works better than a robot that does not. In particular, we consider how robots can move and interact and what various body designs look like, covering topics such as robotic sensors, kinematics and inverse kinematics, sensor data interpretation and sensor fusion, path planning, configuration spaces, position estimation, intelligent systems, spatial mapping, multi-agent systems, and applications.
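To make the uncertainty claim concrete, here is a minimal sketch (in Python, with a made-up corridor world and sensor accuracy) of a Bayes filter: rather than committing to a single position guess, the robot maintains a probability over every position and reweights it as evidence arrives.

    import numpy as np

    # A robot in a 1-D corridor of 10 cells keeps a belief (a probability
    # per cell) over where it is. Doors sit at cells 1, 4, and 7, and the
    # hypothetical door sensor is right 80% of the time.
    world = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0, 0])   # 1 = door
    belief = np.full(10, 0.1)                          # start fully uncertain

    def sense(belief, reading, p_hit=0.8):
        """Bayes update: reweight each cell by how well it explains the reading."""
        likelihood = np.where(world == reading, p_hit, 1 - p_hit)
        posterior = belief * likelihood
        return posterior / posterior.sum()

    belief = sense(belief, reading=1)   # the sensor reports a door
    belief = np.roll(belief, 1)         # move one cell right (wrap-around kept for simplicity)
    belief = sense(belief, reading=0)   # now it reports no door
    print(belief.round(3))              # mass concentrates on cells 2, 5, 8

Cells consistent with all the evidence keep most of the probability mass, and the robot's residual uncertainty stays explicit instead of being thrown away.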
Lecture location: Stewart Biology Building STBIO S1/3, Tuesday and Thursday 4:05 pm-5:25 pm

Teaching Staff

Instructors: Professor Gregory Dudek (dudek@cim.mcgill.ca) and Dr. Faraz Lotfi (f.lotfi@cim.mcgill.ca)
STBIO S1/3
Office Hours: Tuesday and Thursday 5:25 pm-5:55 pm in the classroom; other times by appointment
Zoom link: http://mcgill.zoom.us/my/gdudek
Teaching Assistant(s): TBD
Office Hours and location: TBD.
Zoom links will be posted on myCourses.

Course Description

This course will broadly cover the following areas:

  • State space representations of the robot and its environment.
  • Path planning: how do we get from one place to another using deterministic and probabilistic methods, in low- and high-dimensional spaces? (See the search sketch after this list.)
  • Kinematics and Dynamics: how can we model robotic systems using approximate physical models that enable us to make predictions about how robots move in response to given commands? (See the two-link arm sketch below.)
  • Feedback Control and Planning: how can we compute the state-(in)dependent commands that can bring a robotic system from its current state to a desired state? (See the proportional-control sketch below.)
  • Mapping: how can we combine noisy measurements from sensors with the robot’s pose to build a map of the environment? (See the occupancy-grid sketch below.)
  • State Estimation: the state of the robot is not always directly measurable/observable. How can we determine the relative weights of multiple sensor measurements in order to form an accurate estimate of the (hidden) state? (See the sensor-fusion sketch below.)
  • Intro to the Geometry of Computer Vision: how can modeling pixel projections on an RGB camera help us infer the 3D structure of the world? How can we triangulate points seen from two cameras? How can we estimate the camera’s pose (and therefore the robot’s) while it is moving in the environment? (See the stereo-depth sketch below.)
  • Intro to Learning for robots: how can we learn the parameters of a robot controller? How can we directly map sensor data to actions? (See the behavior-cloning sketch below.)
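As a taste of the path-planning topic, here is a minimal sketch of a deterministic planner: breadth-first search for a shortest path on a small occupancy grid. The grid, start, and goal are made up; the probabilistic and high-dimensional methods covered in the course build on the same search idea.

    from collections import deque

    # Breadth-first search on a small occupancy grid: a deterministic
    # planner that returns a shortest 4-connected path. 0 = free, 1 = obstacle.
    grid = [[0, 0, 0, 1],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]

    def bfs(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        parent = {start: None}
        frontier = deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:              # walk parents back to the start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= n[0] < rows and 0 <= n[1] < cols
                        and grid[n[0]][n[1]] == 0 and n not in parent):
                    parent[n] = cell
                    frontier.append(n)
        return None                       # goal unreachable

    print(bfs(grid, (0, 0), (3, 3)))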
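For the kinematics topic, a sketch of a hypothetical two-link planar arm: forward kinematics predicts where the end effector ends up for given joint angles, and inverse kinematics recovers one set of angles for a desired position. The link lengths are made up.

    import math

    L1, L2 = 1.0, 0.7   # made-up link lengths

    def forward_kinematics(t1, t2):
        """End-effector (x, y) for joint angles in radians (t2 relative to link 1)."""
        x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
        y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
        return x, y

    def inverse_kinematics(x, y):
        """One (elbow-down) solution via the law of cosines; assumes (x, y) is reachable."""
        c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        t2 = math.acos(c2)
        t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2), L1 + L2 * math.cos(t2))
        return t1, t2

    x, y = forward_kinematics(math.pi / 4, math.pi / 6)
    print(inverse_kinematics(x, y))   # recovers (pi/4, pi/6)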
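For the feedback-control topic, the simplest state-dependent command is a proportional controller: the command is recomputed from the current error at every step, so deviations get corrected as they arise. The gain, time step, and first-order robot model below are made up.

    kp, dt = 1.5, 0.1     # made-up gain and time step
    x, goal = 0.0, 2.0

    for _ in range(30):
        u = kp * (goal - x)   # command depends on the current state
        x += u * dt           # toy first-order model of the robot's response
    print(round(x, 3))        # has converged close to the goal (2.0)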
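For the mapping topic, a sketch of the standard occupancy-grid bookkeeping in log-odds form: each map cell accumulates evidence from noisy hit/miss readings taken at known poses. The 1-D map and sensor-model numbers are made up.

    import numpy as np

    log_odds = np.zeros(8)                            # 1-D map; 0 = fully uncertain
    l_hit, l_miss = np.log(0.7 / 0.3), np.log(0.3 / 0.7)

    # Noisy readings at known poses: (cell, whether the sensor said "occupied").
    for cell, hit in [(2, True), (2, True), (2, False), (5, False), (5, False)]:
        log_odds[cell] += l_hit if hit else l_miss    # additive evidence update

    prob = 1 - 1 / (1 + np.exp(log_odds))             # back to probabilities
    print(prob.round(2))    # cell 2 likely occupied, cell 5 likely free, rest unknown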
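For the state-estimation topic, the core of the answer to "how do we weight multiple sensors" is inverse-variance weighting, which is what the Kalman filter's gain computes. A sketch with made-up sensor noise levels:

    # Fusing two noisy measurements of the same hidden quantity: weight each
    # sensor by its inverse variance (the idea behind the Kalman gain).
    z1, var1 = 10.3, 0.5 ** 2   # e.g., a precise rangefinder (made-up numbers)
    z2, var2 = 11.0, 2.0 ** 2   # e.g., a coarse sonar

    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    estimate = w1 * z1 + (1 - w1) * z2
    fused_var = 1 / (1 / var1 + 1 / var2)
    print(round(estimate, 3), round(fused_var, 3))   # pulled toward the precise
                                                     # sensor, with lower variance
                                                     # than either sensor alone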
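For the computer-vision topic, the simplest instance of triangulation is a rectified stereo pair under the pinhole model: the same world point lands on different image columns in the two cameras, and that disparity fixes its depth. Focal length, baseline, and pixel coordinates are made up.

    f = 500.0    # focal length in pixels (made up)
    B = 0.12     # baseline between the two cameras in meters (made up)

    x_left, x_right = 320.0, 290.0   # pixel columns of the same world point
    disparity = x_left - x_right
    Z = f * B / disparity            # pinhole stereo: depth = f * B / disparity
    print(Z)                         # 2.0 meters in this example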
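For the learning topic, the most direct way to map sensor data to actions is to fit that map from demonstrations (behavior cloning). A linear least-squares sketch on synthetic "expert" data:

    import numpy as np

    rng = np.random.default_rng(0)
    sensors = rng.uniform(-1, 1, size=(100, 3))    # 100 readings, 3 features (synthetic)
    true_w = np.array([0.5, -1.2, 2.0])            # the expert's hidden policy
    actions = sensors @ true_w + rng.normal(0, 0.05, 100)   # expert actions + noise

    w, *_ = np.linalg.lstsq(sensors, actions, rcond=None)   # least-squares fit
    print(w.round(2))        # close to the hidden expert weights
    print(sensors[0] @ w)    # the learned controller acting on a reading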

Assignments

Syllabus for Fall 2023

Note that the lecture timing and sequence may drift slightly as the term progresses, as a function of student interests, emerging issues, and other factors. Dates have not yet been updated to account for the study break.
TBD