This page provides brief descriptions of some projects at the McGill Mobile Robotics Lab, as well as links to further information. Please also visit our publications page if you're looking for something you can't find here.
The projects mainly involve aspects of environment or shape understanding using sensor data.
Investigators: G. Dudek, F. Ferrie, R. Kruk, I. Christie.
We are developing tools and techniques for inferring environment structure from a network of widely separated range sensors.
Project AQUA is a joint research project between Dalhousie University, York University, and McGill University involving the development of an autonomous underwater hexapod robot. Several McGill projects relate to AQUA, including the design and development of the vehicle itself, the swimming mechanism, vision-based localization methods, and 3-D inference from underwater monocular images.
We are currently working on a recommendation system which allows users to specify not only which items they enjoyed and which they did not, but also which features were important to those judgments. We believe that knowing the reasoning behind a preference will allow the system to better predict future preferences. You can try the prototype system at http://www.recommendz.com or via this alternative link to the recommender system for movies.
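The core idea — weighting item features by a user's stated importance — can be sketched as follows. This is a minimal illustration, not the Recommendz algorithm; the feature encoding (values in [0, 1]) and the `predict_rating` helper are assumptions made for the example:

```python
import numpy as np

def predict_rating(item_features, liked_items, feature_importance):
    """Score an unseen item by its similarity to items the user enjoyed,
    weighting each feature by the importance the user stated.

    item_features: feature vector of the candidate item (values in [0, 1])
    liked_items: feature vectors of items the user enjoyed
    feature_importance: per-feature weights supplied by the user
    """
    w = np.asarray(feature_importance, dtype=float)
    w = w / w.sum()                         # normalise stated importance
    x = np.asarray(item_features, dtype=float)
    # importance-weighted similarity to each liked item, averaged
    sims = [np.dot(w, 1.0 - np.abs(x - np.asarray(l, dtype=float)))
            for l in liked_items]
    return float(np.mean(sims))
```

Features the user flagged as important dominate the score, so two items that differ only in unimportant features are treated as near-equivalent.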
We are currently working on inferring complete range maps where only video images and very limited range data are available. A mobile robot rapidly collects a set of video images and a small amount of range data from the environment, and then infers the portions of the map it did not capture directly. Our goal is to facilitate the building of 3D environment models by exploiting the fact that both video imaging and limited range sensing are ubiquitous, readily available technologies, while complete volume scanning is prohibitive on most mobile platforms.
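One simple way to combine sparse range samples with dense imagery is to assume that depth discontinuities coincide with strong intensity edges. The 1-D scanline sketch below is purely illustrative and is an assumption for this example, not the project's actual inference method:

```python
import numpy as np

def infer_dense_range(intensity, sparse_range, edge_thresh=30.0):
    """Fill in missing range values along one image scanline.

    intensity: 1-D array of pixel intensities
    sparse_range: 1-D array of range values, NaN where no sample exists
    Segments the scanline at strong intensity edges (assumed to mark depth
    breaks) and fills each segment from its known range samples.
    """
    intensity = np.asarray(intensity, dtype=float)
    rng = np.asarray(sparse_range, dtype=float).copy()
    # segment boundaries where the intensity jumps sharply
    edges = np.where(np.abs(np.diff(intensity)) > edge_thresh)[0] + 1
    bounds = [0, *edges.tolist(), len(rng)]
    for a, b in zip(bounds[:-1], bounds[1:]):
        seg = rng[a:b]                      # view into rng: writes propagate
        known = ~np.isnan(seg)
        if known.any():
            # constant-depth assumption within each image segment
            seg[~known] = seg[known].mean()
    return rng
```

A real system would use 2-D image structure and a smoother depth prior, but the sketch shows why even a handful of range samples per surface can anchor a dense map.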
Sandra Polifroni is currently doing research on interest operators and human preattentive vision. Her work uses both psychophysics and computer science to qualitatively evaluate the performance of interest operators relative to human vision.
This project addresses appearance-based recognition using Principal Components Analysis (PCA), with the added ability to account for varying backgrounds. An attention operator focuses on the object to be recognised, and PCA is performed only on the sub-windows within the object.
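The pipeline can be sketched roughly as below. This is a toy illustration with assumed function names; the attended sub-window (the crop supplied by the attention operator) is taken as given, which is what keeps the varying background out of the eigenspace:

```python
import numpy as np

def fit_pca(windows, k=2):
    """Learn a k-dimensional eigenspace from training sub-windows,
    each flattened to a vector."""
    X = np.array([w.ravel() for w in windows], dtype=float)
    mean = X.mean(axis=0)
    # principal directions via SVD of the centred training data
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def recognise(window, mean, basis, gallery):
    """Project an attended sub-window into the eigenspace and return the
    index of the nearest stored coefficient vector."""
    coeff = basis @ (window.ravel() - mean)
    dists = [np.linalg.norm(coeff - g) for g in gallery]
    return int(np.argmin(dists))
```

Because only the object sub-window is projected, pixels outside the attention window never enter the comparison.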
Several members of the mobile robotics group are assembling components of our software infrastructure into a real-time mobile robotics testbed.
Richard Unger and Francois Belair have done some interesting work in projects for their preparatory courses. The Java applets demonstrate some concepts in computational geometry and their applications to robotics. Warning: the robotics project applet is known to be incompatible with some browsers.
A distributed, device-independent mobile robot controller and simulator. It supports distributed computation and visualization and can control one or more real Nomad or RWI robots. A beta version and some additional details are available on the project page.
This project deals with the inference of environmental structure from shadow information.
This project deals with the exploration of an unknown environment using two or more robots working together. Key aspects of the problem are coordination between the robots, particularly rendezvous, and efficient decomposition of the exploration task.
This project involves shape modelling based on a combination of local curvature information at multiple scales and global superquadric surface fitting.
This research investigates the combined use of a sonar range finder and a laser range finder (QUADRIS or BIRIS) for exploring a structured indoor environment. The methodology is called "just-in-time" sensing.
We are examining techniques for the creation and management of virtual-reality analogues of the real world. This includes the automatic acquisition of images for image-based VR, as well as the automated selection of viewpoints and scenes of interest. Further information on the image acquisition system is available by following the link.
Methods for learning, encoding, detecting, and using visual landmarks for mobile robot pose estimation.
We are interested in elaborating a taxonomy for systems of multiple mobile robots. The specific issues we are focusing on are the relationships between inter-robot communication, sensing, and coordination of behaviour in the context of position estimation and exploration.
Autonomous navigation using sensory information often depends on a usable map of the environment. This work deals with the automatic creation of such maps by an autonomous agent and the minimal requirements a map must satisfy in order to be useful. One aspect of this work is the analysis of how uncertainty, either in the map or in the sensing devices, relates to the reliability and cost of navigation and path planning. Another aspect is the development of sensing strategies and behaviours that facilitate reliable self-location and map construction.
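As one concrete example of how sensor uncertainty enters map construction, a standard Bayesian log-odds update for a single occupancy-grid cell can be written as follows. The sensor-model probabilities here are illustrative values, and this representation is offered only as a common example, not necessarily the one used in this work:

```python
import numpy as np

def update_cell(logodds, hit, p_hit=0.7, p_miss=0.4):
    """Bayesian log-odds update of one occupancy-grid cell.
    hit=True means the sensor reported an obstacle in this cell;
    p_hit / p_miss encode how much the (noisy) sensor is trusted."""
    p = p_hit if hit else p_miss
    return logodds + np.log(p / (1.0 - p))

def probability(logodds):
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + np.exp(-logodds))
```

Repeated consistent readings drive the cell's probability toward 0 or 1, while a noisier sensor model (p_hit closer to 0.5) makes each reading count for less — exactly the trade-off between sensing uncertainty and map reliability described above.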
Note: Couldn't find it? What you're looking for might be on our older projects page, or among our publications.