MRL

Completed and Dormant Projects


  • Optimal Spiral Search

    Investigators: Scott Burlington, G. Dudek

    This project involved the application of spiral search techniques to mobile robot navigation and multi-agent coordination. If you are stuck in a long dark hallway and need to find the light switch as quickly as you can, you should probably choose your turning points according to f(i) = m^i, where m = 2 and i is the number of turns. This strategy scales to planar surfaces and raises many synchronization issues when applied to more than one searcher. Scott is now gainfully employed and the project is waiting for the right person to take it up again.
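
    As a sketch, the one-dimensional version of this strategy (the classic doubling search, with turn points at f(i) = m^i and m = 2 as above) can be simulated as follows:

```python
def doubling_search(target, m=2):
    """Search an infinite line from the origin for a target at an unknown
    position, reversing direction at turn points f(i) = m**i.
    Returns the total distance travelled before reaching the target."""
    travelled = 0.0
    pos = 0.0
    direction = 1          # first leg goes in the positive direction
    for i in range(64):
        turn_point = direction * (m ** i)
        if min(pos, turn_point) <= target <= max(pos, turn_point):
            return travelled + abs(target - pos)   # target lies on this leg
        travelled += abs(turn_point - pos)
        pos = turn_point
        direction = -direction                     # reverse for the next leg
    raise RuntimeError("target beyond search horizon")
```

    For a single searcher on the line, this doubling rule is known to be 9-competitive: the distance travelled never exceeds nine times the distance to a target at least one unit away.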

  • Defining Islands of Reliability for Exploration and Hybrid Topological-Metric Map Construction

    Investigators: Saul Simhon, G. Dudek.

    We are interested in the definition and detection of landmarks and local reference frames in a large-scale environment. We are examining automatic methods for generating coupled navigation and sensing algorithms that generalize across specific sensing technologies such as vision and sonar.
    These landmarks and reference frames are used to construct a hybrid topological-metric map. The representation consists of local metric maps connected together to form a graph. Each local map is a node in the graph, and the edges of the graph qualitatively describe the hierarchy and relationships of neighbouring nodes. The work is inspired by biological environment perception.
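
    As an illustration (the names and structure below are hypothetical, not the project's actual representation), such a hybrid map can be sketched as a graph whose nodes are local metric maps:

```python
from dataclasses import dataclass

@dataclass
class LocalMap:
    """A node of the hybrid map: a local metric map with its own frame."""
    name: str
    landmarks: dict        # landmark id -> (x, y) in the local frame

class HybridMap:
    """Local metric maps as nodes; qualitative relations as directed edges."""
    def __init__(self):
        self.nodes = {}
        self.edges = {}    # (from, to) -> qualitative relation label

    def add_local_map(self, lmap):
        self.nodes[lmap.name] = lmap

    def connect(self, a, b, relation):
        self.edges[(a, b)] = relation

    def neighbours(self, name):
        return [b for (a, b) in self.edges if a == name]

# a hypothetical two-place environment
hmap = HybridMap()
hmap.add_local_map(LocalMap("room_A", {"door": (2.0, 0.0), "corner": (0.0, 3.0)}))
hmap.add_local_map(LocalMap("corridor", {"door": (0.0, 0.0)}))
hmap.connect("room_A", "corridor", "adjacent, through shared door")
```

    Metric precision is confined to each node's local frame, so global consistency only needs to hold qualitatively, at the level of the graph.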

  • Probabilistic sonar understanding

    Investigators: Simon Lacroix, Gregory Dudek

  • Pose Estimation From Image Data Without Explicit Object Models

    Investigators: G. Dudek, Chi Zhang

    We consider the problem of locating a robot in an initially-unfamiliar environment from visual input. The robot is not given a map of the environment, but it does have access to a limited set of training examples, each of which specifies the video image observed when the robot is at a particular location and orientation. Such data might be acquired using dead reckoning the first time the robot entered an unfamiliar region (using some simple mechanism such as sonar to avoid collisions). In this paper, we address a specific variant of this problem for experimental and expository purposes: how to estimate a robot's orientation (pan and tilt) from sensor data.

    Performing the requisite scene reconstruction needed to construct a metric map of the environment using only video images is difficult. We avoid this by using an approach in which the robot learns to convert a set of image measurements into a representation of its pose (position and orientation). This provides a local metric description of the robot's relationship to a portion of a larger environment. A large-scale map might then be constructed from a collection of such local maps. In the case of our experiment, these maps express the statistical relationship between the image measurements and camera pose. The conversion from visual data to camera pose is implemented using a multi-layer neural network trained with backpropagation. For extended environments, a separate network can be trained for each local region. The experimental data reported in this paper for orientation information (pan and tilt) suggest that the accuracy of the technique is good while the on-line computational cost is very low.
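
    A minimal sketch of the underlying idea — regressing camera pose from image measurements with a backpropagation-trained multi-layer network — on synthetic data (the measurement model here is invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data: 4 image measurements -> (pan, tilt)
X = rng.uniform(-1.0, 1.0, size=(200, 4))
Y = np.stack([np.sin(X[:, 0] + X[:, 1]), np.cos(X[:, 2]) * X[:, 3]], axis=1)

# one hidden layer, trained with plain backpropagation
W1 = rng.normal(0.0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.2
for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)           # forward pass
    P = H @ W2 + b2                    # predicted (pan, tilt)
    err = P - Y
    loss = (err ** 2).mean()
    if epoch == 0:
        first_loss = loss
    dP = 2.0 * err / err.size          # backward pass
    dW2 = H.T @ dP; db2 = dP.sum(axis=0)
    dA = (dP @ W2.T) * (1.0 - H ** 2)  # through the tanh
    dW1 = X.T @ dA; db1 = dA.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# on-line pose estimation is then a single cheap forward pass
pose = np.tanh(X[:1] @ W1 + b1) @ W2 + b2
```

    The on-line cost is just two small matrix products, which is why the method is attractive when a full geometric reconstruction is not.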

    Related work is taking place in the context of the IRIS project (below). A recent article appears in Neural Computation and the abstract is available (externally) here.

  • Spatial abstraction and mapping

    Investigators: P. Mackenzie, G. Dudek

    This project involves the development of a formalism and methodology for making the transition from raw noisy sensor data collected by a roving robot to a map composed of object models, and finally to a simple abstract map described in terms of discrete places of interest. An important early stage of such processing is the ability to select, represent and find a discrete set of places of interest or landmarks that will make up a map. Associated problems are those of using a map to accurately localize a mobile robot and generating intelligent exploration plans to verify and elaborate a map. Click here for a compressed postscript copy of a paper on this work.

  • Spatial Mapping with Uncertain Data

    Investigator: G. Dudek

    As a sensor-based mobile robot explores an unknown environment it collects percepts about the world it is in. These percepts may be ambiguous individually, but as a collection they provide strong constraints on the topology of the environment. Appropriate exploration strategies and representations allow a limited set of possible world models to be considered as maps of the environment. The structure of the real world and the exploration method used determine the reliability of the final map and the computational and perceptual complexity of constructing it. Computational tools being used to construct a map from uncertain data range from graph-theoretic to connectionist.
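
    A toy illustration of how individually ambiguous percepts jointly constrain topology (the worlds and percept labels below are hypothetical): candidate world models are kept only if they admit a walk producing the observed label sequence.

```python
def admits(graph, obs):
    """True if some walk through the graph yields the observed label sequence."""
    def dfs(node, i):
        if graph["labels"][node] != obs[i]:
            return False
        if i == len(obs) - 1:
            return True
        return any(dfs(nxt, i + 1) for nxt in graph["edges"][node])
    return any(dfs(n, 0) for n in graph["labels"])

# two hypothetical worlds whose places look identical locally:
# "T" = T-junction percept, "D" = dead-end percept
corridor = {"labels": {0: "D", 1: "T", 2: "D"},
            "edges": {0: [1], 1: [0, 2], 2: [1]}}
triangle = {"labels": {0: "T", 1: "T", 2: "T"},
            "edges": {0: [1, 2], 1: [0, 2], 2: [0, 1]}}

observed = ["D", "T", "D"]          # the sequence the explorer actually saw
candidates = [g for g in (corridor, triangle) if admits(g, observed)]
```

    A single "T" percept is consistent with either world; the three-percept sequence eliminates the triangle.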

  • Human object recognition and shape integration

    Investigators: Gregory Dudek, Daniel Bub (Neurolinguistics, Montreal Neurological Inst.), Martin Arguin (Psychology Dept., University of Montreal)

    Computational vision is defined, to a large extent, with reference to the visual abilities of humans. In this project we are examining the relationship between the characteristics of object shape and the abilities of humans to recognize these shapes. This includes the modelling of subjects with object recognition deficits due to brain damage as well as normal subjects. Click here for a compressed postscript copy of a recent paper on this work.

  • Dynamic reasoning, navigation and sensing for mobile robots

    IRIS Project IS-5

    Investigators: Martin D. Levine, Peter Caines, Renato DeMori, Gregory Dudek, Paul Freedman (CRIM), Geoffrey Hinton (University of Toronto)

    The goal of this project is to develop both the theoretical basis and a practical instantiation of a mobile robotic system that will be able to reason about tasks, recognize objects in its environment, map its environment, understand voice commands, navigate through the environment and perform specified search tasks. This will be achieved in a dynamic environment, in that knowledge of a (possibly changing) world may be updated and the tasks themselves may be radically altered during the system's operation. Core research areas involved include perceptual modelling, control theory, neural networks, graph theory, attentive control of processing and speech understanding. Among the key capabilities intended as outcomes of this project are:
    • Integrated low-level (e.g., points and lines) and high-level (e.g., places and rooms) descriptions of the environment.
    • Ability to deal with a changing environment.
    • Ability to reason about multiple tasks and the changing environment.
    • Ability to learn about the environment and the sensor characteristics.
    • Ability to accept high-level verbal commands (with a limited lexicon and syntax) similar to those employed by humans (based on psychological data) and translate them into control actions for the robot and sensors.
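
    A minimal sketch of the last capability, assuming a hypothetical lexicon and place list: a restricted verbal command is mapped to a robot control action.

```python
# hypothetical lexicon and place list (illustrative only)
LEXICON = {"go to": "NAVIGATE", "find": "SEARCH", "stop": "HALT"}
PLACES = {"the door", "the corridor", "room 312"}

def parse_command(utterance):
    """Translate a limited-lexicon verbal command into a control action."""
    utterance = utterance.lower().strip()
    for phrase, action in LEXICON.items():
        if utterance.startswith(phrase):
            argument = utterance[len(phrase):].strip()
            if action == "HALT":
                return (action,)
            if argument in PLACES:
                return (action, argument)
            return ("REJECT", "unknown place: " + argument)
    return ("REJECT", "unknown command")
```

    Restricting both lexicon and syntax keeps the speech-understanding problem tractable while still covering the command forms people actually use.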

  • Natural language referring expressions in a person/machine dialogue

    Investigators: G. Dudek, R. DeMori, C. Pateras

    Click here for abstract

  • Reliable Vehicle Trajectory Planning

    Investigators: G. Dudek, Chi Zhang

    We are using a hybrid method for vehicle path planning that guarantees globally acceptable solutions yet has limited time and space complexity. The method combines variational techniques with other approaches.
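
    One plausible shape for such a hybrid planner (a sketch, not the project's actual algorithm): a coarse global grid search guarantees an acceptable route, and a variational, elastic-band-style refinement then shortens it. Obstacle terms are omitted from the refinement for brevity.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Coarse global stage: shortest 4-connected path on an occupancy grid."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and (nx, ny) not in prev):
                prev[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    return None

def smooth(path, alpha=0.2, iters=200):
    """Variational stage: gradient steps on a sum-of-squared-segment-lengths
    energy that shorten the path while keeping the endpoints fixed."""
    pts = [list(p) for p in path]
    for _ in range(iters):
        for i in range(1, len(pts) - 1):
            for d in (0, 1):
                grad = 2.0 * pts[i][d] - pts[i - 1][d] - pts[i + 1][d]
                pts[i][d] -= alpha * grad
    return pts

# hypothetical 5x5 occupancy grid with a wall and one gap
grid = [[0] * 5 for _ in range(5)]
for y in range(4):
    grid[2][y] = 1
coarse = bfs_path(grid, (0, 0), (4, 0))
refined = smooth(coarse)
```

    The global stage bounds the work (one pass over the grid) and rules out local minima; the variational stage only polishes a route that is already acceptable.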

  • Enhanced reality for mobile robotics

    Investigators: Kadima Lonji, G. Dudek

    This project involves the use of a synthetic scene model for teleoperation or pose estimation. Live video and synthetic model information are fused to produce a composite image.
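
    A minimal sketch of the fusion step, assuming the synthetic model has already been rendered from the camera's viewpoint; a simple alpha blend composites the two images:

```python
import numpy as np

def fuse(live, synthetic, alpha=0.6, mask=None):
    """Composite a live video frame with a rendered synthetic-model image.
    alpha weights the live frame; an optional 2D mask limits the overlay
    to the masked region and leaves the rest of the live frame untouched."""
    out = alpha * live.astype(float) + (1.0 - alpha) * synthetic.astype(float)
    if mask is not None:
        out = np.where(mask[..., None], out, live.astype(float))
    return out.astype(np.uint8)

# two hypothetical 2x2 RGB frames
live = np.full((2, 2, 3), 100, dtype=np.uint8)
model = np.full((2, 2, 3), 200, dtype=np.uint8)
composite = fuse(live, model, alpha=0.5)
```

    In the teleoperation setting the mask would typically select only the model features being overlaid (e.g. wireframe edges), so the operator's view stays mostly live video.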

  • Multi-sensor fusion for mobile robotics

    Investigators: MRL group members

    Click here for abstract (with picture)

  • A Flexible Behavioural Architecture for Mobile Robot Navigation

    Investigators: J. Zelek, M. D. Levine

    The intention of this study is to design an architecture in which the behavioural control strategy is flexible, generalizable, and extendable. The component dedicated to behavioural activities should be able to attempt tasks with or without a reasoning module. We are investigating 2D navigational tasks for a mobile robot possessing sonar sensors and a controllable TV camera mounted on a pan-tilt head. The major aspects of our proposed behavioural architecture are as follows:
    • A natural language lexicon is used to represent spatial information and to define task commands. The lexicon serves as a language for both internal communications and user-specified commands. The task is to go to a location in space, either known or determined by locating a specific object.
    • An extension of a formalism referred to as teleo-reactive (T-R) programs (Nilsson:94) is used for specifying behavioural control. The extensions of this approach deal with real-time resource limitations and constraints.
    Some movies of this project in action can be viewed here:
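
    Nilsson's T-R formalism itself is easy to sketch: an ordered list of condition-action rules, continuously re-evaluated, with the first satisfied condition firing (the rules below are hypothetical):

```python
def tr_step(rules, state):
    """One cycle of a teleo-reactive program: scan the ordered rules and
    fire the action of the first condition that holds in the current state."""
    for condition, action in rules:
        if condition(state):
            return action(state)
    raise RuntimeError("no applicable rule")

# hypothetical go-to-location behaviour
rules = [
    (lambda s: s["pos"] == s["goal"],  lambda s: "stop"),
    (lambda s: not s["obstacle"],      lambda s: "move_forward"),
    (lambda s: True,                   lambda s: "turn"),  # default: avoid
]
```

    Because the rule list is re-scanned every cycle, the program reacts immediately when the world changes, which is what makes T-R control attractive for navigation without a full reasoning module.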