
Sensing for Mobile Robot Navigation

Authors: M.D. Levine, S.W. Zucker, P. MacKenzie, F. Gauthier, M. El-Gamal, F.P. Ferrie, G. Dudek, D. Jones

Investigator username: levine

Category: perception

Subcategory: sensor and processor design

To navigate in complex environments, autonomous mobile robots (AMRs) must accomplish two functions using their sensors: identifying known landmarks and avoiding obstacles. In most cases, these tasks can be expressed within a three-dimensional geometric framework. It follows that range-imaging sensors are the most appropriate sensors for this kind of application.

The navigation problems we are interested in solving in this context are those for which no trivially recognizable landmark is available. The level of difficulty is analogous to that experienced when driving a car to a remote location using only coarse directions for getting there.

The range-sensing system we are developing for these experiments is called "QUADRIS" because it uses two BIRIS sensors, a range-sensing technology developed at the Institute for Information Technology of the National Research Council in Ottawa. A single BIRIS sensor extracts two images of the scene from a single camera; the offset between these images depends on the distance of the objects in view, allowing range to be recovered by stereo triangulation. Our goal is to improve on the basic BIRIS system by supplementing it with additional active, as well as passive, stereo functionalities. This should give our system better resolution and longer range than can be achieved with BIRIS alone.
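The offset-to-range relationship above can be sketched with the standard stereo triangulation formula. This is an illustrative model only, not the QUADRIS implementation; the symbols (effective focal length in pixels, baseline between the two views, and image offset) are assumptions for the sake of the example.

```python
def range_from_offset(focal_px: float, baseline_m: float, offset_px: float) -> float:
    """Estimate range using the stereo triangulation model z = f * b / d.

    Illustrative sketch only (not the BIRIS/QUADRIS implementation):
    - focal_px: assumed effective focal length, in pixels
    - baseline_m: assumed baseline between the two views, in meters
    - offset_px: measured offset between the two images, in pixels
    Larger offsets correspond to closer objects.
    """
    if offset_px <= 0:
        raise ValueError("offset must be positive for a finite range")
    return focal_px * baseline_m / offset_px

# Hypothetical numbers: f = 800 px, b = 0.1 m, offset = 40 px -> 2.0 m
print(range_from_offset(800.0, 0.1, 40.0))
```

Note how range resolution degrades with distance: at long range the offset is small, so a one-pixel measurement error causes a large depth error, which motivates supplementing BIRIS with additional stereo capabilities for the 5-meter localization goal.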

QUADRIS is being designed to permit complex object recognition at distances of 1 to 2 meters and landmark/object localization at distances of up to 5 meters. It is hoped that this will permit the development and testing of sophisticated navigation algorithms operating in indoor environments. See Figure 14.

