Authors: F. Ferrie
Investigator username: ferrie
Subcategory: active perception
The projects described in the following section are part of a research effort investigating how to build three-dimensional descriptions of shape from information collected by multiple sensors, at different times, and at different positions within the environment of a robot. The principal objectives of this work are to understand: i) how basic inferences about shape (e.g. surfaces) are drawn from sensor data, ii) how the geometric structure of an object is encoded in its surfaces, iii) how generic models (e.g. volumetric, part-oriented) are computed from shape inferences and surface measurements, and iv) how an autonomous agent can build a description of a scene based on such models using only general-purpose information. The resulting models are subsequently used for object manipulation and recognition.
A key element of this research is that it addresses the problem of determining how an autonomous agent (e.g. a mobile sensor) uses the information collected or inferred at one location to plan subsequent trajectories for gathering further information. By understanding exactly how inferences at different levels of abstraction are perturbed by measurements, one can prescribe where (new vantage points) and how (specific sensors and modalities) to make additional measurements so that overall ambiguity is reduced. This is somewhat related to the idea of active perception, but at a more global level.
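The view-planning idea above can be sketched in code. The following is a minimal illustrative example, not the authors' implementation: it assumes a toy scene of surface patches, each carrying a scalar variance, and scores hypothetical candidate viewpoints by how much an idealized measurement (a scalar Kalman-style variance update) would reduce a simple entropy-like ambiguity measure. All names (`ambiguity`, `next_best_view`, the patch and viewpoint sets) are invented for illustration.

```python
import math

def ambiguity(variances):
    """Total ambiguity of the current model: sum of log-variances
    (a Gaussian-entropy-style surrogate, up to constants)."""
    return sum(math.log(v) for v in variances)

def predict_after_view(variances, observed, noise_var):
    """Predicted per-patch variances after measuring the patches in
    `observed` from a candidate viewpoint (scalar Kalman update)."""
    updated = list(variances)
    for i in observed:
        p, r = updated[i], noise_var
        updated[i] = p * r / (p + r)  # posterior variance after one measurement
    return updated

def next_best_view(variances, candidates, noise_var=0.05):
    """Return the candidate viewpoint whose predicted measurements
    most reduce the total ambiguity (greedy one-step planning)."""
    def gain(observed):
        return ambiguity(variances) - ambiguity(
            predict_after_view(variances, observed, noise_var))
    return max(candidates, key=lambda c: gain(candidates[c]))

# Hypothetical scene: three surface patches with current variances;
# each candidate viewpoint observes a different subset of patches.
variances = [1.0, 0.5, 2.0]
candidates = {"front": [0], "side": [1], "top": [0, 2]}
print(next_best_view(variances, candidates))  # prints "top"
```

The greedy one-step criterion is only a sketch; the research described here reasons about perturbations of inferences at several levels of abstraction, not just per-patch variances.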
We have completed the first implementation of our autonomous exploration system and have successfully demonstrated the automatic generation of a complete scene description from a sequence of exploratory probes. Work is currently in progress to extend this research to other sensor modalities and to adapt the system for use on a mobile robot.