Friedman, Daniel J.; Clark, James J. (Harvard Univ.). Publication: Proc. SPIE Vol. 1614, pp. 99-110, Optics, Illumination, and Image Sensing for Machine Vision VI, Donald J. Svetkoff, Ed.
Publication Date: 3/1992
A CMOS circuit has been developed that integrates sensors with processing circuitry to implement autonomous robot perception and control functions. The sensors are an array of photodetectors, and the processing circuitry analyzes the array data to extract a basic set of sensory primitives. In addition, the processing circuitry provides a low-resolution determination of the location of any brightness edges crossing the array. Ultimately, this sensor-processor circuitry will be used as part of an overall integrated sensorimotor system for autonomous robots. In the complete system, individual sensorimotor units will produce motion requests for the robot as a whole, and an operating system, serving in part as a motion request handler, will arbitrate among the suggested motions. The nature of the motion requests will depend both on sensor input and on the current goals of the robot. Ideally, the entire set of sensors, the processing circuitry, and the operating system will reside on a single VLSI chip. The current chip achieves many of the objectives of the complete integrated sensorimotor system: it acquires sensory information, manipulates that data, and ultimately provides a digital output signal set that could serve as a motor signal set. Much of the on-chip processing is done by sensory primitive modules that calculate spatial convolutions of the sensor array data. The convolution kernels implemented were chosen primarily for their usefulness in solving low-level vision problems. Specific kernels on the current chip include discrete approximations to the x-direction first derivative operator, the y-direction first derivative operator, and the Laplacian operator. The spatial convolution function is achieved using current-mode analog signal processing techniques.
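The chip computes these convolutions with current-mode analog circuits; the abstract does not give the exact kernel coefficients. As a purely illustrative digital sketch, the following Python/NumPy snippet applies conventional discrete approximations of the same three operators (central-difference x and y derivatives, 5-point Laplacian) to a small synthetic image with a vertical brightness edge:

```python
import numpy as np

# Conventional discrete kernels for the three operators named in the
# abstract; the coefficients used on the chip itself are an assumption.
KX = np.array([[-1.0, 0.0, 1.0]])              # x-direction first derivative
KY = KX.T                                       # y-direction first derivative
KLAP = np.array([[0.0,  1.0, 0.0],
                 [1.0, -4.0, 1.0],
                 [0.0,  1.0, 0.0]])             # 5-point discrete Laplacian

def convolve2d(image, kernel):
    """Valid-mode 2D sliding-window sum (cross-correlation)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Synthetic 8x8 image: dark on the left, bright from column 4 onward,
# i.e. a single vertical brightness edge crossing the array.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

gx = convolve2d(img, KX)       # responds strongly along the vertical edge
gy = convolve2d(img, KY)       # zero everywhere: no horizontal edges
lap = convolve2d(img, KLAP)    # sign change (zero-crossing) at the edge
```

The x-derivative output is nonzero only in the columns straddling the edge, the y-derivative is identically zero, and the Laplacian changes sign across the edge, which is the classic cue for edge localization.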
The output of the spatial convolution modules feeds a higher-level module that generates an estimate of the location of brightness edges crossing the array. This location estimate, which takes the form of a set of digital signals, can be readily translated into a (motor-system-dependent) motion request format, if it cannot be used directly for this purpose. Location estimation, although it is the only higher-level function implemented on the current chip, is just one example of a useful sensory primitive-based function. Additional higher-level modules could be used to implement alternate functions that estimate other important environmental properties.
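The abstract does not describe how the higher-level module derives its digital location signals, so the following is only a hypothetical digital analogue: it quantizes the column of the peak x-derivative response into a small n-bit code, mimicking a low-resolution edge-location output.

```python
import numpy as np

def edge_location_code(gx_row, n_bits=3):
    """Quantize the column of the peak |d/dx| response into an n-bit code.

    A hypothetical digital stand-in for the chip's low-resolution
    edge-location signal set; the actual on-chip scheme is not specified
    in the abstract.
    """
    col = int(np.argmax(np.abs(gx_row)))        # column of peak gradient
    n_cols = len(gx_row)
    return (col * (2 ** n_bits)) // n_cols      # map to the range 0 .. 2^n - 1

# Example: a 16-sample derivative profile whose peak sits at column 11.
profile = np.zeros(16)
profile[11] = 5.0
code = edge_location_code(profile)  # 3-bit code locating the edge
```

Such a compact digital code could then be mapped into a motion request format by downstream sensorimotor logic, as the abstract suggests.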