G. Sela, M.D. Levine

Real-time computer vision systems are burdened by extremely large amounts of data that must be processed in a small amount of time. To this end, an attentional process allowing computational resources to be concentrated on salient regions in an image would allow scenes to be processed faster and more efficiently. Further data reduction can be achieved through the use of sensors with nonuniform sampling resolutions. Such sensors, modelled after the primate visual system, incorporate a central high-resolution foveal region surrounded by a much coarser peripheral region whose resolution decreases with distance from the centre. For these sensors, an attentional mechanism is necessary to ensure that the interesting parts of a scene fall within the fovea and are thus analysed at the highest resolution possible. This research involves the development and implementation of a real-time, general-purpose, context-free algorithm to determine interest points in a scene. The objective is to continuously position a robot-mounted, foveated sensor so that the interesting visual areas lie within the foveal region. Based on psychophysical experiments on human gaze fixation, the algorithm models interest points as the intersections of lines of symmetry between edges in an image. A novel, real-time method of computing these interest points is achieved by adopting a symmetry measure based on the loci of centres of cocircular edges. Computation of these interest points is described for both uniformly mapped foveal images and non-uniform, log-polar peripheral images. The algorithm has been implemented on a parallel network of Texas Instruments TMS320C40 (C40) processors. By processing foveal and peripheral data in parallel, this configuration allows the algorithm to perform in near real-time on images covering a very large field of view.
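To make the symmetry measure concrete: two edge elements (points with tangent orientations) are cocircular when they lie on a common circle whose tangents match both edge orientations; a candidate centre for that circle lies where the two edge normals intersect. The following is a minimal NumPy sketch of that geometric construction, not the thesis implementation; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def normal_intersection(p1, theta1, p2, theta2):
    """Candidate circle centre for two edge elements.

    Each edge element is a point with a tangent orientation theta
    (radians). The centre of a circle tangent to both edges lies
    where the two edge normals intersect; the pair is cocircular
    when both points are equidistant from that intersection.
    Returns None when the normals are parallel (no unique centre).
    Illustrative sketch only; not the thesis code.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    # Normal to a tangent direction (cos t, sin t) is (-sin t, cos t).
    n1 = np.array([-np.sin(theta1), np.cos(theta1)])
    n2 = np.array([-np.sin(theta2), np.cos(theta2)])
    # Solve p1 + s*n1 = p2 + t*n2 for the parameters s, t.
    A = np.column_stack([n1, -n2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # parallel normals: edges are parallel
    s, _ = np.linalg.solve(A, p2 - p1)
    return p1 + s * n1
```

In an accumulator-based scheme, such centre candidates from many edge pairs can be voted into a map whose peaks mark symmetry loci, and hence interest points.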
Experimental results have shown that the algorithm has a variety of applications beyond visual attention for autonomous robots. For example, its stability over large changes in object scale, orientation and position makes it applicable to object tracking and recognition tasks. In addition, the algorithm is particularly well suited to locating human facial features over a large range of scales and poses in real-world situations, making it suitable as a fast front-end processor for face recognition systems.