This research addresses the problem of identifying objects illuminated by daylight using the color information in digital images. The main difficulty lies in the wide variation of outdoor illumination, which depends on weather conditions and time of day and significantly alters the camera's color response to a single given object. This poses a serious problem if color is to serve as a consistent means of identification, independent of these variations; it is the well-known color constancy problem. Physics-based vision approaches have been applied to this problem with some success, but generally in the context of controlled or known illumination. For daylight, learning approaches have by far been preferred, and so far no systematic attempt has been made to develop a physics-based method relying on the color formation equations. Outdoor illumination obviously varies, but the question of whether this variation can be modeled appears to have been overlooked in the computer vision literature. This does not do justice to the considerable body of work on characterizing daylight, culminating in the semi-empirical model developed by Judd et al.

This project consists of two parts. The first is a model that predicts an object's color under daylight from the color formation equations and the empirical model of Judd et al. With this model one can predict regions in color space corresponding to measurements made by a specific television camera. The second part is a learning component that refines these initial predictions on the basis of a model determined by a training procedure. The main contributions of this work are, first, to provide a solid theoretical understanding of color formation under daylight and, second, to use it to arrive at a hybrid method reconciling the strengths of both learning and modeling.
Finally, the fact that the method can be made autonomous constitutes a definite advantage over other learning approaches found in the literature.
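As a hedged sketch of the ingredients named above (the specific equations used in this work are not reproduced here), the Judd et al. daylight model and the color formation equations take the following standard forms; the symbols $S_0, S_1, S_2, M_1, M_2, R, Q_k$ are the conventional ones from the colorimetry literature, not notation taken from this text:

```latex
% Judd et al. daylight model: any daylight spectral power distribution
% S(\lambda) is approximated by a mean daylight spectrum S_0 plus two
% characteristic basis functions S_1, S_2 with scalar weights M_1, M_2.
S(\lambda) = S_0(\lambda) + M_1\, S_1(\lambda) + M_2\, S_2(\lambda)

% Color formation: the response \rho_k of camera channel k (e.g. R, G, B)
% to a surface with spectral reflectance R(\lambda), viewed under the
% illuminant S(\lambda), where Q_k(\lambda) is the channel's sensitivity.
\rho_k = \int_{\lambda} S(\lambda)\, R(\lambda)\, Q_k(\lambda)\, d\lambda
```

Combining the two, the weights $(M_1, M_2)$ become the only illumination-dependent parameters, which is what makes it possible to predict a bounded region in the camera's color space for a given object as daylight varies.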