We present an approach to vision-based mobile robot localization that operates even without an a priori pose estimate. This is accomplished by learning a set of visual features called image-domain landmarks. The landmark learning mechanism is designed to be applicable to a wide range of environments. Each landmark is detected as a local extremum of a measure of uniqueness and represented by an appearance-based encoding. Localization is performed by matching observed landmarks to learned prototypes and generating an independent position estimate for each match. These independent estimates are then combined to obtain a final position estimate with an associated uncertainty. Quantitative experimental evidence demonstrates that accurate pose estimates can be obtained despite changes to the environment.
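To make the final combination step concrete, the following is a minimal sketch of one common way to fuse independent position estimates that each carry an uncertainty: inverse-covariance (information) weighting of Gaussian estimates. This is an illustration under assumptions, not necessarily the combination rule used in the paper; the function name `fuse_estimates` and the example numbers are hypothetical.

```python
# Illustrative sketch only: assumes each landmark match yields an independent
# 2D position estimate with a Gaussian covariance, fused by inverse-covariance
# weighting. The paper's actual combination rule may differ.
import numpy as np

def fuse_estimates(estimates):
    """Combine independent (mean, covariance) position estimates.

    estimates: list of (mean, cov) pairs, where mean has shape (2,) and
    cov is a 2x2 covariance matrix. Returns the fused mean and covariance.
    """
    info = np.zeros((2, 2))   # accumulated information (inverse covariance)
    info_mean = np.zeros(2)   # accumulated information-weighted mean
    for mean, cov in estimates:
        cov_inv = np.linalg.inv(cov)
        info += cov_inv
        info_mean += cov_inv @ mean
    fused_cov = np.linalg.inv(info)       # smaller than any single covariance
    fused_mean = fused_cov @ info_mean    # precision-weighted average
    return fused_mean, fused_cov

# Hypothetical per-landmark estimates (positions in metres)
estimates = [
    (np.array([1.02, 0.48]), np.diag([0.04, 0.09])),
    (np.array([0.95, 0.52]), np.diag([0.10, 0.05])),
    (np.array([1.10, 0.45]), np.diag([0.06, 0.06])),
]
pose, cov = fuse_estimates(estimates)
print("fused position:", pose)
print("fused covariance:\n", cov)
```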