Inferring Differential Structure from Defocused Images

We consider the problem of estimating depth, surface normals, and principal curvatures and their directions at every pixel from two defocused images. The existing literature on depth from defocus (DFD) considers only depth estimation, while surface inference methods usually operate on volumetric or point-cloud data. In this project we explore how these two independent techniques perform when combined. Since depth from defocus operates on single-perspective images, we ensure that the single-perspective visibility property is maintained throughout the process. Our approach is to estimate depth using standard DFD methods, fit a quadric at every pixel, and then iteratively refine the local quadrics to infer scene surface properties at every pixel in the image. For the refinement step we adapt Sander and Zucker's relaxation-labelling approach to our problem. The proposed approach is evaluated quantitatively on synthetic data and qualitatively on real defocused images.
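To make the "fit a quadric at every pixel" step concrete, the sketch below shows one way it could look: given a DFD depth map, least-squares fit a local Monge patch z ≈ ax² + bxy + cy² + dx + ey + f in a window around each pixel, then read the normal and principal curvatures off the fitted coefficients. This is a hedged illustration, not the project's implementation: `fit_local_quadrics`, the window half-width `half`, and the plain unweighted fit are all hypothetical choices, and the subsequent relaxation-labelling refinement (and the extraction of principal directions, which are eigenvectors of the shape operator) is omitted.

```python
# Minimal sketch, assuming a per-pixel quadric fit to a DFD depth map.
# Hypothetical helper; not the authors' code.
import numpy as np

def fit_local_quadrics(depth, half=2):
    """For each pixel, fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a
    (2*half+1)^2 window of the depth map, and return per-pixel surface
    normals and principal curvatures (k1 >= k2). Border pixels are skipped."""
    h, w = depth.shape
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Design matrix for the quadric, shared by every window.
    A = np.stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)],
                 axis=-1).reshape(-1, 6).astype(float)
    normals = np.zeros((h, w, 3))
    k1 = np.zeros((h, w))
    k2 = np.zeros((h, w))
    for i in range(half, h - half):
        for j in range(half, w - half):
            z = depth[i - half:i + half + 1, j - half:j + half + 1].ravel()
            a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
            # Derivatives of the Monge patch at the window centre.
            fx, fy, fxx, fxy, fyy = d, e, 2 * a, b, 2 * c
            g = np.sqrt(1 + fx**2 + fy**2)
            normals[i, j] = np.array([-fx, -fy, 1.0]) / g
            # First (E, F, G) and second (L, M, N) fundamental forms.
            E, F, G = 1 + fx**2, fx * fy, 1 + fy**2
            L, M, N = fxx / g, fxy / g, fyy / g
            # Gaussian (K) and mean (H) curvature, then principal curvatures.
            K = (L * N - M**2) / (E * G - F**2)
            H = (E * N - 2 * F * M + G * L) / (2 * (E * G - F**2))
            disc = np.sqrt(max(H**2 - K, 0.0))
            k1[i, j], k2[i, j] = H + disc, H - disc
    return normals, k1, k2
```

In the full pipeline these independent per-window fits would serve only as an initialization; neighbouring quadrics are then made mutually consistent by the iterative relaxation-labelling refinement described above.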

Paper

Slides