Bypassing Geometry Acquisition: Depth Edges for Image Analysis, Rendering, and Interaction

Rogerio Feris, Matthew Turk

(In cooperation with Ramesh Raskar, MERL)


We describe a framework for capturing depth discontinuities (also known as depth edges) in real-world scenes, based on the variation of imaging parameters, and demonstrate the usefulness of our methods in image analysis, rendering, and interactive applications.


Sharp discontinuities in depth, or depth edges, are directly related to the 3D scene geometry and provide extremely important low-level features for image understanding, since they tend to outline the boundaries of objects in the scene. In fact, they comprise one of the four components in the well-known 2 1/2-D sketch of Marr's computational vision model. Reliable detection of depth edges clearly facilitates segmentation, establishes depth-order relations, and provides valuable features for visual recognition, tracking, and 3D reconstruction. It can also be used for camera control (to help reveal new surfaces) and for non-photorealistic rendering.

Most previous approaches treat depth discontinuities as an annoyance rather than as a positive source of information. The reason is that the majority of 3D reconstruction methods produce inaccurate results near depth discontinuities, due to occlusions and the violation of smoothness constraints. Recently, steady progress has been made in discontinuity-preserving stereo matching, mainly with global optimization algorithms based on belief propagation or graph cuts. However, these methods fail to capture depth edges associated with sufficiently small changes in depth. Moreover, obtaining clean, non-jagged contours along shape boundaries remains a challenging problem even for methods that rely on more expensive hardware.

In this project, we propose a novel framework for reliable extraction of depth edges. As part of this framework, we describe a method that is based on a simple and inexpensive modification of the capture setup: a multi-flash camera is used with flashes strategically positioned to cast shadows along depth discontinuities, allowing efficient and accurate shape extraction.
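The core idea can be sketched in a few lines of code. The following is a simplified illustration, not the published implementation: it assumes the flash images are already captured and registered, computes a shadow-free "max composite" across the flash images, and marks depth edges where the per-flash ratio image drops sharply in the shadow direction of that flash (a stand-in for the full epipolar traversal described in the SIGGRAPH 2004 paper). The function name, the fixed step direction per flash, and the threshold are illustrative choices.

```python
import numpy as np

def detect_depth_edges(flash_images, directions, eps=1e-6, thresh=0.3):
    """Simplified multi-flash depth edge detection.

    flash_images: list of 2D float arrays, one image per flash position.
    directions:   per-image (dy, dx) step pointing away from that flash,
                  i.e. the direction in which the flash casts shadows.
    Returns a boolean map that is True at detected depth edges.
    """
    stack = np.stack(flash_images)          # shape (n_flashes, H, W)
    i_max = stack.max(axis=0)               # max composite: roughly shadow-free
    edges = np.zeros(stack.shape[1:], dtype=bool)
    for img, (dy, dx) in zip(stack, directions):
        ratio = img / (i_max + eps)         # ~1 where lit, small inside shadow
        # Value at the neighboring pixel one step along the shadow direction.
        ahead = np.roll(ratio, shift=(-dy, -dx), axis=(0, 1))
        # A sharp drop from lit to shadow marks the occluding depth edge;
        # the far end of the shadow rises back up and is not marked.
        edges |= (ratio - ahead) > thresh
    return edges
```

Because each shadow is attached to the silhouette that casts it, only the transition into shadow (adjacent to the occluding contour) is marked, while texture edges, which appear identically in all ratio images, produce no drop and are ignored.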

We demonstrate the usefulness of our techniques in a variety of applications spanning computer vision, computer graphics, and human-computer interaction. Examples include methods based on depth edges for non-photorealistic rendering, hand gesture recognition, improving 3D reconstruction, specular reflection reduction, and medical imaging.


R. Raskar, K. Tan, R. Feris, J. Kobler, J. Yu and M. Turk 
Harnessing Real-World Depth Edges with Multi-Flash Imaging
IEEE Computer Graphics and Applications, January 2005.

R. Feris, M. Turk, R. Raskar, K. Tan and G. Ohashi
Exploiting depth discontinuities for vision-based fingerspelling recognition
IEEE Workshop on Real-Time Vision for Human-Computer Interaction, Washington DC, USA, June 2004. 

R. Feris, R. Raskar, K. Tan and M. Turk 
Specular reflection reduction with multi-flash imaging
IEEE Brazilian Symposium on Computer Graphics and Image Processing, Curitiba, Brazil, 2004.

R. Raskar, K. Tan, R. Feris, J. Yu and M. Turk 
A non-photorealistic camera: depth edge detection and stylized rendering using multi-flash imaging
ACM SIGGRAPH, Los Angeles, August 2004. 

K. Tan, J. Kobler, R. Feris, P. Dietz and R. Raskar
Shape enhanced surgical visualizations and medical illustrations with multi-flash imaging
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2004.


Depth Edge Detection with Multi-Flash Imaging

More Information

NPR Camera Homepage
Rogerio Feris Homepage