A Structured Light Based Approach to Depth Edge Detection

Jiyoung Park, Cheolhwon Kim, Jaekeun Na, Juneho Yi and Matthew Turk

Overview

The goal of this research is to produce a depth edge map of a real-world scene. We strategically project structured light and exploit the distortion of the light pattern in the structured light image along depth discontinuities to reliably detect depth edges. We present a method that guarantees detection of depth edges for a given range of object locations, with accurate control of key parameters such as the detectable depth difference and the stripe width. Experimental results show that the proposed method accurately detects depth edges of human hands and bodies as well as of general objects.

Details

1.1 Basic idea

We illustrate the basic idea of depth edge detection in Fig. 1. First, as shown in Fig. 1 (a), we consecutively project white light and structured light onto the scene where depth edges are to be detected. The structured light carries a special light pattern. Since we have placed the projector and camera vertically, we use a pattern comprising simple black and white horizontal stripes of equal width; vertical stripes can be used with a similar analysis. We capture the white light image and then the structured light image. Second, we extract the horizontal pattern by differencing the white light and structured light images and applying a robust thresholding method. We call this difference image the pattern image (see Fig. 1 (b)). Third, we identify depth edges in the pattern image, guided by edge information from the white light image.
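For illustration, a minimal sketch of this differencing step is given below in Python with OpenCV. The function name is ours, and Otsu's global threshold stands in for the paper's robust thresholding method, which is not detailed on this page.

```python
import cv2

def extract_pattern_image(white_path, structured_path):
    """Recover the projected stripe pattern by differencing the two captures."""
    white = cv2.imread(white_path, cv2.IMREAD_GRAYSCALE)
    struct = cv2.imread(structured_path, cv2.IMREAD_GRAYSCALE)

    # The black stripes receive no projector light, so the difference is
    # large under black stripes and near zero under white stripes.
    diff = cv2.absdiff(white, struct)

    # Otsu's method is an assumed stand-in for the paper's robust
    # thresholding scheme.
    _, pattern = cv2.threshold(diff, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pattern  # binary pattern image, as in Fig. 1 (b)
```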

We exploit the distortion of the light pattern in the structured light image along depth edges. Since the horizontal pattern can be regarded as a periodic signal with a specific frequency, we can easily detect candidate locations for depth edges by applying a Gabor filter to the pattern image: the amplitude response of the Gabor filter is very low where the light pattern is distorted. Fig. 1 (c) illustrates this process. Finally, we accurately locate depth edges using edge information from the white light image, yielding the final depth edge map shown in Fig. 1 (d).
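The sketch below computes such an amplitude map with a quadrature pair of Gabor kernels tuned to the stripe period. The kernel parameters and the candidate threshold are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def gabor_amplitude(pattern, stripe_px):
    """Amplitude response of a quadrature Gabor pair tuned to horizontal
    stripes of width stripe_px (pattern period = 2 * stripe_px)."""
    lam = 2.0 * stripe_px            # carrier wavelength = stripe period
    sigma = 0.5 * lam                # bandwidth: an illustrative heuristic
    ksize = int(4 * sigma) | 1       # odd-sized kernel
    # theta = pi/2 makes the carrier vary vertically, matching stripes
    # that run horizontally across the image.
    even = cv2.getGaborKernel((ksize, ksize), sigma, np.pi / 2, lam, 0.5,
                              psi=0)
    odd = cv2.getGaborKernel((ksize, ksize), sigma, np.pi / 2, lam, 0.5,
                             psi=np.pi / 2)
    f = pattern.astype(np.float32)
    re = cv2.filter2D(f, cv2.CV_32F, even)
    im = cv2.filter2D(f, cv2.CV_32F, odd)
    return cv2.magnitude(re, im)     # drops where the pattern is distorted

# Candidate depth edges: pixels whose response falls well below the
# global maximum; the 0.3 fraction is an assumed, tunable threshold.
# amp = gabor_amplitude(pattern, stripe_px=8)
# candidates = amp < 0.3 * amp.max()
```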

[Fig. 1]

Fig. 1. Illustration of the basic idea to compute a depth edge map: (a) capture of a white light image and a structured light image, (b) pattern image, (c) detection of depth edges by applying a Gabor filter to the pattern image with edge information from the white light image, (d) final depth edge map.

1.2 Extending the detectable range of depth edges with parameter control

In practice, the distortion along a depth discontinuity may not occur, or may be too small to detect, depending on the distance from the camera and projector. Fig. 2 shows an example. Along the depth edges between objects A and B and between objects C and D, the distortion, i.e., the offset of the pattern, almost disappears. This makes it infeasible to detect these depth edges using a Gabor filter.

We present a method, based on a single camera and projector setup, that guarantees the occurrence of the distortion along depth discontinuities irrespective of object location. As shown in Fig. 3, we project additional structured light whose spatial period is successively halved, i.e., patterns with stripe widths w/2, w/4, and so on. Accordingly, the range of detectable pattern offsets is extended. We have used a general-purpose LCD projector; however, an infrared projector can be employed with the same analysis in order to apply the method to humans.

[Fig. 2]

Fig. 2. Disappearance of the distortion along depth discontinuities depending on the distance of an object from the camera and projector: (a) white light image, (b) pattern image, (c) amplitude response of the Gabor filter. Along the depth edges between objects A and B and between objects C and D in the pattern image (b), the distortion of the pattern almost disappears. This makes it infeasible to detect these depth edges using a Gabor filter.

[Fig. 3]

Fig. 3. The detectable range of depth edges can be extended by projecting additional structured light with different stripe widths.
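For illustration, the sketch below generates such a family of projector slides with successively halved stripe widths. The projector resolution (1024x768) and the base stripe width in pixels are assumed values; the paper derives the physical stripe width from Eq. (1) below.

```python
import numpy as np

def stripe_pattern(height, width, stripe_px):
    """Horizontal black/white stripes of equal width (period = 2 * stripe_px)."""
    bands = (np.arange(height) // stripe_px) % 2   # alternating 0/1 row bands
    return (np.repeat(bands, width).reshape(height, width) * 255).astype(np.uint8)

# Base slide of stripe width w_px plus two slides with halved spatial
# period; 1024x768 and w_px = 16 are assumed projector settings.
w_px = 16
slides = [stripe_pattern(768, 1024, w_px // k) for k in (1, 2, 4)]
```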

The detectable range of depth edges, [a_min, a_max], is computed in the following two steps:

Step 1: Determination of the stripe width, w, of the structured light

First, we set a_max to the distance from the camera to the farthest background. Given the minimum depth difference between object points, Δ_min, the stripe width w can be computed as:

(1)

where a, d and f denote the distances from the camera to the object point, from the camera to the projector, and from the camera to the virtual image plane, respectively.

Step 2: Determination of the minimum of the detectable range, a_min

Let Δ_max denote the maximum depth difference between object points in the range that guarantees the occurrence of the distortion along depth discontinuities. Then

(2)

where o_max denotes the maximum visual offset of the patterns, and a_min = a_max − Δ_max. This way, we are guaranteed to detect the depth edges of all object points located in the range [a_min, a_max] and separated in depth by no less than Δ_min.
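Since the formulas of Eqs. (1) and (2) do not survive on this page, the sketch below only illustrates the two-step procedure under an assumed standard projector-camera disparity model for the offset and an assumed upper bound o_max on the detectable offset; it is not the paper's exact derivation. The numeric values of d and f are taken from the experimental setup in Section 1.3.

```python
def offset(a, delta, d=0.173, f=3.0):
    """Assumed image-plane offset between the pattern on two surfaces at
    depths a and a + delta (standard projector-camera disparity
    geometry); the paper's exact relation, Eq. (1), may differ."""
    return f * d * delta / (a * (a + delta))

# Step 1: the smallest offset that must remain visible occurs for the
# farthest objects (a_max) at the minimum separation (delta_min); the
# stripe width w is chosen from this value via Eq. (1).
a_max, delta_min = 3.0, 0.1
o_min = offset(a_max, delta_min)

# Step 2: sweep a downward from a_max until the offset for the largest
# in-range separation exceeds the maximum detectable offset o_max.
# o_max is an assumed multiple of o_min, standing in for Eq. (2).
o_max = 6 * o_min
a = a_max
while a > 0.5 and offset(a, a_max - a) <= o_max:
    a -= 0.001
a_min = a
print(f"illustrative detectable range: [{a_min:.3f} m, {a_max:.1f} m]")
```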

1.3 Experimental results

Fig. 4 (a) and (b) display the front and side views of the scene, respectively. All the objects are located within 2.4m ~ 3m of the camera. Setting a_max = 3m, d = 0.173m, f = 3m and Δ_min = 0.1m, w and a_min are determined as 0.0084m and 2.325m, respectively. That is, the detectable range of depth edges becomes [2.325m, 3m], a range 0.675m long. Thus, the stripe widths of the three structured light patterns that guarantee detection of depth edges in this range are w, w/2 and w/4. Fig. 4 (c)-(e) show the pattern images and their Gabor amplitude maps for the three cases. Each individual Gabor amplitude map shows that we cannot detect all the depth edges in the scene using a single structured light image. However, combining the results from the three cases yields the final Gabor amplitude map in Fig. 4 (f), where the distortion needed for detection is guaranteed to appear along depth discontinuities in the range [2.325m, 3m]. Finally, we obtain the depth edge map in Fig. 4 (g). The result shows that this method is capable of detecting the depth edges of all objects located in the detectable range. The output of the traditional Canny edge detector is also displayed for comparison.
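A sketch of this combination step follows: the normalized amplitude maps are fused with a pixel-wise minimum (our reading of "combining the results from the three cases"), thresholded, and gated by Canny edges from the white light image. The function name, thresholds, and dilation tolerance are assumptions.

```python
import cv2
import numpy as np

def depth_edge_map(white, amp_maps, amp_frac=0.3):
    """Fuse per-width Gabor amplitude maps and gate them with intensity
    edges from the white light image (uint8 grayscale)."""
    # Pixel-wise minimum of the normalized maps: distortion (a low
    # response) under ANY stripe width marks a candidate depth edge.
    norm = [cv2.normalize(a.astype(np.float32), None, 0.0, 1.0,
                          cv2.NORM_MINMAX) for a in amp_maps]
    candidates = np.minimum.reduce(norm) < amp_frac

    # Localize the candidates on intensity edges of the white light
    # image; the dilation adds a small spatial tolerance.
    edges = cv2.Canny(white, 50, 150)
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8)) > 0
    return ((candidates & edges) * 255).astype(np.uint8)

# e.g. result = depth_edge_map(white_img, [amp_w, amp_w2, amp_w4])
```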

[Fig. 4]

Fig. 4. Detecting depth edges using a single camera and projector.

Publication

J. Park, C. Kim, J. Na, J. Yi and M. Turk, Using Structured Light for Efficient Depth Edge Detection, to appear in Image and Vision Computing (http://dx.doi.org/10.1016/j.imavis.2008.01.006)

C. Kim, J. Park, J. Yi and M. Turk, Efficient depth edge detection using structured light, International Symposium on Visual Computing, Lake Tahoe, December 5-7, 2005 (http://ilab.cs.ucsb.edu/publications/isvc05.pdf)

C. Kim, J. Park, J. Yi and M. Turk, Structured light based depth edge detection for object shape recovery, IEEE CVPR Workshop on Projector-Camera Systems, San Diego, June 25, 2005 (http://ilab.cs.ucsb.edu/publications/PROCAMS.pdf)