Article ID: 534070
Journal: Pattern Recognition Letters
Published Year: 2013
Pages: 7 Pages
File Type: PDF
Abstract

Depth acquisition became inexpensive with the introduction of the Kinect sensor. For computer vision applications, however, depth maps captured by Kinect require additional processing to fill in missing regions. Conventional inpainting methods for color images cannot be applied directly to depth maps, as depth maps alone do not provide enough cues for accurate inference about scene structure. In this paper, we propose a novel fusion-based inpainting method to improve depth maps. The proposed fusion strategy integrates conventional inpainting with a recently developed non-local filtering scheme. The balance between depth and color information yields an accurate inpainting result. Experimental results show that the mean absolute error of the proposed method is about 20 mm, which is comparable to the precision of the Kinect sensor itself.
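The abstract describes fusing color cues with depth when filling holes. A minimal sketch of this idea, under my own assumptions (the paper's actual algorithm is not given here), is a joint-bilateral-style fill: each missing depth pixel is replaced by a weighted average of valid neighboring depths, where the weights combine spatial proximity with color similarity so that fills do not bleed across object boundaries. The function name and parameters below are illustrative, not from the paper.

```python
import numpy as np

def fuse_inpaint_depth(depth, color, radius=2, sigma_c=10.0, sigma_s=2.0):
    """Fill zero-valued (missing) depth pixels with a color-guided weighted
    average of valid neighbours -- a joint-bilateral-style fusion sketch,
    not the paper's exact method.

    depth : 2-D array, 0 marks missing measurements
    color : 2-D grayscale guide image of the same shape
    """
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if depth[y, x] != 0:          # pixel already valid
                continue
            wsum, dsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] != 0:
                        dc = float(color[ny, nx]) - float(color[y, x])
                        # color-similarity term keeps fills inside one object;
                        # spatial term favours close neighbours
                        wgt = np.exp(-(dc * dc) / (2.0 * sigma_c ** 2)
                                     - (dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
                        wsum += wgt
                        dsum += wgt * depth[ny, nx]
            if wsum > 0:
                out[y, x] = dsum / wsum   # weighted average of valid depths
    return out
```

On a flat surface with uniform color, a single hole is filled with its neighbours' depth; at a color edge, the color term suppresses contributions from the other side of the boundary, which is the role the highlights attribute to the color structure.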

► We propose a color-depth fusion-based depth map inpainting method. ► The fusion balances the color and depth cues of the scene structure well. ► The color structure is exploited to compensate for missing depth at object boundaries. ► The mean absolute error is comparable to the precision of the depth sensor.

Related Topics
Physical Sciences and Engineering Computer Science Computer Vision and Pattern Recognition