Article ID Journal Published Year Pages File Type
527610 Image and Vision Computing 2007 11 Pages PDF
Abstract

We present an algorithm for efficient depth calculation and view synthesis. The main goal is the on-line generation of realistic interpolated views of a dynamic scene. The inputs are video streams originating from two or more calibrated, static cameras. Efficiency is achieved by the parallel use of the CPU and the GPU in a multi-threaded implementation. The input images are projected onto a plane sweeping through 3D space, using the hardware-accelerated transformations available on the GPU. A correlation measure is calculated simultaneously for all pixels on the plane and compared across the different plane positions. A noisy 'virtual' view and a crude depth map are obtained in very limited time. We then apply a min-cut/max-flow algorithm on a graph, implemented on the CPU, to refine this result by a global optimisation.
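The core plane-sweep idea can be illustrated with a minimal CPU-only sketch. The code below assumes a rectified stereo pair, where sweeping a fronto-parallel plane through space reduces to shifting the right image by the corresponding disparity; it uses a simple per-pixel absolute-difference cost in place of the paper's correlation measure, and omits both the GPU projection and the graph-cut refinement. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def plane_sweep_depth(left, right, max_disp):
    """Pick, per pixel, the swept plane (disparity) with the lowest match cost.

    For a rectified pair, projecting the right image onto a fronto-parallel
    plane at depth d is equivalent to shifting it horizontally by the
    disparity that d induces; here we sweep disparities 0..max_disp directly.
    """
    h, w = left.shape
    costs = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # "Project" the right image onto the current plane: shift by d pixels.
        shifted = np.full((h, w), np.inf)
        if d == 0:
            shifted = right.astype(float)
        else:
            shifted[:, d:] = right[:, :-d]
        # Per-pixel absolute difference as a stand-in correlation measure.
        costs[d] = np.abs(left.astype(float) - shifted)
    # Winner-takes-all over all plane positions: a crude depth (disparity) map.
    return np.argmin(costs, axis=0)

# Usage: a synthetic pair where the right view is the scene shifted by 3 px.
rng = np.random.default_rng(0)
scene = rng.random((5, 20))
left, right = scene[:, :16], scene[:, 3:19]
disp = plane_sweep_depth(left, right, 6)
```

In the real algorithm, each plane position is handled by the GPU's projective texture mapping so that all pixels are scored in parallel, and the noisy winner-takes-all result is smoothed by the min-cut/max-flow optimisation on the CPU.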

Related Topics
Physical Sciences and Engineering Computer Science Computer Vision and Pattern Recognition