Article ID: 441548
Journal: Computers & Graphics
Published Year: 2012
Pages: 17
File Type: PDF
Abstract

This paper describes an enhanced telepresence system that offers fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus. We present a complete software and hardware framework for implementing the system, based on an array of commodity Microsoft Kinect™ color-plus-depth cameras. Contributions include an algorithm for merging data between multiple depth cameras and techniques for automatic color calibration and for preserving stereo quality even at low rendering rates. We also present a solution to the interference that occurs between Kinect cameras with overlapping views. Emphasis is placed on a fully GPU-accelerated data processing and rendering pipeline that can apply hole filling, smoothing, data merger, surface generation, and color correction at rates of up to 200 million triangles/s on a single PC and graphics board. Finally, we present a Kinect-based markerless tracking system that combines 2D eye recognition with depth information, allowing head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. Enhancements in calibration, filtering, and data merger improve image quality over a previous version of the system.
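The markerless tracker described above combines a 2D eye detection with the per-pixel Kinect depth value to obtain a 3D head position for view rendering. A minimal sketch of that back-projection step, assuming a standard pinhole camera model; the intrinsics used here (fx, fy, cx, cy) are illustrative placeholders, not the paper's calibration:

```python
def backproject(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Map a detected 2D eye pixel (u, v) plus its depth (meters) to a
    camera-space 3D point (x, y, z) under a pinhole model.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    The default values are illustrative, roughly typical for a
    640x480 Kinect depth stream, not calibrated constants.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: an eye detected at the image center, 1 m from the camera,
# back-projects onto the optical axis.
eye_3d = backproject(319.5, 239.5, 1.0)
print(eye_3d)  # (0.0, 0.0, 1.0)
```

The resulting 3D eye positions can then drive the left/right view frusta for the head-tracked autostereoscopic rendering; averaging both eyes (or tracking each separately) is a design choice the abstract does not specify.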

Highlights
► A telepresence system with real-time 3D acquisition and autostereo display.
► Multiple commodity depth cameras for 3D scene reconstruction and for user tracking.
► Real-time algorithms for filtering, data merger and automatic color calibration.
► Asynchronous, independent rates for reconstruction, rendering, and autostereo display.

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Graphics and Computer-Aided Design