Article ID: 442843
Journal: Journal of Computational Design and Engineering
Published Year: 2015
Pages: 6 Pages
File Type: PDF
Abstract

In this paper, we propose a new approach that provides an interactive and collaborative lens for multiple users, supporting level-of-detail views on a public display through their smartphones. To enable this smartphone-based lens capability, the locations of the smartphones are detected and tracked using a Kinect sensor, which provides both RGB and depth data (RGB-D). In particular, human skeleton information is extracted from the Kinect 3D depth data to compute each smartphone's location with respect to the public display more efficiently and accurately, and to support head tracking for easy target selection and adaptive view generation. The proposed interactive and collaborative smartphone lens not only lets users explore local regions of the shared display but also supports activities such as level-of-detail (LOD) viewing and collaborative interaction. Implementation results are presented to demonstrate the advantages and effectiveness of the proposed approach.
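The abstract describes locating a hand-held smartphone over a public display from Kinect skeleton joints and using head tracking to drive a level-of-detail lens. The sketch below is a minimal illustration of that idea, not the authors' implementation: the Kinect-to-display calibration, the joint names, the display dimensions, and the distance thresholds are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: map a Kinect-tracked hand joint (taken as the smartphone
# position) onto a public display and pick an LOD zoom from the head-to-phone
# distance. All constants below are illustrative assumptions.

# Assumed extrinsic calibration: rotation and translation taking points from
# Kinect camera coordinates (metres) into display coordinates, where the
# display plane is z = 0 and (x, y) are metres from its top-left corner.
R_KINECT_TO_DISPLAY = np.eye(3)
T_KINECT_TO_DISPLAY = np.array([0.0, 0.0, 0.0])

DISPLAY_SIZE_M = (2.0, 1.2)     # assumed physical size of the shared display
DISPLAY_RES_PX = (1920, 1080)   # assumed pixel resolution


def to_display_coords(point_kinect):
    """Transform a 3D point from Kinect camera space into display space."""
    return R_KINECT_TO_DISPLAY @ np.asarray(point_kinect) + T_KINECT_TO_DISPLAY


def lens_centre_px(hand_joint_kinect):
    """Project the tracked hand joint onto the display plane and convert
    the result to pixel coordinates for positioning the lens."""
    x, y, _z = to_display_coords(hand_joint_kinect)
    u = x / DISPLAY_SIZE_M[0] * DISPLAY_RES_PX[0]
    v = y / DISPLAY_SIZE_M[1] * DISPLAY_RES_PX[1]
    return int(round(u)), int(round(v))


def lens_zoom_level(head_joint_kinect, hand_joint_kinect, levels=(1.0, 2.0, 4.0)):
    """Choose a level-of-detail zoom from the head-to-phone distance:
    holding the phone closer to the eyes selects a more detailed view."""
    d = np.linalg.norm(np.asarray(head_joint_kinect) - np.asarray(hand_joint_kinect))
    if d < 0.25:
        return levels[2]
    if d < 0.45:
        return levels[1]
    return levels[0]


if __name__ == "__main__":
    # Example joints in Kinect camera coordinates (metres), purely illustrative.
    hand = [0.6, 0.4, 1.8]
    head = [0.55, 0.1, 2.1]
    print("lens centre (px):", lens_centre_px(hand))
    print("LOD zoom factor:", lens_zoom_level(head, hand))
```

In a multi-user setting, the same mapping could be applied per tracked skeleton, giving each participant an independent lens centre and zoom level on the shared display.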

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Graphics and Computer-Aided Design