Article ID: 4942886
Journal: Entertainment Computing
Published Year: 2017
Pages: 31
File Type: PDF
Abstract
This work presents a methodology for generic facial expression transfer, aiming to speed up the production of facial animation for interactive applications. We propose an adaptive, semiautomatic methodology that transfers facial expressions from one face mesh to another. The model has three main stages: rigging, expression transfer, and animation; the output meshes can be used as key poses for blendshape-based animation. The input to the model is a face mesh in a neutral pose and a set of face data that can come from different sources, such as artist-crafted meshes and motion capture data. The model generates a set of blendshapes corresponding to the input set, with minimal user intervention. We use a simple rig structure that provides a trivial correspondence both with systems based on sparse facial feature points and with dense geometric data supplied by RGBD-based systems. The rig structure can be refined on the fly to handle different input geometric data as needed. Results demonstrate the quality of the expression transfer, assessed on face data including artist-crafted meshes and performance-driven animation.
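For context, blendshape-based animation of the kind the output meshes feed into is conventionally the linear blendshape model: a frame is the neutral mesh plus a weighted sum of per-expression vertex offsets. Below is a minimal Python/NumPy sketch of that standard model, not the paper's specific pipeline; the function name blend_shapes and the toy meshes are hypothetical.

    import numpy as np

    def blend_shapes(neutral, key_poses, weights):
        """Combine blendshape key poses as weighted offsets from the neutral mesh.

        neutral:   (V, 3) array of vertex positions for the neutral face.
        key_poses: list of (V, 3) arrays, one per expression key pose.
        weights:   per-blendshape scalars, typically in [0, 1].
        """
        result = neutral.copy()
        for pose, w in zip(key_poses, weights):
            result += w * (pose - neutral)  # add the weighted expression delta
        return result

    # Usage: blend two hypothetical key poses at partial intensity.
    neutral = np.zeros((4, 3))                   # toy 4-vertex "mesh"
    smile = neutral + np.array([0.0, 0.1, 0.0])  # placeholder key poses
    brow = neutral + np.array([0.0, 0.0, 0.2])
    frame = blend_shapes(neutral, [smile, brow], [0.7, 0.3])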
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence