| Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
|---|---|---|---|---|
| 411273 | 679513 | 2016 | 13-page PDF | Free download |
• A multimodal approach to detecting starting engagement from non-explicit cues.
• Results show that our approach outperforms the purely spatial one in all conditions.
• The mRMR strategy reduces the feature space to 7 features without performance loss.
• Validation of the meaningful features proposed by the sociologist Schegloff for engagement detection.
• A robot-centered, labeled 4-hour corpus recorded in a home-like environment.
Recognition of intentions is a subconscious cognitive process vital to human communication. This skill enables anticipation and increases the quality of interactions between humans. Within the context of engagement, non-verbal signals are used to communicate the intention of starting an interaction with a partner. In this paper, we investigate methods to detect these signals so that a robot can know when it is about to be addressed. The originality of our approach lies in taking inspiration from the social and cognitive sciences to perform this perception task. We investigate meaningful features, i.e. human-readable features, and elicit which of them are important for recognizing someone's intention to start an interaction. Classically, spatial information such as the human's position and speed and the human–robot distance is used to detect engagement. Our approach integrates multimodal features gathered using a companion robot equipped with a Kinect. Evaluation on our corpus, collected in spontaneous conditions, highlights the robustness of the approach and validates its use in a real environment. Experimental validation shows that the multimodal feature set yields better precision and recall than spatial and speed features alone. We also demonstrate that 7 selected features are sufficient to provide a good starting-engagement detection score. In our last investigation, we show that, over our full set of 99 features, feature-space reduction is not a solved task. This result opens new research perspectives on multimodal engagement detection.
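The highlights mention an mRMR (minimum-redundancy maximum-relevance) strategy that prunes the 99-feature set down to 7. The sketch below illustrates one common greedy mRMR variant in Python; the paper's exact formulation is not given here, so the `mrmr_select` helper, the 10-bin discretization used for redundancy estimates, and the random `X`/`y` data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of greedy mRMR feature selection (hypothetical variant):
# pick the feature most relevant to the label, then repeatedly add the
# feature maximizing (relevance to label) - (mean redundancy with the
# already-selected features), both measured via mutual information.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr_select(X, y, k=7):
    """Greedily select k feature indices by relevance minus redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    candidates = set(range(X.shape[1])) - set(selected)

    def discretize(col):
        # Crude 10-bin discretization so mutual_info_score can compare
        # two continuous feature columns (an assumption of this sketch).
        return np.digitize(col, np.histogram_bin_edges(col, 10))

    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in candidates:
            redundancy = np.mean([
                mutual_info_score(discretize(X[:, j]), discretize(X[:, s]))
                for s in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        candidates.remove(best)
    return selected

# Hypothetical usage: X would hold the 99 multimodal features per frame,
# y the binary "starting engagement" labels from the corpus.
X = np.random.rand(500, 99)
y = np.random.randint(0, 2, 500)
print(mrmr_select(X, y, k=7))
```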
Journal: Robotics and Autonomous Systems - Volume 75, Part A, January 2016, Pages 4–16