Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
401144 | 1438980 | 2015 | 16-page PDF | Free download |
• We propose a system for virtual task prediction in pen-based interfaces.
• Our system infers intended user actions by analyzing eye gaze movements.
• The first contribution is a carefully compiled multimodal dataset of gaze and pen data (see the sketch after this list).
• The second contribution is a novel gaze-based feature representation.
• The third contribution is a task- and scale-invariant virtual task prediction system.
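To make the shape of such a multimodal dataset concrete, a single recording could pair time-stamped pen samples with time-stamped gaze samples and an intended-task label. The sketch below is only illustrative; the field names and sampling details are assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PenSample:
    t: float         # timestamp in seconds
    x: float          # pen tip x coordinate (pixels)
    y: float          # pen tip y coordinate (pixels)
    pressure: float   # normalized tip pressure, 0..1

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze point x coordinate (pixels)
    y: float  # gaze point y coordinate (pixels)

@dataclass
class InteractionRecord:
    pen: List[PenSample]    # one pen stroke or gesture
    gaze: List[GazeSample]  # gaze stream recorded over the same interval
    task: str               # intended virtual manipulation command, e.g. "move" or "resize"
```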
In typical human–computer interaction, users convey their intentions through traditional input devices (e.g. keyboards, mice, joysticks) coupled with standard graphical user interface elements. Recently, pen-based interaction has emerged as a more intuitive alternative to these traditional means. However, existing pen-based systems are limited by their heavy reliance on auxiliary mode-switching mechanisms during interaction (e.g. hard or soft modifier keys, buttons, menus). In this paper, we describe how the eye gaze movements that naturally occur during pen-based interaction can be used to reduce dependency on explicit mode selection mechanisms in pen-based systems. In particular, we show that a range of virtual manipulation commands, which would otherwise require auxiliary mode-switching elements, can be issued with an 88% success rate with the aid of users' natural eye gaze behavior during pen-only interaction.
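As a rough illustration of how gaze behavior could drive task prediction, the sketch below represents each interaction by the gaze-to-pen distance profile over a stroke, resampled to a fixed length and normalized for scale, and feeds it to an off-the-shelf classifier. This is not the paper's actual feature set or pipeline; the function names, the common time base for pen and gaze samples, and the choice of classifier are assumptions made for illustration.

```python
# Minimal sketch (not the paper's pipeline): classify the intended virtual
# manipulation command from a scale-normalized gaze-to-pen distance profile.
import numpy as np
from sklearn.svm import SVC

def gaze_pen_distance_profile(pen_xy, gaze_xy, n_bins=32):
    """Resample the gaze-to-pen distance over time to a fixed-length,
    scale-normalized profile. pen_xy and gaze_xy are (N, 2) arrays assumed
    to be sampled on a common time base."""
    d = np.linalg.norm(pen_xy - gaze_xy, axis=1)              # distance at each sample
    t = np.linspace(0.0, 1.0, len(d))                         # normalized time
    profile = np.interp(np.linspace(0.0, 1.0, n_bins), t, d)  # fixed-length resampling
    scale = profile.max()
    return profile / scale if scale > 0 else profile          # crude scale invariance

def train_task_predictor(recordings, labels):
    """recordings: list of (pen_xy, gaze_xy) array pairs; labels: command names."""
    X = np.stack([gaze_pen_distance_profile(p, g) for p, g in recordings])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```

With fixed-length profiles like these, any standard classifier can be trained and evaluated per command class; the 88% success rate quoted in the abstract refers to the authors' own system, not to this sketch.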
Journal: International Journal of Human-Computer Studies - Volume 73, January 2015, Pages 91–106