Article ID: 536635
Journal: Pattern Recognition Letters
Published Year: 2008
Pages: 13
File Type: PDF
Abstract

This paper proposes a multiple facial feature interface that allows users with various disabilities to perform different mouse operations. Using a regular PC camera, the proposed system detects the user’s eye and mouth movements and then interprets the communication intent to control the computer. Mouse movements are implemented based on the user’s eye movements, while clicking events are implemented based on the user’s mouth shapes, such as opening/closing. The proposed system is composed of three modules: a facial feature detector, a facial feature tracker, and a mouse controller. The facial region is initially identified using a skin-color model and connected-component (CC) analysis. Thereafter, the eye regions are localized using a neural network (NN)-based texture classifier that discriminates between eye and non-eye regions within the face, and the mouth region is localized using an edge detector. Once the eye and mouth regions are localized, they are continuously and accurately tracked using a mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse movements and clicks are then implemented. To assess the validity of the proposed method, it was applied to three applications: a web browser, a ‘spelling board’, and the game ‘catching-a-bird’. Two test groups involving 34 users evaluated the system, and the results showed that the proposed system can serve as an efficient, effective, user-friendly, and convenient communication device.
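As an illustration only (the abstract describes the pipeline but includes no code), the following minimal Python sketch shows how such a pipeline might be assembled with OpenCV: skin-color segmentation plus connected-component analysis for face detection, mean-shift tracking of an eye window via a hue back-projection, and template matching to decide the mouth state. All thresholds, function names, and the choice of OpenCV are assumptions, and the paper’s NN-based eye/non-eye texture classifier is not reproduced here.

# Illustrative sketch, not the authors' code: face detection by skin color,
# mean-shift eye tracking, and template-matched mouth state (OpenCV).
import cv2
import numpy as np

def detect_face_region(frame_bgr):
    """Locate the face as the largest skin-colored connected component.
    The HSV skin thresholds below are illustrative placeholders."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 180, 255)))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
    if n < 2:
        return None
    # Skip label 0 (background); take the largest remaining component.
    idx = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h = stats[idx, :4]
    return (x, y, w, h)

def eye_hue_histogram(frame_bgr, eye_window):
    """Build the normalized hue histogram the mean-shift tracker needs."""
    x, y, w, h = eye_window
    roi = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def track_eye_meanshift(frame_bgr, eye_window, eye_hist):
    """One mean-shift update of the eye window on a hue back-projection."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], eye_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, eye_window = cv2.meanShift(backproj, eye_window, criteria)
    return eye_window  # drives the mouse-pointer position

def mouth_is_open(frame_gray, closed_mouth_template, threshold=0.6):
    """Treat the mouth as 'open' (a click) when it no longer matches a
    closed-mouth template well; the 0.6 threshold is an assumption."""
    result = cv2.matchTemplate(frame_gray, closed_mouth_template,
                               cv2.TM_CCOEFF_NORMED)
    return result.max() < threshold

In a real loop, the eye window from track_eye_meanshift would be mapped to screen coordinates each frame, and a transition of mouth_is_open from False to True would fire a click event.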

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition