Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
487357 | Procedia Computer Science | 2015 | 7 | |
Vision-based approaches to sign language recognition have made remarkable advances in recent years, alongside extensive work in speech processing to convert speech to text. In contrast, vision-based classification of facial gestures (lip movement, eyebrow patterns, etc.) for communication, designed especially for differently abled persons, remains a less explored area. In our work, we explore approaches to classifying facial gestures so that they can be incorporated into any sign language or vision-based gesture recognition system for more precise decision making. We have designed a real-time system that detects alphabets by recognizing lip patterns based on texture and shape. The system takes live video input and processes it in real time. The object detector of the Computer Vision Toolbox is used to detect the lips in frames extracted from the video input; five consecutive frames are extracted so as to trace the movements made while speaking a particular syllable. The histogram of oriented gradients (HOG) of the extracted lip image is used as the feature vector for recognition. The recognizer is designed using an Artificial Neural Network (ANN) to recognize four classes, viz. the lip movements formed for the four alphabets 'A', 'B', 'C', and 'D'. The entire system is modelled and tested for real-time performance on video at 10 frames per second. Experimental results show that the system performs satisfactorily, with a recognition rate as high as 90.67%.
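To make the described pipeline concrete, the sketch below mirrors its stages (lip detection, HOG extraction over five consecutive frames, ANN classification) in Python with OpenCV and scikit-learn. Note that the paper itself uses MATLAB's Computer Vision Toolbox; the cascade file name, HOG window and cell sizes, and the hidden-layer width here are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

# Haar cascade as a stand-in for the toolbox object detector; this cascade
# file name is an assumption (it ships with some OpenCV contrib packages).
mouth_cascade = cv2.CascadeClassifier("haarcascade_mcs_mouth.xml")

# HOG over a fixed-size lip patch; window/block/cell sizes are assumptions.
hog = cv2.HOGDescriptor((64, 32), (16, 16), (8, 8), (8, 8), 9)

def lip_features(frames):
    """Detect the lip region in each frame and concatenate HOG features."""
    feats = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = mouth_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                               minNeighbors=5)
        if len(boxes) == 0:
            return None                  # detection failed; skip sequence
        x, y, w, h = boxes[0]
        patch = cv2.resize(gray[y:y + h, x:x + w], (64, 32))
        feats.append(hog.compute(patch).ravel())
    return np.concatenate(feats)        # one pooled vector per sequence

def read_sequence(cap, n_frames=5):
    """Grab n consecutive frames (the paper uses five) from a capture."""
    frames = []
    while len(frames) < n_frames:
        ok, frame = cap.read()
        if not ok:
            return None
        frames.append(frame)
    return frames

# Four-class ANN over the pooled HOG vectors ('A'..'D'); the hidden-layer
# size is an assumption, not the paper's reported architecture.
clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500)
# clf.fit(X_train, y_train)   # X_train: one pooled feature row per sequence
```

Concatenating the per-frame HOG vectors, as sketched here, is one simple way to capture lip movement across the five frames; whichever pooling the authors use, the classifier sees a single fixed-length descriptor per spoken alphabet.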