Article ID: 441725
Journal: Computers & Graphics
Published Year: 2006
Pages: 10
File Type: PDF
Abstract

Traditional interaction with virtual environments (VEs) via widgets or menus forces users into rigidly sequential interactions. Previous research has shown that adopting speech recognition (SR) enables more flexible and natural forms of interaction, resembling human-to-human communication. This approach, however, requires programmers to compile human-supplied knowledge in the form of grammars, which are then used at runtime to turn spoken utterances into complete commands. Furthermore, speech recognition must be hard-coded into the application.

This paper presents a fully automatic process for building such a body of knowledge from the information embedded within the application's source code. Throughout the coding process, the programmer in fact embeds a large amount of semantic information. This research exploits that semantic richness to provide a self-configurable system which automatically adapts its understanding of human commands to the content and the semantic information defined within the application's source code.
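The core idea of mining a command grammar from identifiers in source code can be illustrated with a minimal sketch. This is not the paper's implementation: the class names (`Lamp`, `Door`), the identifier-splitting heuristic, and the phrase-to-command mapping are all illustrative assumptions, shown here in Python for brevity.

```python
import ast
import re

# Hypothetical application source: the grammar is derived from its identifiers.
SOURCE = '''
class Lamp:
    def turn_on(self): ...
    def turn_off(self): ...

class Door:
    def open(self): ...
    def close(self): ...
'''

def split_identifier(name):
    # Split snake_case and CamelCase identifiers into lowercase words,
    # e.g. "turn_on" -> ["turn", "on"], "LampShade" -> ["lamp", "shade"].
    parts = re.split(r'_|(?<=[a-z])(?=[A-Z])', name)
    return [p.lower() for p in parts if p]

def build_grammar(source):
    """Map spoken phrases such as 'turn on lamp' to (class, method) commands
    by walking the application's syntax tree."""
    grammar = {}
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            obj_words = " ".join(split_identifier(node.name))
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    verb_words = " ".join(split_identifier(item.name))
                    grammar[f"{verb_words} {obj_words}"] = (node.name, item.name)
    return grammar

grammar = build_grammar(SOURCE)
print(grammar["turn on lamp"])  # ('Lamp', 'turn_on')
```

A recognizer front end could then match an utterance against the generated phrases and dispatch the associated method call, so the vocabulary evolves automatically as the application code changes.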

Related Topics
Physical Sciences and Engineering Computer Science Computer Graphics and Computer-Aided Design
Authors