Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
489745 | Procedia Computer Science | 2015 | 8 Pages |
With the increasing emergence of ambient intelligence, sensors, and wireless network technologies, robotic assistance has become a very active area of research in autonomous intelligent systems. Robotic systems would be integrated into the environment as physical autonomous entities. These entities will be able to interact independently with the ambient environment and provide services such as assistance to people in homes, offices, buildings, and public spaces. Furthermore, robots as cognitive entities will be able to coordinate their activities with other physical or logical entities; to move, sense, and explore the surrounding environment; and to decide and act to meet the situations they may encounter. These cognitive operations will be part of a smart network that can provide, individually or collectively, new features and various support services anywhere and anytime. The aim of this research work is to build a multimodal fusion engine using the semantic web. This multimodal system will be applied to a wheelchair equipped with a manipulator arm to help people with disabilities interact with their main tool of movement and with their environment. This work focuses on building a multimodal interaction fusion engine that better understands multimodal inputs using ontologies.
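To make the idea of ontology-based fusion concrete, here is a minimal sketch (not from the paper; all names, tokens, and slot labels are hypothetical): each modality, such as speech or gesture, is mapped through a tiny concept dictionary standing in for an ontology, producing a partial semantic frame, and the fusion engine merges these frames into one complete wheelchair command.

```python
# Hypothetical sketch of ontology-guided multimodal fusion.
# A toy "ontology" maps raw tokens from each modality to shared concepts;
# the fusion engine merges partial semantic frames into one command.

# Toy concept dictionary: token -> (slot, concept)
ONTOLOGY = {
    "go":          ("action", "Move"),     # speech token
    "stop":        ("action", "Halt"),     # speech token
    "kitchen":     ("target", "Kitchen"),  # speech or touch token
    "point_left":  ("direction", "Left"),  # gesture token
    "point_right": ("direction", "Right"), # gesture token
}

def interpret(modality, token):
    """Map one modality event to a partial semantic frame."""
    slot, concept = ONTOLOGY[token]
    return {slot: concept, "source_" + slot: modality}

def fuse(frames):
    """Merge partial frames; earlier inputs win on conflicting slots."""
    command = {}
    for frame in frames:
        for slot, value in frame.items():
            command.setdefault(slot, value)
    return command

inputs = [interpret("speech", "go"), interpret("gesture", "point_left")]
print(fuse(inputs))
# → {'action': 'Move', 'source_action': 'speech',
#    'direction': 'Left', 'source_direction': 'gesture'}
```

In a real system the dictionary would be replaced by a proper ontology (e.g. in OWL) so that fusion can exploit concept hierarchies and relations rather than flat token lookups; the sketch only illustrates the frame-merging step.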