Article code: 6937552
Journal code: 868996
Publication year: 2016
English article: 11-page PDF
Full-text version: free download
English title of the ISI article
Enhanced control of a wheelchair-mounted robotic manipulator using 3-D vision and multimodal interaction
Related topics
Engineering and Basic Sciences; Computer Engineering; Computer Vision and Pattern Recognition
English abstract
This paper presents a multiple-sensor, 3D vision-based, autonomous wheelchair-mounted robotic manipulator (WMRM). Two 3D sensors were employed: one for object recognition and the other for recognizing body parts (face and hands). The goal is to recognize everyday items and automatically interact with them in an assistive fashion. For example, when a cereal box is recognized, it is grasped, poured into a bowl, and brought to the user. Daily objects (e.g., bowl and hat) were automatically detected and classified using a three-step procedure: (1) remove the background based on 3D information and isolate the point cloud of each object; (2) extract a feature vector for each segmented object from its 3D point cloud and its color image; and (3) classify the feature vectors into object categories with a nonlinear support vector machine (SVM). To retrieve specific objects, three user-interface methods were adopted: voice-based, gesture-based, and hybrid commands. The presented system was tested on two common activities of daily living: feeding and dressing. The results revealed an accuracy of 98.96% on a dataset of twelve daily objects. The experimental results indicated that hybrid (gesture and speech) interaction outperforms either single-modality interaction.
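Step (3) of the pipeline described in the abstract, classifying per-object feature vectors with a nonlinear SVM, can be illustrated with a minimal Python/scikit-learn sketch. This is not the authors' code: the synthetic feature vectors below stand in for the real 3D point-cloud and color descriptors produced by steps (1)-(2), and the feature dimension, sample counts, and SVM hyperparameters are assumptions. An RBF kernel is shown as a typical choice of "nonlinear SVM"; the paper's actual kernel and features may differ.

# Minimal sketch of object classification with a nonlinear SVM.
# Hypothetical stand-in features; not the authors' implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

N_OBJECTS = 12       # twelve daily objects, as in the reported dataset
FEATURE_DIM = 64     # assumed length of the combined 3D + color descriptor
PER_CLASS = 50       # assumed number of segmented instances per object class

# Placeholder for steps (1)-(2): one descriptor per segmented object,
# drawn around a random class-specific mean so the classes are separable.
class_means = rng.normal(size=(N_OBJECTS, FEATURE_DIM))
X = np.repeat(class_means, PER_CLASS, axis=0) + rng.normal(scale=0.5, size=(N_OBJECTS * PER_CLASS, FEATURE_DIM))
y = np.repeat(np.arange(N_OBJECTS), PER_CLASS)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

# Step (3): nonlinear (RBF-kernel) SVM as the object classifier.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))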
Publisher
Database: Elsevier - ScienceDirect
Journal: Computer Vision and Image Understanding - Volume 149, August 2016, Pages 21-31
Authors