|Article code||Journal code||Publication year||English paper||Persian translation||Full-text version|
|382212||660745||2016||10-page PDF||Order||Free download|
• This paper presents a prototype to assist blind people in indoor environments.
• The prototype incorporates recognition and guidance units.
• It also comprises a voice-user interface.
• Tests in a public indoor space demonstrate promising capabilities.
Assistive technologies for blind people are growing rapidly, providing useful tools to support daily activities and improve social inclusion. Most of these technologies focus mainly on helping blind people navigate and avoid obstacles. Other works emphasize helping them recognize surrounding objects. Very few, however, couple both aspects (i.e., navigation and recognition). To address these needs, we describe in this paper an innovative prototype that offers the capabilities to (i) move autonomously and (ii) recognize multiple objects in public indoor environments. It incorporates lightweight hardware components (camera, IMU, and laser sensors), all mounted on a reasonably sized integrated device worn on the chest. It requires the indoor environment to be ‘blind-friendly’, i.e., prior information about it must be prepared and loaded into the system beforehand. Its algorithms are mainly based on advanced computer vision and machine learning approaches. Interaction between the user and the system takes place through speech recognition and synthesis modules. The prototype offers the user the possibility to (i) walk across the site to reach a desired destination while avoiding static and mobile obstacles, and (ii) ask the system, through vocal interaction, to list the prominent objects in the user's field of view. We illustrate the performance of the proposed prototype through experiments conducted in a blind-friendly indoor space set up at our Department premises.
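The abstract describes a ‘blind-friendly’ environment whose map is prepared and loaded beforehand, and a vocal query that lists the prominent objects in the user's field of view. As a minimal sketch of that query step only, the following Python snippet answers "what is in front of me?" from a preloaded 2-D map using a simple cone-shaped field of view. All names (`PRELOADED_MAP`, `objects_in_view`), the coordinates, and the cone geometry are illustrative assumptions, not the authors' actual implementation, which relies on computer vision and machine learning rather than a pure geometric lookup.

```python
import math

# Hypothetical preloaded map of a blind-friendly indoor site:
# object name -> (x, y) position in metres.
PRELOADED_MAP = {
    "entrance door": (0.0, 5.0),
    "reception desk": (2.0, 4.0),
    "elevator": (-3.0, 1.0),
    "stairs": (5.0, -2.0),
}

def objects_in_view(position, heading_deg, fov_deg=60.0, max_range=8.0):
    """Return names of map objects inside a cone-shaped field of view,
    nearest first, for a user at `position` facing `heading_deg`."""
    px, py = position
    visible = []
    for name, (ox, oy) in PRELOADED_MAP.items():
        dx, dy = ox - px, oy - py
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_range:
            continue  # too far (or on top of) the user
        bearing = math.degrees(math.atan2(dy, dx))
        # Smallest signed angle between the heading and the object bearing.
        diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= fov_deg / 2.0:
            visible.append((dist, name))
    return [name for _, name in sorted(visible)]

# A user at the origin facing +y sees the reception desk and the entrance
# door, but not the elevator or the stairs behind and beside them.
print(objects_in_view((0.0, 0.0), 90.0))
```

In the actual prototype, the answer would then be spoken back through the speech-synthesis module; here the list is simply printed.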
Journal: Expert Systems with Applications - Volume 46, 15 March 2016, Pages 129–138