Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6937574 | Computer Vision and Image Understanding | 2016 | 13 Pages |
Abstract
Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from limited resolution and dynamic range of induced visual percepts. This can make navigating complex environments difficult for users. We introduce semantic labeling as a technique to improve navigation outcomes for prosthetic vision users. We produce a novel egocentric vision dataset to demonstrate how semantic labeling can be applied to this problem. We also improve the speed of semantic labeling with sparse computation of unary potentials, enabling its use in real-time wearable assistive devices. We use simulated prosthetic vision to demonstrate the results of our technique. Our approach allows a prosthetic vision system to selectively highlight specific classes of objects in the user's field of view, improving the user's situational awareness.
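The highlighting idea described above can be illustrated with a small simulation. The sketch below is a minimal, hypothetical rendering step only: it assumes a per-pixel semantic label map is already available (the paper's actual labeling pipeline with sparse unary potentials is not reproduced here), and the function name, grid size, and brightness quantisation are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_prosthetic_view(labels, target_class, grid=(32, 32), levels=8):
    """Hypothetical sketch: render a low-resolution "phosphene" view that
    highlights one semantic class, roughly simulating the limited
    resolution and dynamic range of prosthetic vision.

    labels:       2D integer array of per-pixel class labels
                  (assumed to come from a semantic labeling stage).
    target_class: class id to highlight (e.g. a navigable surface).
    grid:         phosphene grid resolution (height, width).
    levels:       number of distinct brightness levels.
    """
    h, w = labels.shape
    gh, gw = grid
    # Binary mask of the class the user wants highlighted.
    mask = (labels == target_class).astype(float)
    # Average the mask over blocks, one block per simulated phosphene.
    ys = np.linspace(0, h, gh + 1).astype(int)
    xs = np.linspace(0, w, gw + 1).astype(int)
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            block = mask[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[i, j] = block.mean() if block.size else 0.0
    # Quantise to the limited number of brightness levels.
    return np.round(out * (levels - 1)) / (levels - 1)
```

In this toy setup, phosphenes covering the target class light up while the rest of the scene stays dark, which is the "selective highlighting" behaviour the abstract describes.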
Authors
Lachlan Horne, Jose Alvarez, Chris McCarthy, Mathieu Salzmann, Nick Barnes