
• Segmentation results show improvements over the current state of the art.
• First four-class segmentation and comparison on the SUNRGBD dataset.
• Demonstrates the successful introduction of the MMSSL framework to RGBD scene segmentation.
• Proposes new room-layout features for the problem domain.
• Presents a new strategy for aligning the depth map to the floor.
Depth images have opened new possibilities for computer vision researchers across the field. A prominent task is scene understanding and segmentation, with which the present work is concerned. In this paper, we present a procedure that combines well-known methods in a unified learning framework based on stacked classifiers; the benefits are twofold: on one hand, the system scales well to different types of complex features and, on the other, the use of stacked classifiers makes the proposed technique more accurate. The method consists of a random forest using random offset features in combination with a conditional random field (CRF) acting on a simple linear iterative clustering (SLIC) superpixel segmentation. The predictions of the CRF are filtered spatially by a multi-scale decomposition before being merged with the original feature set and fed to a stacked random forest that produces the final predictions. The model is tested on the renowned NYU-v2 dataset and the recently released SUNRGBD dataset. The approach shows that simple multimodal features, combined with multi-class multi-scale stacked sequential learners (MMSSL), can achieve slightly better performance than state-of-the-art methods on the same datasets. The results show an improvement of 2.3% over the base model when using MMSSL, demonstrating that the method is effective in this problem domain.
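The stacked-classifier pipeline described in the abstract can be sketched in miniature: a base classifier produces per-pixel class probabilities, those probabilities are smoothed at several spatial scales (standing in for the paper's multi-scale decomposition of the CRF output), and a second, stacked classifier is trained on the original features concatenated with the smoothed predictions. This is only an illustrative sketch on synthetic data; the feature dimensions, scales, and forest sizes are assumptions, not the paper's actual configuration, and the CRF/SLIC stages are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
H, W, n_classes = 32, 32, 3

# Synthetic "scene": three vertical label bands, plus noisy per-pixel features.
labels = np.repeat(np.arange(n_classes), W // n_classes + 1)[:W]
labels = np.tile(labels, (H, 1))
features = labels[..., None] + rng.normal(scale=1.5, size=(H, W, 4))

X = features.reshape(-1, 4)
y = labels.ravel()

# Stage 1: base random forest on per-pixel features.
base = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
proba = base.predict_proba(X).reshape(H, W, n_classes)

# Stage 2: multi-scale spatial filtering of the base predictions —
# Gaussian smoothing at a few scales approximates the multi-scale
# decomposition applied before merging with the original features.
scales = (1, 2, 4)
pyramid = [gaussian_filter(proba, sigma=(s, s, 0)) for s in scales]
context = np.concatenate(pyramid, axis=-1).reshape(-1, n_classes * len(scales))

# Stage 3: stacked random forest on original features + smoothed predictions.
stacked = RandomForestClassifier(n_estimators=50, random_state=0)
stacked.fit(np.hstack([X, context]), y)
acc = stacked.score(np.hstack([X, context]), y)
print(f"stacked training accuracy: {acc:.2f}")
```

The key design point mirrored here is that the second-stage learner sees both the raw features and spatially aggregated predictions, so it can correct isolated base-classifier errors using neighborhood context.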
Journal: Pattern Recognition Letters - Volume 80, 1 September 2016, Pages 208–215