| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 6864685 | Neurocomputing | 2018 | 8 | |
Abstract
Many problems in computer vision and pattern recognition can be posed as learning low-dimensional subspace structures from high-dimensional data, and subspace clustering is a commonly used subspace learning strategy. Existing subspace clustering models mainly adopt a deterministic loss function that describes a single noise type between an observed data matrix and its self-expressed form. However, the noise embedded in practical high-dimensional data is generally non-Gaussian and has a much more complex structure. To address this issue, this paper proposes a robust subspace clustering model that embeds the Mixture of Gaussians (MoG) noise modeling strategy into the low-rank representation (LRR) subspace clustering model. Owing to the universal approximation capability of MoG, the proposed MoG-LRR model adapts to a wider range of noise distributions than current methods. Additionally, a penalized likelihood method is incorporated into the model, yielding the PMoG-LRR model, so that the number of mixture components is selected automatically. A modified Expectation Maximization (EM) algorithm is designed to infer the parameters of the proposed PMoG-LRR model. The superiority of our method is demonstrated by extensive experiments on face clustering and motion segmentation datasets.
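To make the structure described in the abstract concrete, the following display is a minimal sketch of how the pieces plausibly fit together. It assumes the standard LRR self-expression constraint X = XZ + E with the residual entries modeled by a MoG density and a penalty term P(π, K) for choosing the number of components; the symbols Z, E, π_k, σ_k², and P are illustrative and this is not necessarily the paper's exact objective.

```latex
% A minimal sketch (assumed form, not the paper's exact objective):
% LRR self-expression with the residual modeled by a penalized MoG.
%   X : observed data matrix,  Z : self-representation,  E : residual/noise
%   pi_k, sigma_k^2 : mixing weights and variances of the K Gaussian components
%   P(pi, K) : penalty encouraging automatic selection of K
\begin{equation*}
\min_{Z,\,E,\,\{\pi_k,\sigma_k^2\}}\;
\|Z\|_{*}
\;-\;\sum_{i,j}\log\!\Big(\sum_{k=1}^{K}\pi_k\,
      \mathcal{N}\!\big(e_{ij}\,\big|\,0,\sigma_k^{2}\big)\Big)
\;+\;\mathcal{P}(\pi,K)
\quad\text{s.t.}\quad X = XZ + E .
\end{equation*}
```

Classical LRR would instead place a deterministic loss such as $\lambda\|E\|_{2,1}$ on the residual, which encodes a single fixed noise assumption; replacing it with the MoG negative log-likelihood is what lets the model approximate more complex, non-Gaussian noise, with the parameters estimated by a modified EM algorithm.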
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Jing Yao, Xiangyong Cao, Qian Zhao, Deyu Meng, Zongben Xu