Article ID: 566815
Journal: Signal Processing
Published Year: 2009
Pages: 12
File Type: PDF
Abstract

A two-stage clustering-then-ℓ1-optimization approach has often been used for sparse component analysis (SCA). The first challenging task of this approach is to estimate the basis matrix by cluster analysis. In this paper, a robust K-hyperline clustering (K-HLC) algorithm is developed for this task. The novelty of our method is that it not only performs hyperline clustering but is also capable of detecting the number of hidden hyperlines (or sparse components). K-HLC seamlessly integrates hyperline clustering and hyperline-number detection in a single algorithm. In addition, three strategies are proposed to tackle this problem: (1) reject outliers by overestimating the number of hyperlines; (2) escape from local minima by using a multilayer initialization; and (3) suppress noise by a multilayer K-HLC. Taking these strategies into account, the robust K-HLC procedure can be briefly described as follows: first, we overestimate the number of hyperlines; then, a confidence index is assigned to evaluate the significance of each hyperline. Subsequently, we determine the number of hyperlines by checking the gap in the sorted confidence indices. Finally, we retain the hyperlines with large confidence indices, in order of rank, and remove spurious ones with small confidence indices. The high performance of our clustering scheme is illustrated by extensive numerical experiments, including some challenging benchmarks, e.g., a very ill-conditioned basis matrix (a Hilbert matrix) and observations contaminated by strong outliers.
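As a rough illustration of the procedure sketched in the abstract, the following Python snippet shows a plain K-hyperline clustering step together with a confidence-index gap test. It is a minimal sketch, not the authors' implementation: the energy-based confidence measure, the largest-gap rule, and all function names are simplifying assumptions, and the multilayer initialization and outlier-rejection strategies are omitted.

```python
import numpy as np

def k_hyperline_clustering(X, K, n_iter=50, seed=0):
    """Cluster the columns of X (d x N) into K lines through the origin.

    Each cluster is a unit direction; a point is assigned to the line with
    the largest absolute projection (smallest orthogonal distance), and each
    direction is updated as the principal singular vector of its cluster.
    Simplified K-hyperline sketch, not the paper's robust multilayer version.
    """
    rng = np.random.default_rng(seed)
    d, N = X.shape
    # Initialize directions with randomly chosen, normalized data columns.
    W = X[:, rng.choice(N, K, replace=False)].astype(float)
    W /= np.linalg.norm(W, axis=0, keepdims=True)
    for _ in range(n_iter):
        # Assignment step: label each point by its largest |projection|.
        labels = np.argmax(np.abs(W.T @ X), axis=0)
        # Update step: principal left singular vector of each cluster.
        for k in range(K):
            Xk = X[:, labels == k]
            if Xk.shape[1] > 0:
                U, _, _ = np.linalg.svd(Xk, full_matrices=False)
                W[:, k] = U[:, 0]
    return W, labels

def detect_line_count(X, W, labels):
    """Assumed confidence index: signal energy captured by each hyperline.

    The number of hyperlines is guessed from the largest gap in the sorted
    confidence indices (a stand-in for the paper's criterion); the indices
    of the retained hyperlines are returned in order of significance.
    """
    K = W.shape[1]
    conf = np.zeros(K)
    for k in range(K):
        Xk = X[:, labels == k]
        if Xk.shape[1] > 0:
            conf[k] = np.sum((W[:, k] @ Xk) ** 2)
    order = np.argsort(conf)[::-1]
    sorted_conf = conf[order]
    gaps = sorted_conf[:-1] - sorted_conf[1:]
    n_lines = int(np.argmax(gaps)) + 1   # keep hyperlines before the gap
    return order[:n_lines], sorted_conf
```

In this sketch, one would deliberately overestimate K, run k_hyperline_clustering, and then use detect_line_count to keep only the significant directions, whose columns of W serve as the estimate of the basis matrix.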

Related Topics
Physical Sciences and Engineering; Computer Science; Signal Processing