Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6865265 | Neurocomputing | 2018 | 39 |
Abstract
We consider the problem of hierarchical sparse coding, where not only are a few groups of atoms active at a time but each group also enjoys internal sparsity. Current approaches typically achieve between-group sparsity via the ℓ1 penalty, so that many groups have small coefficients rather than being exactly zeroed out. These trivial groups make the model prone to overfitting noise and thereby harm the interpretability of the sparse representation. To this end, we reformulate the hierarchical sparse model from a Bayesian perspective, employing twofold priors: the spike-and-slab prior and the Laplacian prior. The former explicitly induces between-group sparsity, while the latter both induces within-group sparsity and keeps the reconstruction error small. We propose a nested prior that integrates the two to yield hierarchical sparsity. The resulting optimization problem can be solved to convergence in a few iterations via the proposed nested algorithm, which corresponds to the nested prior. In experiments, we evaluate the performance of our method on signal recovery, image inpainting, and sparse-representation-based classification, using simulated signals and two publicly available image databases. The results show that, compared with popular sparse-coding methods, the proposed method yields more concise representations and more reliable interpretations of the data.
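For concreteness, below is a minimal sketch of how such a nested (spike-and-slab plus Laplacian) prior can be written down, assuming the standard linear observation model y = Dx + ε with the dictionary's atoms partitioned into groups G_1, ..., G_B; the symbols z_b, π, and λ are illustrative notation, not taken from the paper, and the paper's exact formulation may differ.

```latex
% Sketch of a nested prior for hierarchical sparse coding (assumed notation):
% a Bernoulli spike-and-slab indicator per group, and a Laplacian slab on the
% coefficients of active groups.
\begin{align}
  z_b &\sim \mathrm{Bernoulli}(\pi),
      & b &= 1,\dots,B \quad \text{(group on/off indicator)} \\
  p(x_i \mid z_b = 0) &= \delta(x_i),
      & i &\in G_b \quad \text{(inactive group: coefficients exactly zero)} \\
  p(x_i \mid z_b = 1) &= \tfrac{\lambda}{2}\, e^{-\lambda |x_i|},
      & i &\in G_b \quad \text{(active group: Laplacian, within-group sparsity)}
\end{align}
% Under this prior, MAP estimation takes the form
% \min_{x,\,z} \; \|y - D x\|_2^2 + \lambda \|x\|_1
%   \quad \text{s.t.} \quad x_{G_b} = 0 \ \text{whenever} \ z_b = 0,
% so groups are either zeroed out entirely (spike) or internally sparse (slab),
% unlike an \ell_1 group penalty, which only shrinks trivial groups toward zero.
```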
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Zhang Yupei, Xiang Ming, Yang Bo