Article ID: 6866546
Journal: Neurocomputing
Published Year: 2014
Pages: 8
File Type: PDF
Abstract
Because of its excellent ability to characterize the sparsity of natural images, ℓ1-norm sparse representation (SR) is widely used to formulate the linear combination relationship in dictionary-learning-based face hallucination. However, owing to the inherently less sparse nature of noisy images, the Laplacian prior assumed by the ℓ1-norm is overly aggressive in terms of sparsity, which ultimately leads to significant degradation of hallucination performance in the presence of noise. To this end, we propose a moderately sparse prior model, a Gaussian-Laplacian mixture (GLM) distribution, and employ it to infer the optimal solution under the Bayesian framework. The resulting regularization method, known as elastic net (EN), not only maintains the same hallucination performance as SR in noise-free scenarios but also outperforms it markedly in the presence of noise. Experimental results on simulated and real-world noisy images show its superiority over several state-of-the-art methods.
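The abstract does not give the authors' solver, but the elastic net objective it refers to, min_x ½‖y − Dx‖² + λ1‖x‖₁ + ½λ2‖x‖², can be minimized with a standard proximal-gradient (ISTA) iteration. The sketch below is a generic illustration of that regularizer, not the paper's method; the dictionary `D`, observation `y`, and the λ values are hypothetical placeholders.

```python
import numpy as np

def elastic_net_ista(D, y, lam1=0.05, lam2=0.01, n_iter=500):
    """Proximal-gradient (ISTA) sketch for
    min_x 0.5*||y - D x||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2.
    Illustrative only -- not the solver used in the paper."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-term gradient
    t = 1.0 / L                            # step size
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - t * (D.T @ (D @ x - y))    # gradient step on the quadratic data term
        # prox of the elastic-net penalty: soft-threshold (l1 / Laplacian part)
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam1, 0.0)
        x /= (1.0 + t * lam2)              # shrinkage (l2 / Gaussian part)
    return x

# tiny demo on synthetic data: recover a sparse code over a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
y = D @ x_true + 0.05 * rng.standard_normal(32)
x_hat = elastic_net_ista(D, y)
```

The ℓ2 term is what distinguishes EN from plain SR: under noise, the Gaussian component of the GLM prior tempers the aggressive sparsity of the pure Laplacian/ℓ1 model.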
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence