Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
4638090 | 1631993 | 2016 | 18-page PDF | Free download |
Gradient descent methods are the most frequently used algorithms for computing regularized solutions of inverse problems. They are applied either directly to the discrepancy term, which measures the difference between the operator evaluation and the data, or to a regularized version incorporating suitable penalty terms. In its basic form, gradient descent converges slowly. We aim at extending different optimization schemes, which have recently been introduced for accelerating these approaches, to more general penalty terms. In particular, we work in a general infinite-dimensional Hilbert space setting and examine accelerated algorithms for regularization methods using total variation or sparsity constraints. To illustrate the efficiency of these algorithms, we apply them to a parameter identification problem in an elliptic partial differential equation using total variation regularization.
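As a rough illustration of the kind of acceleration the abstract refers to (not the paper's own algorithm, which is set in infinite-dimensional Hilbert spaces), the sketch below applies a FISTA-type accelerated proximal gradient method to a finite-dimensional discrepancy term ½‖Ax − b‖² with a sparsity (ℓ¹) penalty; the plain proximal gradient iteration would use the same soft-thresholding step without the Nesterov extrapolation. All names, dimensions, and parameter values here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=300):
    # Accelerated proximal gradient (FISTA) for
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of the discrepancy term
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Illustrative demo: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = fista(A, b, lam=0.1)
```

The extrapolation step is what improves the O(1/k) convergence rate of basic proximal gradient descent to O(1/k²); for a total variation penalty, the soft-thresholding step would be replaced by the (more expensive) TV proximal map.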
Journal: Journal of Computational and Applied Mathematics - Volume 298, 15 May 2016, Pages 105–122