| Article code | Journal code | Publication year | English article | Full-text version |
|---|---|---|---|---|
| 563534 | 1451939 | 2016 | 11-page PDF | Free download |
• We propose a variant of the NNLMS algorithm with balanced weight convergence rates.
• An accurate performance analysis is carried out for a general nonstationarity model.
• The online sparse system identification problem can be solved with the derived algorithm.
Statistical inference subject to nonnegativity constraints arises frequently in learning problems. The nonnegative least-mean-square (NNLMS) algorithm was derived to address such problems in an online manner. This algorithm builds on a fixed-point iteration strategy driven by the Karush–Kuhn–Tucker conditions. It was shown to provide low-variance estimates, but it suffers from unbalanced convergence rates across these estimates. In this paper, we address this problem by introducing a variant of the NNLMS algorithm. We provide a theoretical analysis of its behavior in terms of transient learning curve, steady-state performance and tracking performance. We also introduce an extension of the algorithm for online sparse system identification. Monte-Carlo simulations are conducted to illustrate the performance of the algorithm and to validate the theoretical results.
Journal: Signal Processing - Volume 128, November 2016, Pages 131–141
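To make the mechanism described in the abstract concrete, below is a minimal NumPy sketch of an NNLMS-style online update for nonnegative system identification, in which the LMS correction of each coefficient is scaled by that coefficient's current value so that nonnegative weights stay nonnegative. The function name `nnlms_identify`, the step size `eta`, the initialization and the toy identification setup are illustrative assumptions; the balanced-convergence variant and its performance analysis are given in the paper itself.

```python
import numpy as np

def nnlms_identify(x, d, n_taps, eta=0.01):
    """Illustrative NNLMS-style online update (step size and
    initialization are assumptions, not the paper's exact settings)."""
    w = np.full(n_taps, 1.0 / n_taps)  # nonnegative initial weights (assumed)
    u = np.zeros(n_taps)               # tapped-delay-line regressor
    for xn, dn in zip(x, d):
        u = np.roll(u, 1)
        u[0] = xn
        e = dn - w @ u                 # a priori estimation error
        # Scaling the LMS correction componentwise by w keeps the weights
        # nonnegative for a sufficiently small step size; small coefficients
        # therefore adapt slowly, which is the imbalance the paper addresses.
        w = w + eta * e * (w * u)
    return w

# Toy usage: identify a sparse nonnegative FIR system from noisy observations.
rng = np.random.default_rng(0)
w_true = np.array([0.5, 0.0, 0.3, 0.0, 0.2])
x = rng.standard_normal(20000)
d = np.convolve(x, w_true)[:x.size] + 0.01 * rng.standard_normal(x.size)
print(nnlms_identify(x, d, n_taps=5, eta=0.05))
```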