Article Code | Journal Code | Year | English Article | Full-Text Version |
---|---|---|---|---|
563153 | 875472 | 2013 | 12-page PDF | Free download |

This paper presents a quantized kernel least mean square algorithm with a fixed memory budget, named QKLMS-FB. To deal with the growing support inherent in online kernel methods, the proposed algorithm uses a pruning criterion, called the significance measure, based on a weighted contribution of the existing data centers. The basic idea is to discard the center with the smallest influence on the whole system whenever a new sample is added to the dictionary. The significance measure can be updated recursively at each step, which makes it suitable for online operation. Furthermore, the method requires no a priori knowledge about the data, and its computational complexity is linear in the number of centers. Experiments show that the proposed algorithm successfully prunes the least "significant" centers and preserves the important ones, resulting in a compact KLMS model with little loss in accuracy.
► An efficient pruning and growing strategy for designing a fixed-budget QKLMS is proposed.
► A priori knowledge of the input distribution is not required.
► Recursive calculation allows the significance measure to be updated in real time.
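The fixed-budget idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact method: the quantization step (merging a new sample into the nearest existing center) follows the standard QKLMS scheme, but the paper's recursively updated significance measure is replaced here by a simple proxy (coefficient magnitude weighted by the center's average kernel response), and all names (`QKLMSFB`, `eta`, `eps`, `budget`, `sigma`) are illustrative choices.

```python
import numpy as np

def gauss(x, c, sigma=1.0):
    """Gaussian kernel between input x and center c."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

class QKLMSFB:
    """Sketch of a quantized KLMS with a fixed memory budget."""

    def __init__(self, eta=0.5, eps=0.1, budget=20, sigma=1.0):
        self.eta = eta        # learning rate
        self.eps = eps        # quantization radius
        self.budget = budget  # maximum dictionary size
        self.sigma = sigma    # kernel width
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def _significance(self, i):
        # Simplified stand-in for the paper's significance measure:
        # |coefficient| times the center's mean kernel response.
        ki = np.mean([gauss(self.centers[i], c, self.sigma)
                      for c in self.centers])
        return abs(self.alphas[i]) * ki

    def update(self, x, y):
        err = y - self.predict(x)
        if self.centers:
            dists = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.eps:
                # Quantization: merge the update into the nearest center.
                self.alphas[j] += self.eta * err
                return err
        # Otherwise grow the dictionary with the new sample.
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.eta * err)
        if len(self.centers) > self.budget:
            # Pruning: discard the least significant center.
            k = min(range(len(self.centers)), key=self._significance)
            del self.centers[k], self.alphas[k]
        return err
```

Because pruning removes exactly one center per insertion once the budget is reached, the dictionary size never exceeds `budget`, keeping per-sample cost linear in the number of centers, as the abstract states.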
Journal: Signal Processing - Volume 93, Issue 9, September 2013, Pages 2759–2770