Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
412593 | 679657 | 2012 | 7-page PDF | Free download |
Distance metrics are of considerable importance in a variety of machine learning and pattern recognition applications. Neighborhood component analysis (NCA), one of the most successful metric learning algorithms, suffers from high computational cost, which makes it suitable only for small-scale classification tasks. To overcome this disadvantage, we propose a fast neighborhood component analysis (FNCA) method. For a given sample, FNCA adopts a local probability distribution model constructed from its K nearest neighbors in the same class and in different classes. We further extend FNCA to nonlinear metric learning scenarios using the kernel trick. Experimental results show that, compared with NCA, FNCA not only significantly increases the training speed but also achieves higher classification accuracy. Furthermore, comparative studies with state-of-the-art approaches on various real-world datasets also verify the effectiveness of the proposed linear and nonlinear FNCA methods.
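The speed-up idea the abstract describes can be illustrated with a small sketch: standard NCA defines, for each sample, softmax probabilities over *all* other samples, whereas an FNCA-style model restricts them to the sample's K nearest neighbors. The function below is a hypothetical minimal illustration of that restricted neighbor distribution, not the paper's exact formulation (the class-conditional split and the learned transform are omitted for brevity).

```python
import math

def local_neighbor_probabilities(X, K=3):
    """For each sample, compute softmax weights over its K nearest
    neighbors only, instead of over all n-1 other samples as in
    standard NCA. This reduces the per-sample cost from O(n) to O(K)
    (after the neighbor search), which is the key speed-up idea.
    Hypothetical sketch; the paper's exact model differs."""
    n = len(X)
    probs = []
    for i in range(n):
        # squared Euclidean distances to every other sample
        d2 = [(sum((a - b) ** 2 for a, b in zip(X[i], X[j])), j)
              for j in range(n) if j != i]
        d2.sort()
        nearest = d2[:K]                       # keep only K neighbors
        w = [math.exp(-d) for d, _ in nearest] # NCA-style kernel weights
        s = sum(w)
        probs.append([(j, wi / s) for (_, j), wi in zip(nearest, w)])
    return probs

# toy usage: two well-separated clusters
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
p = local_neighbor_probabilities(pts, K=2)
print(len(p[0]))                               # 2 neighbors kept
print(round(sum(w for _, w in p[0]), 6))       # weights sum to 1.0
```

In the full method the learned linear transform (or, in the kernelized extension, a kernel-induced feature map) would be applied before computing distances, and the neighbors would be split by class label to form the objective.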
Journal: Neurocomputing - Volume 83, 15 April 2012, Pages 31–37