Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
4948477 | Neurocomputing | 2016 | 10 Pages |
Abstract
This paper presents a distance metric learning method for k-nearest neighbors regression. We define triplet-based constraints, built from the neighborhood of each training instance, to learn the distance metric. The resulting optimization problem can be formulated as a convex quadratic program. A drawback of quadratic programming, however, is that it does not scale well to large problems. To reduce the time complexity of training, we propose a novel dual coordinate descent method for this type of problem. Experimental results on several regression data sets show that our method achieves competitive performance compared with state-of-the-art distance metric learning methods, while being an order of magnitude faster.
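The general idea of triplet-based metric learning for kNN regression can be sketched as follows. This is a simplified illustration only: it learns a diagonal Mahalanobis metric with plain subgradient descent on a triplet hinge loss, not the paper's convex QP formulation or its dual coordinate descent solver, and the neighborhood-based triplet rule and all function names here are hypothetical.

```python
import numpy as np

def build_triplets(X, y, k=3):
    """For each anchor i, pick 2k Euclidean neighbors, then treat the k
    with target values closest to y[i] as 'similar' and the k with the
    most different target values as 'dissimilar'. (Hypothetical rule;
    the paper's exact triplet construction may differ.)"""
    triplets = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        nbrs = sorted(np.argsort(d)[:2 * k], key=lambda j: abs(y[j] - y[i]))
        for j in nbrs[:k]:           # neighbors with similar targets
            for l in nbrs[-k:]:      # neighbors with dissimilar targets
                if abs(y[l] - y[i]) > abs(y[j] - y[i]):
                    triplets.append((i, j, l))
    return triplets

def learn_diag_metric(X, y, k=3, lr=0.01, epochs=50, margin=1.0):
    """Learn a diagonal Mahalanobis metric M = diag(w), w >= 0, by
    subgradient descent on the triplet hinge loss
        max(0, margin + d_M(i, j) - d_M(i, l)),
    i.e. similar-target neighbors should end up closer than
    dissimilar-target ones by at least `margin`."""
    w = np.ones(X.shape[1])
    triplets = build_triplets(X, y, k)
    for _ in range(epochs):
        for i, j, l in triplets:
            dij = (X[i] - X[j]) ** 2   # per-feature squared differences
            dil = (X[i] - X[l]) ** 2
            if margin + w @ dij - w @ dil > 0:   # hinge is active
                w -= lr * (dij - dil)            # subgradient step
                np.maximum(w, 0.0, out=w)        # keep the metric PSD
    return w
```

On synthetic data where only the first feature determines the target, such a learner should assign that feature the largest weight, which is the behavior that makes the subsequent kNN regression more accurate.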
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Bac Nguyen, Carlos Morell, Bernard De Baets