Article ID: 4970004
Journal: Pattern Recognition Letters
Published Year: 2017
Pages: 10
File Type: PDF
Abstract
In this paper, we devise a kernel version of the recently introduced keep-it-simple-and-straightforward metric learning method, thereby extending its applicability to scenarios where the input data is non-linearly distributed. To this end, we make use of infinite-dimensional covariance matrices and show how a matrix in a reproducing kernel Hilbert space can be projected onto the positive cone efficiently. In particular, we propose two techniques for projecting onto the positive cone in a reproducing kernel Hilbert space. The first method, though only approximating the solution, admits a closed-form, analytic formulation. The second solution is more accurate but requires Riemannian optimization techniques. Nevertheless, both solutions scale up well, as our empirical evaluations suggest. For the sake of completeness, we also employ the Nyström method to approximate a reproducing kernel Hilbert space before learning a metric. Our experiments show that, compared to state-of-the-art metric learning algorithms, working directly in a reproducing kernel Hilbert space leads to more robust and better performance.
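The paper's RKHS projection operators are not reproduced in the abstract, but the underlying operation it kernelizes has a well-known finite-dimensional analogue: the Euclidean projection of a symmetric matrix onto the positive semidefinite cone, obtained by eigendecomposition and clipping of negative eigenvalues. The sketch below illustrates only this standard finite-dimensional projection (function name and all code are illustrative assumptions, not the authors' RKHS formulation):

```python
import numpy as np

def project_to_psd_cone(M):
    """Euclidean (Frobenius-norm) projection of a square matrix onto the
    PSD cone: symmetrize, eigendecompose, clip negative eigenvalues.

    Illustrative finite-dimensional analogue only; the paper performs
    this projection for operators in a reproducing kernel Hilbert space.
    """
    # Symmetrize first so eigh is applicable.
    M = (M + M.T) / 2.0
    # Eigendecomposition of the symmetric matrix.
    w, V = np.linalg.eigh(M)
    # Clip negative eigenvalues to zero, then reassemble.
    w = np.clip(w, 0.0, None)
    return (V * w) @ V.T
```

For example, projecting `diag(2, -1)` zeroes out the negative eigenvalue and returns `diag(2, 0)`; a matrix that is already PSD is returned unchanged (up to numerical precision).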
Related Topics
Physical Sciences and Engineering Computer Science Computer Vision and Pattern Recognition