Article ID | Journal | Published Year | Pages
---|---|---|---
4946898 | Neurocomputing | 2017 | 21
Abstract
Modern nonlinear dimensionality reduction (DR) techniques project high-dimensional data to low dimensions for visual inspection. Provided the intrinsic data dimensionality is larger than two, DR necessarily faces information loss and the problem becomes ill-posed. Discriminative dimensionality reduction (DiDi) offers one intuitive way to reduce this ambiguity: it allows a practitioner to specify what is relevant and what should be regarded as noise by means of intuitive auxiliary information such as class labels. One powerful DiDi method relies on a change of the data metric based on the Fisher information. So far, this technique has been presented for vectorial data only. The aim of this contribution is to extend the technique, by means of kernelisation, to more general data structures that are characterised in terms of pairwise similarities only. We demonstrate that the Fisher metric can be computed in kernel space, and that it can be efficiently integrated into modern DR technologies such as t-SNE or its faster Barnes-Hut-SNE variant. We demonstrate the performance of the approach in a variety of benchmarks.
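To illustrate the general idea of discriminative dimensionality reduction described in the abstract, the following is a minimal sketch: a supervised metric change followed by a t-SNE embedding. It is an assumption-laden simplification, not the paper's method — here the Fisher-metric transformation is approximated by an LDA projection, and the dataset (`load_digits`) is chosen purely for illustration.

```python
# Hedged sketch of discriminative DR: a supervised transformation of the
# data metric (approximated here by LDA, NOT the paper's Fisher-metric
# construction), followed by an unsupervised t-SNE embedding to 2-D.
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import TSNE

# A small sample keeps the example fast.
X, y = load_digits(return_X_y=True)
X, y = X[:300], y[:300]

# Supervised step: project onto directions that separate the classes,
# so that class-relevant variation dominates the metric.
lda = LinearDiscriminantAnalysis(n_components=9).fit(X, y)
X_disc = lda.transform(X)

# Unsupervised step: embed the discriminative representation in 2-D.
emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_disc)
print(emb.shape)
```

The resulting two-dimensional embedding tends to separate the labelled classes more cleanly than t-SNE applied to the raw features, which is the practical benefit DiDi aims for.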
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Alexander Schulz, Johannes Brinkrolf, Barbara Hammer