Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6865600 | Neurocomputing | 2015 | 7 Pages |
Abstract
The hubness phenomenon is a recently discovered aspect of the curse of dimensionality. Hub objects have a small distance to an exceptionally large number of data points, while anti-hubs lie far from all other data points. A closely related problem is the concentration of distances in high-dimensional spaces. Previous work has already advocated the use of fractional ℓp norms instead of the ubiquitous Euclidean norm to avoid the negative effects of distance concentration. However, which exact fractional norm to use is a largely unsolved problem. The contribution of this work is an empirical analysis of the relation of different ℓp norms and hubness. We propose an unsupervised approach for choosing an ℓp norm which minimizes hubs while simultaneously maximizing nearest neighbor classification accuracy. Our approach is evaluated on seven high-dimensional data sets and compared to three approaches that re-scale distances to avoid hubness.
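The two ingredients of the abstract can be illustrated with a short sketch: a fractional ℓp distance (a Minkowski-style dissimilarity with p < 1, which is not a true metric), and a standard hubness measure, the skewness of the k-occurrence distribution N_k (how often each point appears among the k nearest neighbors of others). This is a minimal illustration of the general concepts, not the paper's exact procedure; the function names and the choice of skewness as the criterion are assumptions.

```python
import numpy as np

def lp_distance(x, y, p):
    # Fractional Minkowski dissimilarity; for p < 1 this is not a true metric
    # (the triangle inequality fails), but it can mitigate distance concentration.
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def hubness_skewness(X, k=5, p=2.0):
    # Skewness of the k-occurrence distribution N_k: a common hubness measure
    # (an assumption here; the paper's exact selection criterion may differ).
    n = X.shape[0]
    D = np.array([[lp_distance(X[i], X[j], p) for j in range(n)]
                  for i in range(n)])
    np.fill_diagonal(D, np.inf)                     # exclude self-neighbors
    nn = np.argsort(D, axis=1)[:, :k]               # k nearest neighbors per point
    counts = np.bincount(nn.ravel(), minlength=n)   # N_k: k-occurrence counts
    m, s = counts.mean(), counts.std()
    return float(np.mean((counts - m) ** 3) / s ** 3) if s > 0 else 0.0
```

An unsupervised search in the spirit of the abstract would evaluate `hubness_skewness(X, k, p)` over a grid of p values and prefer the p with the lowest skewness (fewer hubs).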
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Arthur Flexer, Dominik Schnitzer