Article code: 530269
Journal code: 869755
Publication year: 2015
English article: 13 pages, PDF
Full-text version: Free download
English title (ISI paper)
Iterative Nearest Neighbors
Keywords
Iterative nearest neighbors, Least squares, Sparse representation, Collaborative representation, Classification, Dimensionality reduction
Related topics
Engineering and Basic Sciences > Computer Engineering > Computer Vision and Pattern Recognition
English abstract


• We propose the Iterative Nearest Neighbors (INN) representation and its refined variant.
• INN performs on par with or better than MP and OMP on the sparse signal recovery task.
• We derive INN-based dimensionality reduction and classification methods.
• We use face (AR), traffic sign (GTSRB), scene (Scene-15), and object class (PASCAL VOC) benchmarks.
• For low-dimensional data, INN performs on par with SR while its running times stay close to those of NN.

Representing data as a linear combination of a set of selected known samples is of interest for various machine learning applications such as dimensionality reduction or classification. k-Nearest Neighbors (kNN) and its variants are still among the best-known and most often used techniques. Some popular richer representations are Sparse Representation (SR), based on solving an l1-regularized least squares formulation, Collaborative Representation (CR), based on l2-regularized least squares, and Locally Linear Embedding (LLE), based on an l1-constrained least squares problem. We propose a novel sparse representation, the Iterative Nearest Neighbors (INN). It combines the power of SR and LLE with the computational simplicity of kNN. We empirically validate our representation in terms of sparse support signal recovery and compare with Matching Pursuit (MP) and Orthogonal Matching Pursuit (OMP), two similar iterative methods. We also test our method in terms of dimensionality reduction and classification, using standard benchmarks for faces (AR), traffic signs (GTSRB), and objects (PASCAL VOC 2007). INN compares favorably to NN, MP, and OMP, and performs on par with CR and SR, while being orders of magnitude faster than the latter two. On the downside, INN does not scale well to higher-dimensional data.
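For reference, the three least-squares formulations the abstract contrasts can be written compactly. This is a summary in our own notation (query q, sample matrix X with one sample per column, coefficient vector w), not text taken from the paper:

```latex
% SR: l1-regularized least squares (Lasso-style)
\min_{w} \; \|q - Xw\|_2^2 + \lambda \|w\|_1
% CR: l2-regularized least squares (ridge-style)
\min_{w} \; \|q - Xw\|_2^2 + \lambda \|w\|_2^2
% LLE: least squares under an l1-type (sum-to-one) constraint
\min_{w} \; \|q - Xw\|_2^2 \quad \text{s.t.} \quad \textstyle\sum_i w_i = 1
```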
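The abstract does not spell out the INN iteration itself, so the following is a minimal sketch of how an INN-style coding can combine nearest-neighbor selection with fixed, geometrically decaying weights. The function name, the update q ← q + λ(q − s), and the weights λ/(1+λ)^(i+1) reflect our reading of the method and are assumptions for illustration, not the authors' reference implementation:

```python
import numpy as np

def inn_code(q, X, lam=0.05, K=30):
    """Hypothetical INN-style coding sketch (not the authors' code).

    q   : (d,) query vector
    X   : (n, d) pool of known samples, one sample per row
    lam : regularization parameter (assumed small, e.g. 0.05)
    K   : number of iterations / selected neighbors

    The i-th selected sample gets the fixed weight lam / (1 + lam)**(i + 1),
    so the weights sum to ~1 as K grows and q is approximated by w @ X.
    """
    w = np.zeros(X.shape[0])
    q_i = q.astype(float).copy()
    for i in range(K):
        # nearest pool sample to the current, residual-corrected query
        j = int(np.argmin(np.linalg.norm(X - q_i, axis=1)))
        w[j] += lam / (1.0 + lam) ** (i + 1)
        # push the query away from the selected sample, so the next
        # nearest-neighbor search targets the remaining residual
        q_i += lam * (q_i - X[j])
    return w

# Toy usage: reconstruct a random query from a random pool
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))
q = rng.standard_normal(16)
w = inn_code(q, X)
print("reconstruction error:", np.linalg.norm(q - w @ X))
```

With a large λ the first weight λ/(1+λ) dominates and the coding approaches plain 1-NN; with a small λ the weights decay slowly and many neighbors contribute, which is what gives the representation its SR/LLE-like richness at kNN-like cost.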

Publisher
Database: Elsevier - ScienceDirect
Journal: Pattern Recognition - Volume 48, Issue 1, January 2015, Pages 60–72