Article ID | Journal | Published Year | Pages
---|---|---|---
439137 | Theoretical Computer Science | 2008 | 19
Abstract
We propose a method of unsupervised learning from stationary, vector-valued processes. A projection to a low-dimensional subspace is selected on the basis of an objective function which rewards data variance and penalizes the variance of the velocity vector, thus exploiting the short-time dependencies of the process. We prove bounds on the estimation error of the objective in terms of the β-mixing coefficients of the process. It is also shown that maximizing the objective minimizes an error bound for simple classification algorithms on a generic class of learning tasks. Experiments with image recognition demonstrate the algorithm's ability to learn geometrically invariant feature maps.
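The trade-off described above — rewarding data variance while penalizing velocity variance — can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name `slow_projection`, the use of the eigenvectors of the difference of the two covariance matrices, and the toy data are all assumptions for exposition.

```python
import numpy as np

def slow_projection(X, n_components=2):
    """Illustrative sketch: X is a (T, d) array of consecutive observations
    of a stationary process; returns a (d, n_components) projection matrix."""
    Xc = X - X.mean(axis=0)        # center the data
    C = Xc.T @ Xc / len(Xc)        # data covariance (reward term)
    V = np.diff(Xc, axis=0)        # velocity vectors x_{t+1} - x_t
    D = V.T @ V / len(V)           # velocity covariance (penalty term)
    # One way to maximize tr(P^T C P) - tr(P^T D P) over orthonormal P:
    # take the top eigenvectors of the symmetric matrix C - D.
    w, U = np.linalg.eigh(C - D)
    return U[:, np.argsort(w)[::-1][:n_components]]

# Toy usage: a slowly varying coordinate embedded among fast noise dimensions.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
X = np.column_stack([np.sin(t),
                     rng.normal(scale=0.5, size=500),
                     rng.normal(scale=0.5, size=500)])
P = slow_projection(X, n_components=1)
print(P.shape)  # (3, 1); the projection concentrates on the slow coordinate
```

On this toy process the smooth sinusoid has high data variance but a small velocity, so the recovered direction loads mainly on the first coordinate.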
Related Topics: Physical Sciences and Engineering › Computer Science › Computational Theory and Mathematics