Article ID: 408868
Journal: Neurocomputing
Published Year: 2008
Pages: 8
File Type: PDF
Abstract

We study data fusion under the assumption that data source-specific variation is irrelevant and only shared variation is relevant. Traditionally, the shared variation has been sought by maximizing a dependency measure, such as the correlation of linear projections in canonical correlation analysis (CCA). In this traditional framework it is hard to tackle overfitting and model order selection, and thus we turn to probabilistic generative modeling, which makes all tools of Bayesian inference applicable. We introduce a family of probabilistic models for the same task, and present conditions under which they seek dependency. We show that probabilistic CCA is a special case of the model family, and derive a new dependency-seeking clustering algorithm as another example. The solution is computed with variational Bayes.
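
For context, the sketch below illustrates the classical CCA baseline the abstract refers to: finding linear projections of two views whose correlation is maximal. It is not the paper's probabilistic model or its variational Bayes algorithm; the function name cca, the variables X, Y, and the regularization parameter reg are illustrative assumptions, not the authors' notation.

# Minimal sketch of classical CCA via SVD of the whitened cross-covariance.
# Illustrative only; not the paper's probabilistic formulation.
import numpy as np

def cca(X, Y, n_components=1, reg=1e-6):
    # Center both views
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Within-view and cross-view covariances (small ridge for stability)
    Cxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)
    # Whitening transforms: Wx.T @ Cxx @ Wx = I (via Cholesky factors)
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    # Singular values of the whitened cross-covariance are the canonical correlations
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A = Wx @ U[:, :n_components]      # projection directions for view X
    B = Wy @ Vt.T[:, :n_components]   # projection directions for view Y
    return A, B, s[:n_components]

# Example: two noisy views sharing a one-dimensional latent signal
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                                # shared variation
X = z @ rng.normal(size=(1, 5)) + 0.5 * rng.normal(size=(500, 5))
Y = z @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(500, 4))
A, B, corrs = cca(X, Y, n_components=1)
print("top canonical correlation:", corrs[0])

In the paper's framing, this maximization view is replaced by a generative model of the shared variation, so that Bayesian tools handle overfitting and model order selection.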

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence