| Article ID | Journal | Published Year | Pages |
|---|---|---|---|
| 412884 | Neurocomputing | 2010 | 10 |
Abstract
We propose a simple transformation of the hidden states in variational Bayesian factor analysis models to speed up the learning procedure. The speed-up is achieved by a proper parameterization of the posterior approximation that allows joint optimization of its individual factors, which makes the transformation theoretically justified. We derive the transformation formulae for variational Bayesian factor analysis and show experimentally that it can significantly improve the rate of convergence. The proposed transformation essentially performs centering and whitening of the hidden factors while taking the posterior uncertainties into account. Similar transformations can be applied to other variational Bayesian factor analysis models as well.
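The centering-and-whitening idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's derivation: it assumes a factor analysis posterior summarized by factor means `x_mean`, a shared factor covariance `x_cov`, and loading means `w_mean` (all names are assumptions), and it shows how a rotation can make the factors zero-mean with identity second moment while compensating the loadings so the reconstruction of the centered factors is unchanged. In the full variational Bayesian treatment the subtracted mean would be absorbed into a bias term, which is omitted here.

```python
import numpy as np

def center_and_whiten(x_mean, x_cov, w_mean):
    """Sketch of centering and whitening of hidden factors (illustrative names).

    x_mean : (N, D) posterior means of the hidden factors
    x_cov  : (D, D) posterior covariance shared by the factor vectors
    w_mean : (M, D) posterior mean of the loading matrix
    """
    N, D = x_mean.shape
    # Centering: subtract the empirical mean of the factor means.
    # (In a full VB model this mean would be moved into a bias term.)
    mu = x_mean.mean(axis=0)
    x_mean = x_mean - mu
    # Second moment of the factors, including posterior uncertainty.
    S = x_mean.T @ x_mean / N + x_cov
    # Whitening transform R with R @ S @ R.T = I, via Cholesky S = L L^T.
    L = np.linalg.cholesky(S)
    R = np.linalg.inv(L)
    # Rotate the factor posterior ...
    x_mean = x_mean @ R.T
    x_cov = R @ x_cov @ R.T
    # ... and apply the inverse transform to the loadings, so that the
    # mean reconstruction W x of the centered factors is preserved.
    w_mean = w_mean @ L
    return x_mean, x_cov, w_mean
```

After the transform, the factors' total second moment (mean outer products plus posterior covariance) is the identity, which is the "whitening taking into account the posterior uncertainties" mentioned in the abstract.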
Related Topics
- Physical Sciences and Engineering
- Computer Science
- Artificial Intelligence
Authors
Jaakko Luttinen, Alexander Ilin