Article ID: 6863559
Journal: Neurocomputing
Published Year: 2018
Pages: 8
File Type: PDF
Abstract
Hidden Markov models (HMMs) are a popular approach for modeling continuous sequential data, typically based on the assumption of Gaussian-distributed observations. A significant issue confronting HMMs with Gaussian conditional densities is the effective modeling of high-dimensional observations without becoming prone to overfitting or numerical singularities. To this end, one can resort to extracting lower-dimensional latent variable representations of the observed high-dimensional data, as part of the inference algorithm of the postulated HMM. Factor analysis (FA) is a well-established linear latent variable scheme that can be employed for this purpose; it models the covariances between the elements of multivariate observations under a set of linear assumptions. Recently, it has been proposed that FA can be effectively generalized under an efficient large-margin Bayesian inference perspective, namely maximum entropy discrimination (MED). This work capitalizes on these recent findings to derive an effective HMM-driven sequential data modeling framework for high-dimensional data. Our proposed approach extracts lower-dimensional latent variable representations of observed high-dimensional data, taking into account the large-margin principle. On this basis, it postulates that the temporal dynamics of the data are conditioned on the inferred values of these latent variables. We devise efficient mean-field inference algorithms for our model, and demonstrate its advantages through a set of experiments.
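To make the core idea concrete, the following is a minimal illustrative sketch (not the authors' MED-based algorithm) of an HMM whose Gaussian emissions carry an FA covariance structure, Sigma_k = Lambda_k Lambda_k^T + Psi_k. Restricting each state's covariance to this low-rank-plus-diagonal form is what keeps high-dimensional emission densities well-conditioned; all names and parameter values below (forward_loglik, the dimensions K, D, q, T, the random parameters) are hypothetical choices for illustration only.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative sketch: HMM with factor-analysis (FA) structured Gaussian
# emissions, Sigma_k = Lambda_k @ Lambda_k.T + diag(Psi_k). This is NOT the
# paper's MED-based inference; it only shows how FA constrains
# high-dimensional emission covariances to avoid singularities.

rng = np.random.default_rng(0)
K, D, q, T = 2, 10, 3, 50   # states, observation dim, latent dim, length

# Random model parameters (hypothetical, for demonstration).
pi = np.full(K, 1.0 / K)            # initial state probabilities
A = np.full((K, K), 1.0 / K)        # state transition matrix
mu = rng.normal(size=(K, D))        # state-wise emission means
Lam = rng.normal(size=(K, D, q))    # factor loadings per state
Psi = np.ones((K, D))               # diagonal noise variances per state

X = rng.normal(size=(T, D))         # a dummy observation sequence

def forward_loglik(X, pi, A, mu, Lam, Psi):
    """Scaled forward algorithm under FA-structured Gaussian emissions."""
    T, K = X.shape[0], pi.shape[0]
    # Emission likelihoods with low-rank-plus-diagonal covariance.
    B = np.empty((T, K))
    for k in range(K):
        Sigma_k = Lam[k] @ Lam[k].T + np.diag(Psi[k])
        B[:, k] = multivariate_normal.pdf(X, mean=mu[k], cov=Sigma_k)
    # Scaled forward recursion to avoid numerical underflow.
    alpha = pi * B[0]
    c = alpha.sum(); alpha /= c
    loglik = np.log(c)
    for t in range(1, T):
        alpha = (alpha @ A) * B[t]
        c = alpha.sum(); alpha /= c
        loglik += np.log(c)
    return loglik

print("log-likelihood:", forward_loglik(X, pi, A, mu, Lam, Psi))
```

In a full treatment, the loadings and noise terms would be learned (here, via the paper's large-margin MED formulation rather than maximum likelihood), but the covariance parameterization above is the common structural ingredient.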
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors