Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
397195 | International Journal of Approximate Reasoning | 2007 | 20 | |
Sequential statistical models, such as dynamic Bayesian networks and, more specifically, hidden Markov models, describe stochastic processes over time. In this paper, we study for these models the effect of consecutive similar observations on the posterior probability distribution of the represented process. We show that, given such observations, the posterior distribution converges to a limit distribution. Building upon the rate of this convergence, we further show that, given a desired level of accuracy, part of the inference can be forestalled. To evaluate our theoretical results, we study their implications for a real-life model from the medical domain and for a benchmark model for agricultural purposes. Our results indicate that, whenever consecutive similar observations arise, the computational requirements of inference in Markovian models can be drastically reduced.
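The convergence phenomenon described in the abstract can be illustrated with a small filtering experiment. The sketch below, which is not taken from the paper, uses a hypothetical two-state hidden Markov model with made-up transition and observation probabilities: when the same observation symbol is fed in repeatedly, the forward (filtering) update amounts to repeated application of one fixed matrix, so the normalised posterior approaches the dominant eigenvector of that matrix, which is the limit distribution, at a geometric rate.

```python
import numpy as np

# Hypothetical 2-state HMM; all numbers are illustrative assumptions,
# not parameters from the paper's medical or agricultural models.
T = np.array([[0.9, 0.1],    # P(x_{t+1} | x_t = state 0)
              [0.2, 0.8]])   # P(x_{t+1} | x_t = state 1)
O = np.array([[0.7, 0.3],    # P(y | x = state 0)
              [0.1, 0.9]])   # P(y | x = state 1)

prior = np.array([0.5, 0.5]) # initial belief over the hidden state
obs = 1                      # the same observation symbol, repeated

# Forward (filtering) step for a repeated observation: predict with T,
# condition on the observation, renormalise. The whole step is one
# fixed linear map M followed by normalisation.
M = np.diag(O[:, obs]) @ T.T

belief = prior
for t in range(1, 21):
    belief = M @ belief
    belief /= belief.sum()
    print(f"t={t:2d}  posterior={belief}")

# The limit distribution is the normalised dominant eigenvector of M
# (Perron-Frobenius); the gap between the two eigenvalues governs how
# quickly the iterates reach it.
eigvals, eigvecs = np.linalg.eig(M)
limit = np.abs(eigvecs[:, np.argmax(np.abs(eigvals))])
limit /= limit.sum()
print("limit distribution:", limit)
```

Once the posterior is within the desired tolerance of this limit, further forward steps for the same observation change it negligibly, which is the sense in which part of the inference can be forestalled.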