Article ID: 4951037
Journal: Journal of Computational Science
Published Year: 2017
Pages: 15
File Type: PDF
Abstract
We study a variance reduction strategy based on control variables for simulating the averaged macroscopic behavior of a stochastic slow-fast system. We assume that this averaged behavior can be written in terms of a few slow degrees of freedom, and that the fast dynamics is ergodic for every fixed value of the slow variable. The time derivative of the averaged dynamics can then be approximated by a Markov chain Monte Carlo method. The variance-reduced scheme introduced here uses the previous time instant as a control variable. We analyze the variance and bias of the proposed estimator and illustrate its performance on a linear and a nonlinear model problem.
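The abstract describes the estimator only in words. The following minimal Python sketch illustrates the generic idea of reusing the previous time instant as a control variate for the Monte Carlo estimate of the averaged drift; the toy slow-fast model, the observable f, and all parameter values are assumptions made for this illustration and are not the authors' scheme or their model problems.

```python
# Illustrative sketch (assumed toy model, not the paper's scheme):
#   dX = Y^2 dt                       (slow variable)
#   dY = -(Y - X)/eps dt + sqrt(2/eps) dW   (fast variable)
# For fixed X = x the fast process is ergodic with invariant law N(x, 1),
# so the averaged slow dynamics is dX/dt = F(x) with F(x) = E[Y^2 | X=x] = x**2 + 1.
import numpy as np

rng = np.random.default_rng(0)


def f(y):
    """Observable of the fast variable; its average over N(x, 1) is F(x) = x**2 + 1."""
    return y ** 2


def crude_estimator(x, M):
    """Plain Monte Carlo estimate of F(x) from M fresh samples of the invariant law."""
    xi = rng.standard_normal(M)
    return np.mean(f(x + xi))


def cv_estimator(x, x_prev, F_prev, M):
    """Control-variate estimate of F(x) using the previous time instant.

    Only the difference F(x) - F(x_prev) is estimated by Monte Carlo, with the
    same random numbers reused at both slow values (common random numbers);
    the already available estimate F_prev is then added back. The difference
    has small variance when x is close to x_prev, but reusing F_prev feeds the
    error of earlier steps into later ones, i.e. it introduces a bias.
    """
    xi = rng.standard_normal(M)
    diff = np.mean(f(x + xi) - f(x_prev + xi))
    return F_prev + diff


# Forward Euler on the averaged equation with each drift estimator.
dt, n_steps, M = 0.05, 10, 100

# Crude Monte Carlo drift, re-estimated from scratch at every step.
x_crude = 0.5
for _ in range(n_steps):
    x_crude += dt * crude_estimator(x_crude, M)

# Control-variate recursion: the first step is crude, later steps only
# estimate drift increments relative to the previous time instant.
x_prev = 0.5
F_prev = crude_estimator(x_prev, M)
x_cv = x_prev + dt * F_prev
for _ in range(n_steps - 1):
    F_hat = cv_estimator(x_cv, x_prev, F_prev, M)
    x_prev, F_prev = x_cv, F_hat
    x_cv += dt * F_hat

# Averaged equation dX/dt = X^2 + 1 has the exact solution X(t) = tan(t + arctan(X0)).
exact = np.tan(dt * n_steps + np.arctan(0.5))
print(f"exact {exact:.4f}  crude MC {x_crude:.4f}  control variate {x_cv:.4f}")
```

In this sketch the variance of the estimated drift increment scales with the squared step size of the slow variable, which is much smaller than the variance of the crude estimator, while the recursive reuse of F_prev is what produces the bias that the abstract says is analyzed alongside the variance.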
Related Topics
Physical Sciences and Engineering; Computer Science; Computational Theory and Mathematics
Authors