Article ID: 4608687
Journal: Journal of Complexity
Published Year: 2013
Pages: 23 Pages
File Type: PDF
Abstract

We consider Monte Carlo algorithms for computing an integral θ = ∫ f dπ which is positive but can be arbitrarily close to 0. It is assumed that we can generate a sequence X_n of uniformly bounded random variables with expectation θ. An estimator θ̂ = θ̂(X_1, X_2, …, X_N) is called an (ε, α)-approximation if it has fixed relative precision ε at a given level of confidence 1 − α, that is, it satisfies P(|θ̂ − θ| ≤ εθ) ≥ 1 − α for all problem instances. Such an estimator exists only if we allow the sample size N to be random and adaptively chosen.

We propose an (ε, α)-approximation for which the cost, that is, the expected number of samples, satisfies E N ∼ 2 ln α⁻¹/(θε²) as ε → 0 and α → 0. The main tool in the analysis is a new exponential inequality for randomly stopped sums. We also derive a lower bound on the worst-case complexity of (ε, α)-approximation. This bound behaves as 2 ln α⁻¹/(θε²). Thus the worst-case efficiency of our algorithm, understood as the ratio of the lower bound to the expected sample size E N, approaches 1 as ε → 0 and α → 0.

An L² analogue is to find θ̂ such that E(θ̂ − θ)² ≤ ε²θ². We derive an algorithm with expected cost E N ∼ 1/(θε²) as ε → 0. To this end, we prove an inequality for the mean square error of randomly stopped sums. A corresponding lower bound also behaves as 1/(θε²). The worst-case efficiency of our algorithm, in the L² sense, approaches 1 as ε → 0.
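To illustrate why a random, adaptively chosen sample size yields a cost of order 2 ln α⁻¹/(θε²), here is a minimal inverse-sampling sketch: draw bounded samples until the running sum reaches a threshold T ≈ 2 ln(1/α)/ε², so the expected number of draws is roughly T/θ. The constants and stopping rule here are illustrative assumptions, not the paper's exact algorithm or its exponential inequality.

```python
import math
import random

def sequential_estimate(sample, eps, alpha):
    """Sequential (inverse-sampling) sketch of an (eps, alpha)-approximation.

    `sample()` must return values in [0, 1] with unknown positive mean theta.
    We stop once the running sum reaches T = 2 ln(1/alpha) / eps^2, so the
    expected sample size is about T / theta = 2 ln(alpha^-1) / (theta eps^2),
    the cost rate quoted in the abstract. Simplified for illustration only.
    """
    T = 2.0 * math.log(1.0 / alpha) / eps**2  # stopping threshold on the sum
    s, n = 0.0, 0
    while s < T:
        s += sample()
        n += 1
    return s / n, n  # (estimate of theta, realized sample size N)

# Usage: estimate the mean of a Bernoulli(0.3) stream of bounded samples.
random.seed(0)
theta_hat, n = sequential_estimate(lambda: float(random.random() < 0.3),
                                   eps=0.05, alpha=0.05)
```

Note that the sample size N is not fixed in advance: streams with small θ automatically run longer, which is exactly why a deterministic N cannot give fixed relative precision uniformly over problem instances.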

Related Topics
Physical Sciences and Engineering Mathematics Analysis
Authors