Abstract
The use of control variates is a well-known variance reduction technique in Monte Carlo integration. If the optimal linear combination of control variates is estimated by ordinary least squares and the number of control variates is allowed to grow to infinity, the convergence rate can be accelerated, with the new rate depending on the interplay between the integrand and the control functions. The standardized error remains asymptotically normal, and the asymptotic variance can still be estimated by the residual variance in the underlying regression model. The ordinary least squares estimator is shown to be superior to other, possibly simpler control variate estimators, even at equal computation time. The method is applied to increase the precision of the method of maximum simulated likelihood in the presence of latent variables. Its performance is particularly good for two reasons: the integrands are smooth and can thus be approximated well by polynomial or spline control functions, and the number of integrands is large, which reduces the computational cost since the Monte Carlo integration weights need to be calculated only once.
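As a minimal sketch of the estimator the abstract describes (not the paper's implementation): draw uniform samples, regress the integrand values on a family of zero-mean control functions by ordinary least squares, and read off the fitted intercept as the integral estimate. The particular controls h_k(x) = x^k − 1/(k+1), the sample size, the number of controls, and the test integrand exp are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_control_variates(f, n=5000, m=8):
    """Monte Carlo estimate of the integral of f over [0, 1] using
    polynomial control variates fitted by ordinary least squares.

    The controls h_k(x) = x**k - 1/(k+1) have known mean zero under
    the uniform distribution, so in the regression of f(X) on an
    intercept and the controls, the fitted intercept estimates the
    integral; its standard error comes from the residual variance.
    """
    x = rng.uniform(size=n)
    y = f(x)
    # Zero-mean control functions (illustrative polynomial family).
    controls = np.column_stack([x**k - 1.0 / (k + 1) for k in range(1, m + 1)])
    design = np.column_stack([np.ones(n), controls])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[0]

# Smooth integrand: exp on [0, 1], true value e - 1.
est = ols_control_variates(np.exp)
```

Because exp is smooth, a degree-8 polynomial approximates it very closely, so the residual variance, and hence the error of the estimate, is far smaller than for plain Monte Carlo with the same sample, mirroring the acceleration claimed in the abstract.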