The acceptance(-rejection) method for simulating random variables

In a previous post, I covered a simple but widely used method for simulating random variables or, rather, generating random variates. To simulate a random variable with that method, you need to be able to calculate, in a reasonably easy fashion, the inverse of its cumulative distribution function. But that is not always possible.

When that fails, the great John von Neumann showed in a 1951 paper that you can sample a sequence of values from another probability distribution, accepting only the values that meet a certain condition based on this other distribution and the desired distribution, and rejecting all the others. The accepted values then follow the desired probability distribution. This method of simulation or sampling is called the rejection method, the acceptance method, or, with a double-barrelled name, the acceptance-rejection (AR) method.

Details

Let \(X\) be a continuous random variable with a (probability) density \(p(x)\), which is the derivative of its cumulative distribution function \(P(X\leq x)\). The density \(p(x)\) corresponds to the desired or target distribution from which we want to sample. For whatever reason, we cannot directly simulate the random variable \(X\). (Maybe we cannot use the inverse method because \(P(X\leq x)\) is too complicated.)

The idea von Neumann had was to assume that we can easily simulate another random variable, say, \(Y\), with the (probability) density \(q(x)\). The density \(q(x)\) corresponds to a proposal distribution that we can sample from (by using, for example, the inverse method).

Now we further assume that there exists some finite constant \(M>0\) such that we can bound \(p(x)\) by \(Mq(x)\), meaning

$$ p(x) \leq M q(x), \text{ for all } x . $$

Given such a bound, we can sample the random variable \(Y\) and accept its value (as a value of \(X\)) with probability

$$\alpha = \frac{p(Y)}{Mq(Y)}.$$

If the sampled value of \(Y\) is not accepted (which happens with probability \(1-\alpha\)), then we must repeat this random experiment until a sampled value of \(Y\) is accepted.
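
For a concrete illustration (my own example, not from the original paper), suppose the target is the Beta(2,2) distribution with density \(p(x) = 6x(1-x)\) on the interval \([0,1]\), and the proposal is the uniform distribution on \([0,1]\) with density \(q(x) = 1\). The density \(p(x)\) attains its maximum value \(3/2\) at \(x = 1/2\), so the constant \(M = 3/2\) satisfies the bound, giving the acceptance probability

$$\alpha = \frac{p(Y)}{Mq(Y)} = 4Y(1-Y).$$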

Algorithm

We give the pseudo-code for the acceptance-rejection method suggested by von Neumann.

Random variable \(X\) with density \(p(x)\)

  1. Sample a random variable \(Y\) with density \(q(x)\), giving a sample value \(y\).
  2. Calculate the acceptance probability \(\alpha = \frac{p(y)}{Mq(y)}\).
  3. Sample a uniform random variable \(U\sim U(0,1)\), giving a sample value \(u\).
  4. Return the value \(y\) (for the value of \(X\)) if \(u\leq \alpha\), otherwise go to Step 1 and repeat.

As covered in a previous post, Steps 3 and 4 are equivalent to accepting the value \(y\) with probability \(\alpha\).
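
To make the steps concrete, here is a minimal sketch in Python (using NumPy; the function name and sample sizes are my own) that applies the algorithm to the Beta(2,2) example above, with a uniform proposal and \(M = 3/2\).

```python
import numpy as np

def accept_reject_beta22(num_samples, rng=None):
    """Sample from the Beta(2,2) density p(x) = 6x(1-x) on [0,1]
    using a uniform proposal q(x) = 1 and the bound M = 3/2."""
    rng = np.random.default_rng() if rng is None else rng
    M = 1.5
    samples = []
    while len(samples) < num_samples:
        y = rng.uniform(0, 1)         # Step 1: sample Y with density q
        alpha = 6 * y * (1 - y) / M   # Step 2: acceptance probability p(y) / (M q(y))
        u = rng.uniform(0, 1)         # Step 3: sample U ~ U(0,1)
        if u <= alpha:                # Step 4: accept, otherwise repeat
            samples.append(y)
    return np.array(samples)

x = accept_reject_beta22(10_000)
print(x.mean(), x.var())  # roughly 0.5 and 0.05, the Beta(2,2) mean and variance
```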

Point process application

In the context of point processes, this method is akin to independently thinning a point process. It gives a way to position points non-uniformly by first placing them uniformly and then thinning them based on the desired intensity function. As I covered in a previous post, this is one way to simulate an inhomogeneous (or nonhomogeneous) Poisson point process.
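
As a rough sketch of that connection (again in Python with NumPy; the intensity function and its bound are made up for illustration), the following simulates an inhomogeneous Poisson point process on \([0,1]\) by thinning a homogeneous one.

```python
import numpy as np

rng = np.random.default_rng()

def intensity(x):
    """Assumed (made-up) target intensity function on [0, 1]."""
    return 100 * x ** 2

lambda_max = 100  # upper bound on intensity(x) over [0, 1]

# Simulate a homogeneous Poisson point process on [0, 1] with rate lambda_max.
num_points = rng.poisson(lambda_max)  # the interval has unit length
points = rng.uniform(0, 1, size=num_points)

# Thin each point independently, keeping x with probability intensity(x) / lambda_max.
keep = rng.uniform(0, 1, size=num_points) <= intensity(points) / lambda_max
points_thinned = points[keep]

print(num_points, points_thinned.size)  # roughly 100 points before thinning, 100/3 after
```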

Efficiency

Basic probability theory tells us that the number of experiment runs (Steps 1 to 3) until a value is accepted is a geometric random variable. For (normalized) densities \(p(x)\) and \(q(x)\), the unconditional probability of accepting a proposed value is \(1/M\), so on average the acceptance(-rejection) method takes \(M\) runs to produce one sample value of the random variable \(X\) with the target distribution. The key, then, is to make the constant \(M\) as small as possible, which means choosing a proposal density \(q(x)\) such that \(Mq(x)\) hugs the target density \(p(x)\) as closely as possible, while still keeping the inequality \(p(x) \leq M q(x)\).
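
As a quick sanity check of this claim (in Python with NumPy, reusing the Beta(2,2) example), the empirical acceptance rate should be close to \(1/M = 2/3\), meaning about \(M = 3/2\) proposals per accepted sample.

```python
import numpy as np

rng = np.random.default_rng()
M = 1.5
num_proposals = 100_000

y = rng.uniform(0, 1, size=num_proposals)    # proposals from the uniform density q
u = rng.uniform(0, 1, size=num_proposals)
accepted = u <= 6 * y * (1 - y) / M          # acceptance test with alpha = p(y) / (M q(y))

print(accepted.mean())  # roughly 0.667, that is, 1/M
```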

Higher dimensions

The difficulty of the acceptance(-rejection) method is finding a good proposal distribution such that the product \(Mq(x)\) is not much larger than the target density \(p(x)\). In one dimension this can often be done, but in higher dimensions it becomes increasingly difficult. Consequently, this method is typically not used in higher dimensions.

Another approach with an acceptance step is the Metropolis-Hastings method, which is the quintessential Markov chain Monte Carlo (MCMC) method. This method and its cousins have become exceedingly popular, as they give ways to simulate collections of dependent random variables that have complicated (joint) distributions.

Further reading

The original paper where the acceptance(-rejection) method appears (on page 769 in the right-hand column) is:

  • von Neumann, Various techniques used in connection with random digits, 1951.

The usual books on stochastic simulations and Monte Carlo methods will detail this method. For example, see the book by Devroye (Section II.3) or the more recent Handbook of Monte Carlo Methods (Section 3.1.5) by Kroese, Taimre and Botev. The book Stochastic Simulation: Algorithms and Analysis by Asmussen and Glynn also covers the method in Section 2b.

Other books that cover the method include those by Fishman (Section 8.5) and Gentle (Section 4.5).
