Simulating Poisson random variables with large means in C

When sampling Poisson random variables, the method you use depends on the Poisson parameter, which coincides with its mean. The direct method, as I call it, is fine for small Poisson parameter values, but it becomes too slow as this parameter grows.

For large Poisson parameter values, researchers have proposed many other, faster methods. I have now coded up in C two of these methods, from the two respective papers:

  • 1979, Atkinson, The computer generation of Poisson random variables;
  • 1993, Hörmann, The transformed rejection method for generating Poisson random variables.

I already discussed these two methods in detail in a previous post, where you can also find the corresponding Python and MATLAB code.

The code I wrote is only suitable for large parameter values, where the meaning of large depends on the procedure being used, but it’s typically around 30. In other words, any code for generating Poisson variables should have an if-statement, using the direct method for small parameter values and another method, such as the ones above, for large parameter values.

Algorithms

Both algorithms are (acceptance-)rejection methods. I discuss this general method, first proposed by von Neumann, in another post. In the context of generating Poisson variables, these methods are, according to the book by Devroye (page 502), known for being relatively simple to implement and fast to execute.

Both algorithms require calculating the log of a factorial or, equivalently, the log of a gamma function, which is done by using approximations, such as those of Stirling or Lanczos. I used the one by Stirling.

Algorithm PA  by Atkinson (1979)

Algorithm PA, which Atkinson proposed among other methods, is a rejection method that uses a logistic distribution as the envelope distribution. (Such algorithms often use a normal distribution as the envelope distribution instead.)

Algorithm PTRS by Hörmann (1993)

Hörmann calls Algorithm PTRS a transformed rejection method. It combines the inverse method, which I covered in a previous post, with a rejection method.

I have only seen one implementation of this algorithm. It’s written in C for the Python library NumPy; see the code here. You’ll notice my C code and that C code are very similar, modulo some logarithm identities.

Possible (small) error: I noticed that in that C code, on line 591, the implementation of step 2.0 of Algorithm PTRS has a strictly-less-than condition, so it’s \(k < 0\), whereas in the original paper (and hence my code) the condition is \(k\leq 0\). This possible error is probably insignificant, as the procedure is for large-valued Poisson random variables, so the \(k=0\) scenario rarely happens.

Code

I used just a moment ratio to test the code. A proper test would involve a chi-squared test, for example, which is what I did when I wrote the same algorithms in Python and MATLAB; see this post here.

Warning: My C code is only for illustration purposes. If you can use an industrial library, use that.

Algorithm PA  by Atkinson (1979)

/*
This code generates Poisson variates (or simulates Poisson variables)
using a method designed for large (>30) Poisson parameter values.

The generation method is Algorithm PA, a type of rejection method, from 
the paper:

1979 - Atkinson - "The Computer Generation of Poisson Random Variables"


Author: H. Paul Keeler, 2020.
Website: hpaulkeeler.com
Repository: github.com/hpaulkeeler/posts
*/

static unsigned int funPoissonPA(double mu)
{
    // precalculate some Poisson-parameter-dependent numbers
    double c = 0.767 - 3.36 / mu;
    double beta = M_PI / sqrt(3.0 * mu); // M_PI is defined in math.h on POSIX systems
    double alpha = beta * mu;
    double k = log(c) - mu - log(beta);
    double log_mu = log(mu);

    // declare variables for the loop
    double U, x, V, y, logfac_n, lhs, rhs;
    unsigned int n;
    unsigned int randPoisson = 0; // initialize the Poisson random variable (or variate)
    bool booleContinue = true;
    while (booleContinue)
    {
        U = funUniform(); // generate first uniform variable
        x = (alpha - log((1.0 - U) / U)) / beta;

        if (x < -.5)
        {
            continue;
        }
        else
        {
            V = funUniform(); // generate second uniform variable
            n = floor(x + .5);
            y = alpha - beta * x;
            logfac_n = funLogFac(n);

            // two sides of an inequality condition
            lhs = y + log(V / (1.0 + exp(y)) / (1.0 + exp(y)));
            rhs = k + n * log_mu - logfac_n; // NOTE: uses log factorial n

            if (lhs <= rhs)
            {
                randPoisson = n;
                booleContinue = false;
                return randPoisson;
            }
            else
            {
                continue;
            }
        }
    }
    return randPoisson;
}

Algorithm PTRS by Hörmann (1993)

/*
This code generates Poisson variates (or simulates Poisson variables)
using a method designed for large (>10) Poisson parameter values.

The generation method is Algorithm PTRS, a type of rejection method, from
the paper:

1993 - Hörmann - "The transformed rejection method for generating Poisson
random variables"

Author: H. Paul Keeler, 2020.
Website: hpaulkeeler.com
Repository: github.com/hpaulkeeler/posts
*/

/*WARNING:
This code uses rand(), the standard pseudo-random number generating function in C,
which is known for producing inadequately random numbers.
Replace rand() in the uniform generator (funUniform here) with a better standard uniform number generator.
 */

static unsigned int funPoissonPTRS(double mu)
{
    // precalculate some Poisson-parameter-dependent numbers
    double b = 0.931 + 2.53 * sqrt(mu);
    double a = -0.059 + 0.02483 * b;
    double vr = 0.9277 - 3.6224 / (b - 2);
    double one_over_alpha = 1.1239 + 1.1328 / (b - 3.4);

    // declare variables for the loop
    double U, V, us, log_mu, logfac_n, lhs, rhs;
    int n; // signed, as the floor operation below can give a negative value

    unsigned int randPoisson = 0; // initialize the Poisson random variable (or variate)
    bool booleContinue = true;
    // Steps 1 to 3.1 in Algorithm PTRS
    while (booleContinue)
    {
        // generate two uniform variables
        U = funUniform();
        V = funUniform();

        U = U - 0.5;
        us = 0.5 - fabs(U);

        n = (int)floor((2 * a / us + b) * U + mu + 0.43);

        if ((us >= 0.07) && (V <= vr))
        {
            randPoisson = n;
            return randPoisson;
        }

        if ((n <= 0) || ((us < 0.013) && (V > us)))
        {
            continue;
        }

        log_mu = log(mu);
        logfac_n = funLogFac(n);

        // two sides of an inequality condition
        lhs = log(V * one_over_alpha / (a / us / us + b));
        rhs = -mu + n * log_mu - logfac_n; // NOTE: uses log factorial n

        if (lhs <= rhs)
        {
            randPoisson = n;
            return randPoisson;
        }
        else
        {
            continue;
        }
    }
    return randPoisson;
}

Further reading

Algorithm PA was proposed in the paper:

  • 1979, Atkinson, The computer generation of Poisson random variables.

This algorithm is also given in the book Monte Carlo Statistical Methods by Casella and Robert; see Algorithm A.6.

Algorithm PTRS was proposed in the paper:

  • 1993, Hörmann, The transformed rejection method for generating Poisson random variables.

Signal strengths of a wireless network

In two previous posts, here and here, I discussed the importance of the quantity called the signal-to-interference ratio, which is usually abbreviated as SIR, for studying communication in wireless networks. In everyday terms, for a listener to hear a certain speaker in a room full of people speaking, what matters is the ratio of the speaker’s volume to the sum of the volumes of everyone else heard by the listener. The SIR is the communication bottleneck for any receiver and transmitter pair in a wireless network.

But the strengths (or power values) of the signals are of course also important. In this post I will detail how we can model them using a simple network model with a single observer.

Propagation model

For a transmitter located at \(X_i\in \mathbb{R}^2\), researchers usually attempt to represent the received power of the signal \(P_i\) with a propagation model. Assuming the power is received at \(x\in \mathbb{R}^2\), this mathematical model consists of a random and a deterministic component taking the general form
$$
P_i(x)=F_i\,\ell(|X_i-x|) ,
$$
where \(\ell(r)\) is a non-negative function in \(r>0\) and \(F_i\) is a non-negative random variable.

The function \(\ell(r)\) is called the pathloss function, and common choices include \(\ell(r)=(\kappa r)^{-\beta}\) and \(\ell(r)=\kappa e^{-\beta r}\), where \(\beta>0\) and \(\kappa>0\) are model constants.

The random variables \(F_i\) represent signal phenomena such as multi-path fading and shadowing (also called shadow fading), caused by the signal interacting with the physical environment such as buildings. They are often called fading or shadowing variables.

We assume the transmitter locations \(X_1,\dots,X_n\) are on the plane \(\mathbb{R}^2\). Researchers typically assume they form a random point process or, more precisely, the realization of a random point process.

From two dimensions to one dimension

For studying wireless networks, a popular technique is to view the network from the perspective of a single observer or user, examining the incoming or received signals from the entire network at that location. This is done by working with the inverses of the signal strengths, namely

$$
L_i(x): = \frac{1}{P_i}=\frac{1}{F_i \,\ell(|X_i-x|) }.
$$

Mathematically, this random function is simply a mapping from the two-dimensional plane \(\mathbb{R}^2\) to the one-dimensional non-negative real line \(\mathbb{R}_0^+=[0,\infty)\).

If the transmitters are located according to a non-random point pattern or a random point process, this random mapping generates a random point process on the non-negative real line. The resulting one-dimensional point process of the values \(L_1,L_2,\dots \) has been called (independently) the propagation (loss) process or the path loss (with fading) process. More recently, my co-authors and I decided to call it a projection process, but of course the precise name doesn’t matter.

Intensity measure of signal strengths

Assuming a continuous monotonic path loss function \(\ell\) and iid fading variables \(F_1, F_2,\dots\), if the transmitters form a stationary random point process with intensity \(\lambda\), then the inverse signal strengths \(L_1,L_2,\dots \) form a random point process on the non-negative real line with the intensity measure \(M\) given by

$$
M(t) =\lambda \pi\, \mathbb{E}\big( [\ell^{-1}(1/(t F)) ]^2\big)\,,
$$

where \(\ell^{-1}\) is the generalized inverse of the function \(\ell\). This expression can be generalized for a non-stationary point process with general intensity measure \(\Lambda\).

The inverses \(1/L_1,1/L_2,\dots \), which are the signal strengths, form a point process with intensity measure

$$
\bar{M}(s) =\lambda \pi\, \mathbb{E}\big( [\ell^{-1}( s/F) ]^2\big).
$$

Poisson transmitters gives Poisson signal strengths

Assuming a continuous monotonic path loss function \(\ell\) and the fading variables \(F_1, F_2\dots\) are iid, if the transmitters form a Poisson point process with intensity \(\lambda\), then the inverse signal strengths \(L_1,L_2,\dots \) form a Poisson point process on the non-negative real line with the intensity measure \(M\).

If \(L_1,L_2,\dots \) form a Poisson point process, then the inverses \(1/L_1,1/L_2,\dots \) will also form a Poisson point process, with the intensity measure \(\bar{M}\) given above.

Propagation invariance

For \(\ell(r)=(\kappa r)^{-\beta}\), the expression for the intensity measure \(M\) reduces to
$$
M(t) = \lambda \pi t^{2/\beta} \mathbb{E}( F^{2/\beta})/\kappa^2.
$$

What’s striking here is that the information of the fading variable \(F\) is captured by a single moment \(\mathbb{E}( F^{2/\beta}) \). This means that two different fading distributions will give the same results as long as this moment matches. My co-authors and I have called this observation propagation invariance.

Some history

Studying just the (inverse) signal strengths as a point process on the non-negative real line was a very useful insight. It was made independently in these two papers:

  • 2008, Haenggi, A geometric interpretation of fading in wireless
    networks: Theory and applications;
  • 2010, Błaszczyszyn, Karray, and Klepper, Impact of the geometry, path-loss exponent and random shadowing on the mean interference factor in wireless cellular networks.

My co-authors and I presented a general expression for the intensity measure \(M\) in the paper:

  • 2018, Keeler, Ross and Xia, When do wireless network signals appear Poisson?

This paper also contains examples of various network models.

Further reading

A good starting point on this topic is the Wikipedia article Stochastic geometry models of wireless networks. The paper that my co-authors and I wrote has details on the projection process.

With Bartek Błaszczyszyn, Sayan Mukherjee, and Martin Haenggi, I co-wrote a short monograph on SINR models called Stochastic Geometry Analysis of Cellular Networks, which is written at a slightly more advanced level. The book puts an emphasis on studying the point process formed from the inverse signal strengths, which we call the projection process.