The Standard Model of wireless networks

In the previous post I discussed the signal-to-interference ratio or SIR in wireless networks. If noise is included, it becomes the signal-to-interference-plus-noise ratio or SINR. But I will just write about the SIR, as most results that hold for the SIR also hold for the SINR without any great mathematical difficulty.

The SIR is an important quantity for reasons coming from information theory. If you’re unfamiliar with it, I suggest reading the previous post.

In this post, I will describe a very popular mathematical model of the SIR, which I like to call the standard model. (This is not a term used in the literature; I have borrowed it from physics.)

Definition of SIR

To define the SIR, we consider a wireless network of \(n\) transmitters located at \(X_1,\dots,X_n\) in some region of space. At some location \(x\), we write \(P_i(x)\) to denote the power of the signal received at \(x\) from transmitter \(X_i\). Then at location \(x\), the SIR with respect to transmitter \(X_i\) is
$$
\text{SIR}(x,X_i) := \frac{P_i(x)}{\sum\limits_{j\neq i} P_j(x)} .
$$

Researchers usually represent the received signal power \(P_i(x)\) with a propagation model. This mathematical model consists of a random and a deterministic component given by
$$
P_i(x)=F_i\ell(|X_i-x|) ,
$$
where \(\ell(r)\) is a non-negative function in \(r\geq 0\) and \(F_i\) is a non-negative random variable. The function \(\ell(r)\) is often called the path loss function. The random variables \(F_i\) represent random fading or shadowing effects.
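To make this concrete, here is a minimal Python sketch of the SIR under this propagation model. It is only an illustration; the parameter values, the uniform placement of transmitters, and the function names are my own arbitrary choices.

import numpy as np

# A minimal sketch of the SIR under the propagation model
# P_i(x) = F_i * ell(|X_i - x|). All parameter values are illustrative.
beta = 3  # path loss exponent
kappa = 1  # path loss constant

def ell(r):
    # (singular) power law path loss function
    return (kappa * r) ** (-beta)

numb_trans = 10  # number of transmitters
X = np.random.uniform(-1, 1, (numb_trans, 2))  # transmitter locations on a square
x = np.array([0, 0])  # observation location

F = np.random.exponential(1, numb_trans)  # random fading variables
P = F * ell(np.linalg.norm(X - x, axis=1))  # received powers

i = 0  # index of the transmitter of interest
SIR = P[i] / (np.sum(P) - P[i])  # SIR with respect to transmitter X_i
print("SIR with respect to transmitter X_0:", SIR)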

Standard model

Based on the three model components of fading, path loss, and transmitter locations, many combinations are possible. That said, researchers generally (I would guess 90 percent of the time or more) use a single combination, which I call the standard model.

The three standard model assumptions are:

  1. Singular power law path loss \(\ell(r)=(\kappa r)^{-\beta}\).
  2. Exponential distribution for fading variables, which are independent and identically distributed (iid).
  3. Poisson point process for transmitter locations.

Why these three? Well, in short, because they work very well together. Incredibly, it’s sometimes possible to get a relatively simple mathematical expression for, say, the coverage probability \(\mathbb{P}[\text{SIR}(x,X_i)>\tau ]\), where \(\tau>0\) is some threshold.
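To illustrate how well the three assumptions work together, here is a rough Monte Carlo sketch (my own illustration with arbitrary parameter values, not code from the literature) that estimates the coverage probability at the origin when the origin is served by its nearest transmitter, a common setup in cellular network models.

import numpy as np

# Rough Monte Carlo estimate of the coverage probability P[SIR > tau] at the
# origin, served by its nearest transmitter, under the three standard model
# assumptions. The finite simulation disk introduces a slight edge effect.
lambda0 = 1  # intensity of the Poisson point process
radius = 20  # radius of the simulation disk
beta = 4  # path loss exponent (kappa = 1)
tau = 1  # SIR threshold
numb_sim = 10 ** 4  # number of simulation runs

numb_cov = 0  # counter of coverage events
for _ in range(numb_sim):
    # number of transmitters in the disk is Poisson distributed
    numb_points = np.random.poisson(lambda0 * np.pi * radius ** 2)
    # only the distances to the origin matter for the SIR at the origin
    rho = radius * np.sqrt(np.random.uniform(0, 1, numb_points))
    r = np.sort(rho)  # nearest transmitter first
    F = np.random.exponential(1, numb_points)  # iid exponential fading
    P = F * r ** (-beta)  # received powers
    if P[0] / np.sum(P[1:]) > tau:
        numb_cov += 1

print("Estimated coverage probability:", numb_cov / numb_sim)

For \(\beta=4\) and \(\tau=1\), the well-known closed-form result for this setup evaluates to \(1/(1+\pi/4)\approx 0.56\), which the estimate above should approach as the number of simulations grows.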

I’ll now detail the reasons more specifically.

Path loss

The path loss function \(\ell(r)=(\kappa r)^{-\beta}\) is very simple, despite having a singularity at \(r=0\). This allows simple algebraic manipulation of equations.

Some researchers, such as myself, are initially skeptical of this function, as it gives an infinitely strong signal at the transmitter due to the singularity of \(\ell(r)=(\kappa r)^{-\beta}\) at \(r=0\). More specifically, the received power of the signal from transmitter \(X_i\) approaches infinity as \(x\) approaches \(X_i\).

But apparently, overall, the singularity does not have a significant impact on most mathematical results, at least qualitatively. That said, one can still observe consequences of this somewhat physically unrealistic model assumption. And I strongly doubt enough care is taken by researchers to observe and note this.

Fading and shadowing variables

Interestingly, the original reason why exponential variables were used is that they allow the SIR problem to be reformulated as a problem involving the Laplace transform of a random variable, which for a random variable \(Y\) is defined as

$$
\mathcal{L}_Y(t)=\mathbb{E}(e^{- Y t}) \, ,
$$

where \(t\geq 0\). (This is essentially the moment-generating function with \(-t\) instead of \(t\).)

The reason for this connection is that the tail distribution of an exponential variable \(F\) with mean \(\mu\)  is simply \(\mathbb{P}(F>t)= e^{-t/\mu}\).  In short, with the exponential assumption, various conditioning arguments eventually lead to Laplace transforms of random variables.
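To spell out the conditioning argument in one line, write \(I:=\sum_{j\neq i} P_j(x)\) for the interference and \(r=|X_i-x|\). Conditioning on the transmitter locations (and hence on \(I\)) and using the exponential tail gives

$$
\mathbb{P}[\text{SIR}(x,X_i)>\tau ]=\mathbb{P}[F_i>\tau I/\ell(r)]=\mathbb{E}\left[e^{-\tau I/(\mu \ell(r))}\right]=\mathcal{L}_{I}\left(\frac{\tau}{\mu \ell(r)}\right) ,
$$

so the coverage probability is exactly a Laplace transform of the interference, evaluated at a point determined by the threshold and the path loss.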

Transmitter locations

No prizes for guessing that researchers overwhelmingly use a (homogeneous) Poisson point process for the transmitter (or receiver) locations. When developing mathematical models with point processes, if you can’t get any results with the Poisson point process, then abandon all hope.

It is easier to work with this point process than with others due to its independence property, which leads to another useful property: for the Poisson point process, the Palm distribution is known. This is the distribution of a point process conditioned on a point (or collection of points) existing at a specific location of the underlying space on which the point process is defined. In fact, by Slivnyak’s theorem, the Palm distribution of a Poisson point process is simply the distribution of the original process with a point added at the conditioned location. For most other point processes, the Palm distribution is not known explicitly.

Random propagation effects can lead to Poisson

A lesser-known reason why researchers use the Poisson point process is that, from the perspective of a single observer in the network, it can be used to capture the randomness in the signal strengths. Poisson approximation results in probability imply that randomly perturbing the signal strengths can make signals appear more Poisson, by which I mean the signal strengths behave stochastically or statistically as though they were created by a Poisson network of transmitters.

The end result is that a non-Poisson network can appear Poisson in terms of its signal strengths, even if the transmitter locations do not resemble (the realization of) a Poisson point process. The source of randomness that makes a non-Poisson network appear more Poisson is the random propagation effects of fading, shadowing, randomly varying antenna gains, and so on, or some combination of these.

Further reading

A good starting point on this topic is the Wikipedia article Stochastic geometry models of wireless networks. This paper is also good:

  • 2009, Haenggi, Andrews, Baccelli, Dousse, Franceschetti, Stochastic Geometry and Random Graphs for the Analysis and Design of Wireless Networks.

This paper by my co-authors and me has some details on the standard model and on why a general network model can behave Poisson in terms of the signal strengths:

  • 2018, Keeler, Ross and Xia, When do wireless network signals appear Poisson?.

Early books on the subject include the two-volume textbook Stochastic Geometry and Wireless Networks by François Baccelli and Bartek Błaszczyszyn, where the first volume is on theory and the second volume is on applications. Martin Haenggi wrote a very readable introductory book called Stochastic Geometry for Wireless Networks.

Finally, I co-wrote with Bartek Błaszczyszyn, Sayan Mukherjee, and Martin Haenggi a short monograph on SINR models called Stochastic Geometry Analysis of Cellular Networks, which is written at a slightly more advanced level. This book has a section on why signal strengths appear Poisson.

Simulating Poisson random variables with large means

There’s basically only one method for simulating Poisson variables with a small parameter value. This direct method, as I call it, uses the inter-arrival times of a homogeneous Poisson (stochastic) process, which I covered in a previous post.
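For reference, here is a minimal sketch of that direct method, written in its usual product-of-uniforms form, which is equivalent to counting exponential inter-arrival times on the interval \([0,\mu]\). (The function name funPoissonDirect is my own, chosen to match the naming convention of the code later in this post.)

import numpy as np

# Minimal sketch of the direct method for simulating a Poisson variable with
# parameter mu. It counts how many iid uniform variables can be multiplied
# together before the running product drops below exp(-mu), which is
# equivalent to counting exponential inter-arrival times on [0, mu].
def funPoissonDirect(mu):
    exp_mu = np.exp(-mu)  # acceptance threshold
    prod_u = np.random.uniform()  # running product of uniform variables
    result_n = 0  # the Poisson variate
    while prod_u > exp_mu:
        prod_u *= np.random.uniform()
        result_n += 1
    return result_n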

But if the parameter, which coincides with the mean, is large, then this method becomes too slow, so you need to use other methods. In another post, I briefly surveyed these methods and listed different language libraries that use them, ranging from the open-source projects NumPy and R to the industry-level libraries MKL (by Intel) and cuRAND (by Nvidia for CUDA).

To simulate large-mean Poisson random variables, I coded up in Python and MATLAB (for now) two of these methods, from the respective papers:

  • 1979, Atkinson, The computer generation of Poisson random variables;
  • 1993, Hörmann, The transformed rejection method for generating Poisson random variables.

You can find my code here.

Again, the code I wrote is only suitable for large parameter values, where the meaning of large depends on the procedure being used, but it’s typically around 30. Consequently, any code for generating Poisson variables should have an if-statement, using the direct method for small parameter values and another method, such as the ones above, for large parameter values.
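A minimal sketch of such an if-statement, combining the direct method sketched above with the PTRS function defined later in this post (the cut-off value of 30 is an assumption for illustration, not a universal constant):

def funPoisson(mu):
    # dispatch on the Poisson parameter: the direct method for small values,
    # a rejection method such as Algorithm PTRS for large values
    # (funPoissonDirect and funPoissonLargePTRS are defined elsewhere in this post)
    if mu < 30:
        return funPoissonDirect(mu)
    else:
        return funPoissonLargePTRS(mu)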

Update: In another post I presented these two algorithms in C. You can find my code here.

Algorithms

I’ll give some light details of the two methods, which are referred to as the PA and PTRS algorithms in their respective papers. I suggest reading the papers for further details.

Both algorithms are (acceptance-)rejection methods. I discuss this general method, first proposed by von Neumann, in another post. In the context of generating Poisson variables, these methods are, according to the book by Devroye (page 502), known for being relatively simple to implement and fast to execute.

Both algorithms require calculating the log of a factorial or, equivalently, the log of a gamma function, which is done using approximations, such as those of Stirling or Lanczos. This is a common approach in general, as you’ll find that computer functions, such as those in MATLAB or Python (NumPy), that give values of gamma functions and their logs are always based on approximations of the gamma function. That means you can write your own approximation for the log of a factorial or use a pre-existing function.
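For example, in Python the log of a factorial can be computed through the log-gamma function, using the identity \(\log(n!)=\log\Gamma(n+1)\):

import math
import scipy.special

n = 100
# log(n!) via SciPy's log-gamma function
logfac_n = scipy.special.gammaln(n + 1)
# the standard library's log-gamma function gives the same value
print(logfac_n, math.lgamma(n + 1))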

Algorithm PA by Atkinson (1979)

Algorithm PA, proposed by Atkinson (among other methods in the same paper), is a rejection method that uses a logistic distribution as the envelope distribution. (Often such algorithms use a normal distribution as the envelope distribution.)
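A quick way to see the logistic envelope in the code given below: if \(U\) is uniform on \((0,1)\), then inverting the logistic distribution function gives the candidate point

$$
x = \frac{\alpha - \log\frac{1-U}{U}}{\beta} = \frac{\alpha}{\beta}+\frac{1}{\beta}\log\frac{U}{1-U} ,
$$

which is a logistic random variable with location \(\alpha/\beta=\mu\) and scale \(1/\beta\). With \(\beta=\pi/\sqrt{3\mu}\), the variance of this logistic variable is \(\pi^2/(3\beta^2)=\mu\), matching the variance of the target Poisson distribution.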

After writing my code, I noticed that John D. Cook gave pseudo-code of the method in this post and then presented an implementation in C# in this post. Apart from that, I have not seen any implementations of this method.

Algorithm PTRS by Hörmann (1993)

Hörmann refers to the Algorithm PTRS method as a transformed rejection method. It combines the inverse method, which I covered in a previous post, with a rejection method.

I have only seen one other implementation of this algorithm. It’s written in C for the Python library NumPy; see the code here. You’ll notice my code and that C code are very similar, modulo some logarithm identities.

Possible (small) error: I noticed that in that C code, on line 591, the implementation of step 2.0 of the PTRS algorithm has a strict inequality, so the condition is \(k<0\), whereas in the original paper (and hence in my code) the condition is \(k\leq 0\). Perhaps this possible error is insignificant, as the procedure is for large-valued Poisson random variables, so the \(k=0\) scenario rarely happens.

Code

I only present here the code in Python, but you can go here to see the code implemented in MATLAB. I also wrote a script that tests whether the generated variates (or simulated values) adhere to a Poisson distribution. The test compares the sample mean and variance (the ratio of which should be close to one) and performs a chi-squared test; a minimal version of the mean-variance check is sketched after the code below.

Warning: My Python and MATLAB code is only for illustration purposes. In Python (with SciPy), MATLAB, or similar, you would use the pre-existing functions for simulating Poisson random variables.

Algorithm PA by Atkinson (1979)

# This code generates Poisson variates (or simulates Poisson variables)
# using a method designed for large (>30) Poisson parameter values.
#
# The generation method is Algorithm PA, a type of rejection method, from 
# the paper:
#
# 1979 - Atkinson - "The Computer Generation of Poisson Random Variables"
#
# In practice, you should *always* use the built-in NumPy function
# random.poisson, which (for large Poisson parameter) uses Algorithm PTRS in the
# paper:
#
# 1993 - Hörmann - "The transformed rejection method for generating Poisson
# random variables"
#
# That method is also suggested by Knuth in Volume 2 of his classic
# series "The Art of Computer Programming".
#
# INPUT:
# mu is a single Poisson parameter (or mean) such that mu>=0.
# OUTPUT:
# result_n is a single Poisson variate (that is, an instance of a Poisson random
# variable), which is a non-negative integer.
#
# Author: H. Paul Keeler, 2019.
# Website: hpaulkeeler.com
# Repository: github.com/hpaulkeeler/posts

import numpy as np;  # NumPy package for arrays, random number generation, etc
import scipy.special  # for the log-gamma function gammaln
from getLogFac import getLogFac # type: ignore

def funPoissonLargePA(mu):
    #precalculate some Poisson-parameter-dependent numbers
    c = 0.767 - 3.36/mu;
    beta = np.pi/np.sqrt(3.0*mu);
    alpha = beta*mu;
    k = np.log(c) - mu - np.log(beta);
    log_mu=np.log(mu);

    result_n=-1; #initialize the Poisson random variable (or variate)
    while (result_n<0):
        U = np.random.uniform(); #generate first uniform variable
        x = (alpha - np.log((1.0 - U)/U))/beta;

        if (x < -0.5): #candidate would round to a negative integer, so reject
            continue
        else:
            V = np.random.uniform(); #generate second uniform variable
            n = np.floor(x+.5);
            y = alpha - beta*x;
            #logfac_n=getLogFac(n); 
            #above can be replaced with scipy function: 
            logfac_n=scipy.special.gammaln(n+1)

            #two sides of an inequality condition
            lhs = y + np.log(V/(1.0 + np.exp(y))**2);
            rhs = k + n*log_mu- logfac_n; # NOTE: uses log factorial n

            if (lhs <= rhs):
                result_n=int(n); #cast to an integer variate
                return result_n;
            else:
                continue;

            #end if-statement

        #end if-statement
    #end while-loop

#end function

Algorithm PTRS by Hörmann (1993)

# This code generates Poisson variates (or simulates Poisson variables)
# using a method designed for large (>10) Poisson parameter values.
#
# The generation method is Algorithm PTRS, a type of rejection method, from
# the paper:
#
# 1993 - Hörmann - "The transformed rejection method for generating Poisson
# random variables"
#
# WARNING: This code is for illustration purposes only.
#
# In practice, you should *always* use the built-in NumPy function
# random.poisson, which (for large Poisson parameter) uses Algorithm PTRS in the
# above paper.
#
# INPUT:
# mu is a single Poisson parameter (or mean) such that mu>=0.
# OUTPUT:
# result_n is a single Poisson variate (that is, an instance of a Poisson random
# variable), which is a non-negative integer.
#
# Author: H. Paul Keeler, 2019.
# Website: hpaulkeeler.com
# Repository: github.com/hpaulkeeler/posts

import numpy as np;  # NumPy package for arrays, random number generation, etc
import scipy.special  # for the log-gamma function gammaln
from getLogFac import getLogFac # type: ignore

def funPoissonLargePTRS(mu):
    #precalculate some Poisson-parameter-dependent numbers
    b = 0.931 + 2.53 * np.sqrt(mu);
    a =  -0.059 + 0.02483 * b;
    vr = 0.9277 - 3.6224 / (b - 2);
    one_over_alpha=1.1239 + 1.1328/(b - 3.4);

    result_n=-1; #initialize the Poisson random variable (or variate)
    #Steps 1 to 3.1 in Algorithm PTRS
    while (result_n<0):
        #generate two uniform variables
        U = np.random.uniform();
        V = np.random.uniform();

        U=U-0.5;
        us = 0.5 -  abs(U);

        n=np.floor((2 * a / us + b) * U + mu + 0.43);

        if (us >= 0.07) and (V <= vr):
            result_n = int(n); #cast to an integer variate
            return result_n;
        #end if-statement

        if (n <= 0) or ((us < 0.013) and (V > us)):
            continue
        #end if-statement

        log_mu = np.log(mu);
        #logfac_n=getLogFac(n); 
        #above can be replaced with SciPy's function: 
        logfac_n = scipy.special.gammaln(n+1);

        #two sides of an inequality condition
        lhs = np.log(V * one_over_alpha / (a/us/us + b));
        rhs = -mu + n * log_mu - logfac_n ;# NOTE: uses log factorial n

        if lhs <= rhs:
            result_n = int(n); #cast to an integer variate
            return result_n;
        else:
            continue
        #end if-statement


    #end while-loop

#end function
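Here is a minimal sketch of the sanity check mentioned earlier, using the mean-to-variance ratio (the chi-squared test in the repository script is omitted here, and the parameter values are arbitrary test choices):

import numpy as np

# Minimal sanity check: the mean and variance of a Poisson distribution are
# equal, so the ratio of the sample mean to the sample variance should be
# close to one. (The full test script also runs a chi-squared test.)
mu = 50  # Poisson parameter (an arbitrary large test value)
numb_sim = 10 ** 5  # number of variates to generate

result_all = np.array([funPoissonLargePTRS(mu) for _ in range(numb_sim)])
print("Sample mean:", np.mean(result_all))
print("Mean-to-variance ratio:", np.mean(result_all) / np.var(result_all))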

Further reading

Algorithm PA was proposed in the paper:

  • 1979, Atkinson, The computer generation of Poisson random variables.

This algorithm is also given in the book Monte Carlo Statistical Methods by Casella and Robert; see Algorithm A.6.

Algorithm PTRS was proposed in the paper:

  • 1993, Hörmann, The transformed rejection method for generating Poisson random variables.