## The Standard Model of wireless networks

In the previous post I discussed the signal-to-interference ratio or SIR in wireless networks. If noise is included, the quantity becomes the signal-to-interference-plus-noise ratio or SINR. But I will just write about the SIR, as most results that hold for the SIR also hold for the SINR without any great mathematical difficulty.

The SIR is an important quantity due to reasons coming from information theory.  If you’re unfamiliar  with it, I suggest reading the previous post.

In this post, I will describe a very popular mathematical model of the SIR, which I like to call the standard model. (This is not a term used in the literature as I have borrowed it from physics.)

## Definition of SIR

To define the SIR, we consider a wireless network of $$n$$ transmitters located at positions $$X_1,\dots,X_n$$ in some region of space. At some location $$x$$, we write $$P_i(x)$$ to denote the power of the signal received at $$x$$ from transmitter $$X_i$$. Then at location $$x$$, the SIR with respect to transmitter $$X_i$$ is
$$\text{SIR}(x,X_i) := \frac{P_i(x)}{\sum\limits_{j\neq i} P_j(x)} .$$

Researchers usually represent the received signal power $$P_i(x)$$ with a propagation model. This mathematical model consists of a random component and a deterministic component given by
$$P_i(x)=F_i\ell(|X_i-x|) ,$$
where $$\ell(r)$$ is a non-negative function of $$r\geq 0$$ and $$F_i$$ is a non-negative random variable. The function $$\ell(r)$$ is often called the path loss function. The random variables $$F_i$$ represent random fading or shadowing effects.
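As a concrete illustration, here is a minimal Python sketch of the SIR under this propagation model. The function name and parameter values ($$\kappa=1$$, $$\beta=4$$) are my own choices, purely for illustration:

```python
import numpy as np

def sir(x, points, fading, kappa=1.0, beta=4.0):
    # Received powers P_i(x) = F_i * (kappa * |X_i - x|)^(-beta)
    r = np.linalg.norm(points - x, axis=1)  # distances |X_i - x|
    power = fading * (kappa * r) ** (-beta)
    # SIR of transmitter i: its power over the sum of all the others
    return power / (power.sum() - power)

# Example: three transmitters on the plane with iid exponential fading
rng = np.random.default_rng(1)
points = np.array([[1.0, 0.0], [2.0, 1.0], [-3.0, 2.0]])
fading = rng.exponential(1.0, size=3)
print(sir(np.zeros(2), points, fading))
```

Note that the function returns the SIR with respect to every transmitter at once; picking out one entry gives $$\text{SIR}(x,X_i)$$.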

## Standard model

Based on the three model components of fading, path loss, and transmitter locations, there are many combinations possible. That said, researchers generally (I would guess, say, 90 percent or more) use a single combination, which I call the standard model.

The three standard model assumptions are:

1. Singular power law path loss $$\ell(r)=(\kappa r)^{-\beta}$$.
2. Exponential distribution for fading variables, which are independent and identically distributed (iid).
3. Poisson point process for transmitter locations.

Why these three? Well, in short, because they work very well together. Incredibly, it’s sometimes possible to get a relatively simple mathematical expression for, say, the coverage probability $$\mathbb{P}[\text{SIR}(x,X_i)>\tau ]$$, where $$\tau>0$$ is some threshold.

I’ll now detail the reasons more specifically.

### Path loss

The function $$\ell(r)=(\kappa r)^{-\beta}$$ is very simple, despite having a singularity at $$r=0$$, which allows simple algebraic manipulation of equations.

Some researchers, myself included, are initially skeptical of this function, as it gives an infinitely strong signal at the transmitter due to the singularity at $$r=0$$. More specifically, the received power of the signal from transmitter $$X_i$$ approaches infinity as $$x$$ approaches $$X_i$$.

But apparently, overall, the singularity does not have a significant impact on most mathematical results, at least qualitatively. That said, one can still observe consequences of this somewhat physically unrealistic model assumption, and I strongly doubt enough care is taken by researchers to observe and note them.

### Fading

Interestingly, the original reason exponential variables were used for the fading is that they allow the SIR problem to be reformulated as a problem involving the Laplace transform of a random variable, which for a random variable $$Y$$ is defined as

$$\mathcal{L}_Y(t)=\mathbb{E}(e^{- Y t}) \,,$$

where $$t\geq 0$$. (This is essentially the moment-generating function with $$-t$$ instead of $$t$$.)

The reason for this connection is that the tail distribution of an exponential variable $$F$$ with mean $$\mu$$  is simply $$\mathbb{P}(F>t)= e^{-t/\mu}$$.  In short, with the exponential assumption, various conditioning arguments eventually lead to Laplace transforms of random variables.
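To spell out the connection in the notation above: writing $$I:=\sum_{j\neq i} P_j(x)$$ for the interference, and using the independence of $$F_i$$ and $$I$$ (which follows from the iid fading assumption), conditioning on $$I$$ gives

```latex
\begin{align*}
\mathbb{P}[\text{SIR}(x,X_i)>\tau]
&=\mathbb{P}[F_i\,\ell(|X_i-x|)>\tau I]\\
&=\mathbb{E}\!\left[\mathbb{P}\!\left(F_i>\tau I/\ell(|X_i-x|)\,\middle|\,I\right)\right]\\
&=\mathbb{E}\!\left[e^{-\tau I/(\mu\,\ell(|X_i-x|))}\right]
=\mathcal{L}_I\!\left(\frac{\tau}{\mu\,\ell(|X_i-x|)}\right),
\end{align*}
```

so the coverage probability is exactly the Laplace transform of the interference, evaluated at a known point.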

### Transmitter locations

No prizes for guessing that researchers overwhelmingly use a (homogeneous) Poisson point process for the transmitter (or receiver) locations. When developing mathematical models with point processes, if you can’t get any results with the Poisson point process, then abandon all hope.

It’s easier to work with this point process due to its independence property, which leads to another useful property: for the Poisson point process, the Palm distribution is known. This is the distribution of a point process conditioned on a point (or collection of points) existing at a specific location of the underlying space on which the point process is defined. In general, the Palm distribution is not known for most point processes.
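To see how the three assumptions fit together, here is a small Monte Carlo sketch in Python. It simulates a Poisson network on a disk with power-law path loss and exponential fading, and estimates the coverage probability of the transmitter nearest the origin. The function name and all parameter values are my own, purely for illustration:

```python
import numpy as np

def coverage_prob(lam, radius, beta, tau, num_trials=2000, rng=None):
    # Estimate P[SIR > tau] at the origin for the nearest transmitter in
    # a Poisson network on a disk, with iid exponential fading and path
    # loss r^(-beta) (that is, kappa = 1).
    rng = np.random.default_rng(3) if rng is None else rng
    hits = 0
    trials = 0
    while trials < num_trials:
        num_points = rng.poisson(lam * np.pi * radius**2)
        if num_points < 2:
            continue  # need a serving transmitter and an interferer
        # only the distances to the origin matter for the SIR; for
        # uniform points on a disk these are radius * sqrt(U)
        dist = radius * np.sqrt(rng.random(num_points))
        power = rng.exponential(1.0, num_points) * dist ** (-beta)
        i = np.argmin(dist)  # observer listens to the nearest transmitter
        sir = power[i] / (power.sum() - power[i])
        hits += sir > tau
        trials += 1
    return hits / trials

print(coverage_prob(lam=1.0, radius=10.0, beta=4.0, tau=1.0))
```

For $$\beta=4$$ and $$\tau=1$$, the estimate should land near the closed-form value $$1/(1+\pi/4)\approx 0.56$$ known for the infinite Poisson network with nearest-transmitter association.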

## Random propagation effects can lead to Poisson

A lesser-known reason why researchers use the Poisson point process is that, from the perspective of a single observer in the network, it can be used to capture the randomness in the signal strengths. Poisson approximation results in probability imply that randomly perturbing the signal strengths can make signals appear more Poisson, by which I mean the signal strengths behave stochastically or statistically as though they were created by a Poisson network of transmitters.

The end result is that a non-Poisson network can appear more Poisson, even if the transmitters do not resemble (the realization of) a Poisson point process. The source of randomness that makes a non-Poisson network appear more Poisson is the random propagation effects of fading, shadowing, randomly varying antenna gains, and so on, or some combination of these.

A good starting point on this topic is the Wikipedia article Stochastic geometry models of wireless networks. This paper is also good:

• 2009, Haenggi, Andrews, Baccelli, Dousse, Franceschetti, Stochastic Geometry and Random Graphs for the Analysis and Design of Wireless Networks.

This paper by my co-authors and me has some details on the standard model and why a general network model can behave like a Poisson one in terms of the signal strengths:

• 2018, Keeler, Ross and Xia, When do wireless network signals appear Poisson?.

Early books on the subject include the two-volume textbook Stochastic Geometry and Wireless Networks by François Baccelli and Bartek Błaszczyszyn, where the first volume covers theory and the second covers applications. Martin Haenggi wrote a very readable introductory book called Stochastic Geometry for Wireless Networks.

Finally, I co-wrote with Bartek Błaszczyszyn, Sayan Mukherjee, and Martin Haenggi a short monograph on SINR models called Stochastic Geometry Analysis of Cellular Networks, which is written at a slightly more advanced level. This book has a section on why signal strengths appear Poisson.

## Simulating Poisson random variables – Survey of methods

In the previous post, I discussed how to sample or generate Poisson random variables or (more correctly) variates. I detailed a direct method that uses the fact that a Poisson stochastic process, which is directly related to a Poisson point process, has inter-arrival times that form independent and identically distributed exponential variables. This method can in turn be easily reformulated so it only uses (standard) uniform variables to generate Poisson random variables. It is an easy sampling method, which explains why it is so often used.

As elegant and exact as this simulation method is, it unfortunately slows down as the Poisson parameter $$\lambda$$ increases. In a tutorial published in 1983, Brian D. Ripley, a major figure in spatial statistics, says this about the direct method:

> This is simple, but has expected time proportional to $$\lambda$$. Some of its competitors use rejection methods with the envelope distribution that of the integer part of a continuous random variable, such as logistic, Laplace and normal mixed with exponential distributions.

We recall that acceptance-rejection (or simply rejection) methods simulate a random object, such as a random variable, by first simulating another random object of the same type that is easier to simulate, and then accepting or rejecting the result based on a certain ratio. The distribution of the random object that is first simulated is called the envelope distribution. Such rejection methods are one way to simulate Poisson variables.

Consequently, the code of most computer functions for generating Poisson variables will have an if-statement, using the direct method for small parameter values and another method for large parameter values.

### Different methods

Over the years, different methods have been proposed for producing Poisson random variates. In the book Non-uniform random variate generation, Luc Devroye groups the methods into five categories, which I’ll briefly describe here:

1. Direct methods based on the homogeneous Poisson stochastic process having exponential inter-arrival times. These methods are simple, but the expected time is proportional to the Poisson parameter $$\lambda$$.
2. Inversion methods that search through a table of cumulative Poisson probabilities. Examples include the papers by Fishman (1976) and Atkinson (1979)*.
3. Methods that use the recursive properties of the Poisson distribution. The paper by Ahrens and Dieter (1974) uses this approach, and its expected time of completion is proportional to $$\log\lambda$$.
4. Acceptance-rejection (or rejection) methods that give relatively fast but simple algorithms. Such methods are proposed in the papers by Atkinson (1979)*, Ahrens and Dieter (1980) and Devroye (1981) or the technical report by Schmeiser and Kachitvichyanukul (1981).
5. Acceptance-complement methods that use a normal distribution as the starting distribution, such as the paper by Ahrens and Dieter (1982). This method is fast, but the code is rather long.

*Atkinson had (at least) two papers on generating Poisson variates published in 1979, but I believe Devroye is referring to the first paper, because in the second paper Atkinson compares methods proposed by others.

For titles of the papers, see the Further reading section below.

## Methods implemented

In this section, I’ll state which proposed methods are used by various programming languages and numerical libraries. I won’t go into the details of how the methods work; I’ll just cite the papers instead.

##### MATLAB

For small $$\lambda$$ values, the MATLAB function poissrnd uses the  direct method (based on inter-arrival times) with a while-loop.

For $$\lambda$$ values greater than fifteen, I believe that the MATLAB function poissrnd uses Algorithm PG from the 1974 paper by Ahrens and Dieter. But to come to this conclusion, I had to do some investigating. You can skip to the next section if you’re not interested, but now I’ll explain my reasoning.

The MATLAB documentation says it uses a method proposed by Ahrens and Dieter, but these two researchers developed a number of methods for generating Poisson variables. The MATLAB code cites Volume 2 of the classic series by Knuth, who says the method is due to Ahrens and Dieter, but he doesn’t give an exact citation in that section of the book. Confusingly, Knuth cites in his book a couple papers by Ahrens and Dieter for generating different random variates. (Knuth later cites a seemingly relevant 1980 paper by Ahrens and Dieter, but that details another method.)

Both the MATLAB code and Knuth cite the book by Devroye. In his book (Exercise 3.5.2), Devroye discusses one method, among others, from a 1974 paper by Ahrens and Dieter. Another hint is given by examining the code of the MATLAB function poissrnd, which reveals that it uses the function randg to generate gamma variables. In the Ahrens and Dieter 1974 paper, their Algorithm PG (for producing Poisson variates) uses gamma random variables, and it’s suggested to use a parameter value of $$7/8$$, as used by MATLAB and mentioned by Knuth, confirming this is the right paper.

In summary, for large $$\lambda$$ values the MATLAB function poissrnd uses Algorithm PG from the 1974 paper by Ahrens and Dieter, whereas for small values it uses the direct method.

##### R

In R, the function rpois uses an algorithm outlined in the 1982 paper by Ahrens and Dieter. You can view the R source code here. The two cases for $$\lambda$$ (or $$\mu$$ in the paper) depend on whether $$\lambda$$ is greater than ten or not. For small $$\lambda$$, the R function rpois does not use the method based on inter-arrival times, but rather an inversion method based on a table of (cumulative) probabilities given by the Poisson distribution.
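Inversion by sequential search is easy to sketch. The Python below is my own illustration of the textbook idea, using the pmf recurrence $$p_{k+1}=p_k\,\lambda/(k+1)$$, and is not a translation of the actual R code (which, among other things, tabulates probabilities):

```python
import math
import random

def poisson_inversion(lam, u=None):
    # Invert the Poisson CDF by sequential search: walk up the CDF
    # until it exceeds the uniform variate u, using the recurrence
    # p_{k+1} = p_k * lam / (k + 1) for the Poisson pmf.
    if u is None:
        u = random.random()
    k = 0
    p = math.exp(-lam)  # p_0 = P(X = 0)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k
```

The expected number of loop iterations is roughly $$\lambda$$, which is why this approach, like the direct method, is only used for small parameter values.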

##### Python (NumPy)

In NumPy, the function numpy.random.poisson generates Poisson variates. The source code for the NumPy library is here, but for the Poisson function the underlying code is actually written in C; see the distributions.c file located here. For small Poisson parameter $$\lambda$$, the code uses the direct method; see the function random_poisson_mult in the code.

For Poisson parameter $$\lambda \geq 10$$, the comments in the code reveal that it uses a method from a 1993 paper by Hörmann; see Algorithm PTRS on page 43 of the paper. This is a transformation method, which for NumPy is implemented in the C code as the function random_poisson_ptrs. The method, which Hörmann calls the transformed rejection with squeeze, combines inversion and rejection methods.

##### Octave

Octave is intended to be a GNU clone of MATLAB, so you would suspect it uses the same methods as MATLAB for generating Poisson random variates. But the Octave function poissrnd uses different methods. The code reveals it generates the Poisson variates with a function called prand. It considers different cases depending on the value of the Poisson parameter $$\lambda$$, as well as whether a single variate (that is, a scalar) or a vector or matrix of Poisson variates is being generated.

In total, the Octave function prand uses five different methods. For two of the methods, the documentation cites methods from the classic book Numerical Recipes in C (the 1992 edition); see next section. To generate a single Poisson variate with Poisson parameter $$\lambda \leq 12$$, the Octave function prand uses the direct method based on inter-arrival times.

##### Numerical Recipes (Fortran, C and C++)

Numerical Recipes is a classic book by Press, Teukolsky, Vetterling and Flannery on numerical methods. The book comes in different editions, reflecting different publication years and computer languages. (In the first two editions of the book, the authors implemented the algorithms in Fortran and C respectively.)

For generating Poisson variates, the book’s content seems not to have changed over the editions that I looked at, which covered the programming languages Fortran (77 and 90), C, and C++. The authors cover Poisson generation in Section 7.3 in the Fortran and C editions. In the third edition of Numerical Recipes, they implement their methods in C++ in Section 7.3.12.

For small values of Poisson parameter $$\lambda$$, Numerical Recipes uses the direct method. For $$\lambda >12$$ values, an acceptance-rejection method is used, which relies upon finding a continuous version of the discrete Poisson probability distribution.
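The continuous envelope in question is a Cauchy (Lorentzian) density centred near $$\lambda$$. Here is my own hedged Python sketch of this idea, loosely modelled on the poidev routine in Numerical Recipes but not a line-for-line translation (in particular, it drops the caching of constants between calls):

```python
import math
import random

def poisson_cauchy_rejection(lam, rng=random):
    # Rejection sampling for Poisson(lam) with a Cauchy envelope:
    # propose em = lam + sqrt(2*lam) * tan(pi * U), then accept
    # floor(em) with probability proportional to the Poisson pmf
    # there divided by the envelope density.
    sq = math.sqrt(2.0 * lam)
    loglam = math.log(lam)
    g = lam * loglam - math.lgamma(lam + 1.0)
    while True:
        while True:
            y = math.tan(math.pi * rng.random())
            em = sq * y + lam
            if em >= 0.0:
                break  # discard negative proposals
        em = math.floor(em)
        t = 0.9 * (1.0 + y * y) * math.exp(em * loglam - math.lgamma(em + 1.0) - g)
        if rng.random() <= t:
            return int(em)
```

The factor 0.9 keeps the acceptance ratio below one while keeping the expected number of proposals small and, crucially, independent of $$\lambda$$.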

##### GSL Library (C)

In the GSL library, one can use the function gsl_ran_poisson, which uses the direct method of exponential inter-arrival times. The code, which can be viewed here, simply cites Knuth (presumably the second volume).

##### NAG Library (C)

Although I didn’t see the code, it appears that the function nag_rand_poisson (g05tjc) in the NAG library also uses the direct method, based on the material in the second volume of the series by Knuth. But in a 1979 paper, Atkinson says that the NAG library uses a method from the 1974 paper by Ahrens and Dieter.

##### MKL library (C)

In the MKL C library written by Intel, they seem to have three functions or methods for generating Poisson variates.

The first function is called VSL_RNG_METHOD_POISSON_PTPE, which does the following for a Poisson distribution with parameter $$\Lambda$$:

> If Λ ≥ 27, random numbers are generated by PTPE method. Otherwise, a combination of inverse transformation and table lookup methods is used. The PTPE method is a variation of the acceptance/rejection method that uses linear (on the fraction close to the distribution mode) and exponential (at the distribution tails) functions as majorizing functions. To avoid time-consuming acceptance/rejection checks, areas with zero probability of rejection are introduced and a squeezing technique is applied.

This function uses the so-called PTPE method, which is outlined in a 1981 technical report by Schmeiser and Kachitvichyanukul.

The second function is called VSL_RNG_METHOD_POISSON_POISNORM, which does the following :

> If Λ < 1, the random numbers are generated by combination of inverse transformation and table lookup methods. Otherwise, they are produced through transformation of the normally distributed random numbers.

The third function is called VSL_RNG_METHOD_POISSONV_POISNORM, which does the following:

> If Λ < 0.0625, the random numbers are generated by inverse transformation method. Otherwise, they are produced through transformation of normally distributed random numbers.

##### cuRAND (C)

Finally, Nvidia’s CUDA Random Number Generation library (cuRAND) has a function for generating Poisson variates. To see the C code, copies of it can be found in various GitHub repositories, such as this one. The cuRAND function curand_poisson uses the direct method for Poisson parameter values less than 64. For parameter values greater than 4000, it uses a normal approximation (rounded to the nearest integer).

For other values, the function curand_poisson uses a rejection method based on an approximation of the incomplete gamma function; see the function curand_poisson_gammainc. The book by Fishman is cited; see Section 8.16.
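The normal approximation used for very large parameter values amounts to rounding a Gaussian with matching mean and variance. A minimal sketch in Python (my own code, not cuRAND's; the threshold of 4000 is cuRAND's choice):

```python
import math
import random

def poisson_normal_approx(lam, rng=random):
    # Approximate Poisson(lam) by a normal variable with mean lam and
    # variance lam, rounded to the nearest integer; only reasonable for
    # large lam, where the skew of the Poisson distribution is small.
    x = rng.gauss(lam, math.sqrt(lam))
    return max(0, round(x))

print(poisson_normal_approx(5000.0))
```

This takes constant time regardless of $$\lambda$$, which is the whole point: for $$\lambda$$ in the thousands, the direct method would need thousands of uniform variates per Poisson variate.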

## Further reading

For various Poisson simulation methods, see the stochastic simulation books by Devroye (Section X.3) or Fishman (Section 8.16). The book by Gentle (Section 5.2.8) also briefly covers Poisson variables. Kuhl wrote a paper on the history of generating random variates. It has some interesting historical points, but I was not entirely impressed by the quality or detail of the paper.

Of course, it’s a good idea to look at the citations that the different functions use.

Finally, here is a list of the papers I mentioned in this post:

• 1974, Ahrens and Dieter, Computer methods for sampling from gamma, beta, Poisson and binomial distributions;
• 1976, Fishman, Sampling from the Poisson distribution on a computer;
• 1979, Atkinson, The computer generation of Poisson random variables;
• 1979, Atkinson, Recent developments in the computer generation of Poisson random variables;
• 1980, Ahrens and Dieter, Sampling from binomial and Poisson distributions: a method with bounded computation times;
• 1980, Devroye, The Computer Generation of Poisson Random Variables;
• 1981, Schmeiser and Kachitvichyanukul, Poisson Random Variate Generation;
• 1982, Ahrens and Dieter, Computer generation of Poisson deviates from modified normal distributions;
• 1983, Ripley, Computer Generation of Random Variables: A Tutorial;
• 1993, Hörmann, The transformed rejection method for generating Poisson random variables.

## Simulating Poisson random variables – Direct method

If you were to write from scratch a program that simulates a homogeneous Poisson point process, the trickiest part would be generating the random number of points, which requires simulating a Poisson random variable. In previous posts, here and here, I’ve simply used the inbuilt functions for simulating (or generating) Poisson random variables (or variates). (In the literature, researchers describe methods on how to generate random deviates or variates. But, in my informal way, I will often say simulate random variables or generate random variates, somewhat interchangeably.)

But how would one create such a function using just a standard uniform random variate generator?

The generator or method used depends on the value of the Poisson parameter, denoted here by $$\lambda$$, which is the mean (as well as the variance) of a random variable with a Poisson distribution. If this parameter value is small, which in practice means less than some threshold between 10 and 30, then a direct simulation method can be used to generate Poisson random variates.

For large $$\lambda$$ values, other methods are generally used, such as rejection or (highly accurate) approximation methods. In the book Non-uniform random variate generation, the author Luc Devroye groups the methods into five categories (Section X.3.2), which I briefly describe in the next post. The first of these categories covers the method that I mentioned above, which is what I’ll cover in this post, with some code in C and C#. (I also present some code in MATLAB, but you would never use it instead of the inbuilt function poissrnd.)

In the next post, I’ll describe other types of Poisson simulation methods, and I’ll detail which simulation methods various programming libraries use.

### Direct method

An elegant and natural method for simulating Poisson variates uses a result based on the homogeneous Poisson stochastic process. The points in time at which a given homogeneous Poisson stochastic process increases form a Poisson point process on the real line. Confusingly, the term Poisson process is often used to refer to both the stochastic process and the point process, but there is a slight difference.

##### Using exponential random variables

The times when the Poisson stochastic process increases are called arrival times or occurrence times, as in classic stochastic models they represent the arrivals or occurrences of something, such as phone calls over time. The differences between consecutive times are called inter-arrival times or inter-occurrence times. The inter-arrival times of a homogeneous Poisson process form independent exponential random variables, a result known as the Interval Theorem.

Using this connection to the Poisson stochastic process, we can generate exponential variables $$E_1$$, $$E_2, \dots$$, and add them up. The number of exponential variables we can add before the resulting sum exceeds one gives a Poisson random variable. That is, if we define $$N$$ to be the largest $$n$$ such that
$$\sum_{k=1}^n E_k \leq 1,$$
then $$N$$ is a random variable distributed according to a Poisson distribution.

Generating exponential variates is easily done using the inverse transform method. For a uniform random variable $$U$$ on the unit interval $$(0,1)$$, the transformation $$E= -\log(U)/\lambda$$ gives an exponential random variable with mean $$1/\lambda$$.

##### Using uniform random variables

To reduce computational overhead, the direct method with exponential random variables is often reformulated, using logarithmic identities, in terms of products of uniform random variables instead of sums of exponential random variables. Then, using standard uniform random variables $$U_1, U_2,\dots$$, we define $$N$$ to be the largest $$n$$ such that
$$\prod_{k=1}^n U_k \geq e^{-\lambda}.$$

These two different formulations of the same method are captured by Lemma 3.2 and Lemma 3.3 in Chapter 10 of Devroye’s book.
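The equivalence of the two formulations is just a rearrangement of logarithms: since $$E_k=-\log(U_k)/\lambda$$,

```latex
\sum_{k=1}^{n} E_k
= -\frac{1}{\lambda}\sum_{k=1}^{n}\log U_k
= -\frac{1}{\lambda}\log\prod_{k=1}^{n} U_k \,,
```

so the sum of the exponential variables exceeds one exactly when the product of the uniform variables drops below $$e^{-\lambda}$$. The reformulation trades a logarithm evaluation per variate for a single comparison against the precomputed constant $$e^{-\lambda}$$.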

##### Example in MATLAB

In MATLAB, we can implement this method with the first formulation in a function with a simple while-loop:

```matlab
function N = funPoissonLoop(lambda)
    T = 0;  % initialize sum of exponential variables as zero
    n = -1; % initialize counting variable as negative one

    while (T < 1)
        E = -(1/lambda)*log(rand(1)); % generate exponential random variable
        T = T + E; % update sum of exponential variables
        n = n + 1; % update number of exponential variables
    end
    N = n;
end
```


But, as I said before, don’t use this code instead of the inbuilt function poissrnd.

If you want to be a bit more tricky, you could achieve the same result by using recursion:

```matlab
function N = funPoissonRecursive(lambda)
    T = 0;  % initialize sum of exponential variables as zero
    n = -1; % initialize counting variable as negative one

    % run (recursive) exponential step function
    [~, N] = funStepExp(lambda, T, n);

    function [T, N] = funStepExp(nu, S, m)
        if (S < 1)
            % run if sum of exponential variables is not high enough

            % generate exponential random variable
            E = (-log(rand(1)))/nu;
            S = S + E; % update sum of exponential variables
            m = m + 1; % update number of exponential variables

            % recursively call function again
            [T, N] = funStepExp(nu, S, m);
        else
            T = S;
            N = m;
        end
    end
end
```


Note how the code recursively calls the function funStepExp, which generates an exponential variable each time.

In the Code section below I describe my code in C and C#, using the second formulation.

### Origins

Some people attribute the direct method, based on inter-arrival times, to (or, at least, cite) Donald Knuth, who details it in the second volume of his classic series of books, but I doubt that the great Knuth was the first to have this idea. For example, a quick search on Google Scholar found a paper by K. D. Tocher on computers and random sampling, in which Tocher proposes the direct method in 1954, some years before Knuth started publishing his classic series.

The direct method for Poisson sampling relies upon the Interval Theorem. Günter Last studied its history in a recent book authored by him and Matthew Penrose; see Chapter 7 and its corresponding historical footnotes in Section C of the appendix. (A free version of the book can be found here.) People connected to the result include Robert Ellis and William Feller.

### Other methods

The direct method generates Poisson random variables (or, I should say, Poisson random variates) exactly. But it can be too slow for large values of the Poisson parameter (that is, the mean) $$\lambda$$. This has motivated researchers to develop other methods, which I will mention in the next post.

### Code

I wrote some code that simulates Poisson random variables by employing the direct method based on exponential inter-arrival times. I have implemented the second formulation (using just uniform variables) in the C and C# languages. In the code, I have used a while-loop to implement the method, but I could have also used recursion, as I did in the MATLAB example above and have also done in Python (with NumPy).

For an empirical test, the code also calculates the mean and variance of a collection of Poisson variables. For a large enough number of variables, the sample mean and the variance will closely agree with each other, converging to the same value.
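For a rough idea of what such a function looks like, here is my own minimal Python sketch of the second formulation with a plain while-loop (it is not the recursive NumPy version mentioned above):

```python
import math
import random

def poisson_direct(lam, rng=random):
    # Direct method, second formulation: multiply uniform variables
    # together until the product drops below exp(-lam); the count of
    # factors, minus one, is a Poisson(lam) variate.
    exp_lam = math.exp(-lam)
    prod = 1.0
    n = -1
    while True:
        prod *= rng.random()
        n += 1
        if prod <= exp_lam:
            return n
```

As with the C and C# versions, an empirical check is that the sample mean and sample variance of many such variates should both converge to $$\lambda$$.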

##### C

Warning: My C code uses rand(), the standard pseudo-random number function in C, which is known to fail certain tests of randomness. The function is adequate for casual simulation work, but gives poor results for large numbers of simulations. For anything serious, replace this function with another pseudo-random number generator.

The code for generating a single Poisson variate is fairly straightforward in C. Here’s a sample of the code with just the Poisson function:

```c
//Poisson function -- returns a single Poisson random variable
int funPoissonSingle(double lambda)
{
    double exp_lambda = exp(-lambda); //constant for terminating loop
    double randUni;                   //uniform variable
    double prodUni = 1;               //product of uniform variables
    int randPoisson = -1;             //Poisson variable

    do
    {
        randUni = funUniformSingle();  //generate uniform variable
        prodUni = prodUni * randUni;   //update product
        randPoisson++;                 //increase Poisson variable
    } while (prodUni > exp_lambda);

    return randPoisson;
}
```


For generating multiple variates, the code becomes more complicated, as one needs to use pointers, due to the limited or static memory capabilities of C. Again, the function uses the pseudo-random number generator in C.

##### C#

The code for generating a single Poisson variate is also straightforward in C#. Here’s the function in C#:

```csharp
//Poisson function -- returns a single Poisson random variable
public int funPoissonSingle(double lambda)
{
    double exp_lambda = Math.Exp(-lambda); //constant for terminating loop
    double randUni;                        //uniform variable
    double prodUni = 1;                    //product of uniform variables
    int randPoisson = -1;                  //Poisson variable

    do
    {
        randUni = funUniformSingle();  //generate uniform variable
        prodUni = prodUni * randUni;   //update product
        randPoisson++;                 //increase Poisson variable
    } while (prodUni > exp_lambda);

    return randPoisson;
}
```


Generalizing the code so it generates multiple variates requires little change compared to C, as C# is a much more modern language.

In this post on generating Poisson variates, John D. Cook briefly discusses the direct method for small $$\lambda$$ values and a rejection method from a 1979 paper by Atkinson, which I will mention in the next post. He presents his C# code in this post.