Shot noise

Given a mathematical model based on a point process, a quantity of possible interest is the sum of some function applied to each point of the point process. This random sum is called shot noise. The name comes from early mathematical models of the noise measured in old electronic devices, which was likened to shot (as used in guns) hitting a surface.

Researchers have long studied shot noise induced by a point process. One particular application is wireless network models, in which the interference term is an example of shot noise. It is also possible to construct new point processes, called shot noise Cox point processes, based on the shot noise of some initial point process.

For such applications, we need a more formal definition of shot noise.

Definition

Shot noise of a point process

We consider a point process \(\Phi=\{X_i\}_i\) defined on some space \(\mathbb{S}\), which is often \(\mathbb{R}^n\), and a non-negative function \(f\) with domain \(\mathbb{S}\), so \(f:\mathbb{S} \rightarrow [0,\infty)\). This function \(f\) is called the response function.

Then the shot noise is defined as
$$
I= \sum_{X_i\in \Phi} f(X_i)\,.
$$

Shot noise of a marked point process

The previous definition of shot noise can be generalized by considering a marked point process \(\Phi'=\{(X_i, M_i)\}_i\), where each point \(X_i\) now has a random mark \(M_i\), which can be a random variable or some other random object taking values in some space \(\mathbb{M}\). Then for a response function \(g:\mathbb{S}\times \mathbb{M} \rightarrow [0,\infty)\), the shot noise is defined as
$$
I'= \sum_{(X_i, M_i)\in \Phi'} g(X_i,M_i)\,.
$$

Properties

Given a point process on a space, like the plane, the shot noise at any single point is simply a random variable. If we consider the shot noise at every point in some subset of the space, then the shot noise forms a random field, where we recall that a random field is simply a collection of random variables indexed by some set. (By convention, the indexing set tends to be Euclidean space or a manifold.) The shot noise can also be considered as a random measure, for example
$$
I(B)= \sum_{X_i\in \Phi\cap B} f(X_i)\,,
$$
where \(B\subseteq \mathbb{S}\). This makes sense as the point process \(\Phi\) is an example of a random (counting) measure.

For Poisson point processes, researchers have studied the resulting shot noise random variable or field. For example, given a homogeneous Poisson point process on \(\mathbb{R}^d\), if the response function is a simple power-law \(f(x)=|x|^{-\beta}\), where \(\beta> d\) and \(|x|\) denotes the Euclidean distance from the origin, then the resulting shot noise is an alpha-stable random variable with parameter \(\alpha=d/\beta\).

For a general point process \(\Phi\) with intensity measure \(\Lambda\), the first moment of the shot noise is simply
$$
\mathbb{E}(I)= \int_{\mathbb{S}} f(x) \Lambda (dx) \,.
$$

This is a result of Campbell’s theorem or formula. A similar expression exists for the shot noise of a marked point process.
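As a quick check of this formula, here is a small Python simulation, a minimal sketch in which the intensity, disk radius and response function are all arbitrary choices, that estimates the first moment of the shot noise of a homogeneous Poisson point process on a disk and compares it with Campbell’s formula.

import numpy as np

# Estimate E(I) for shot noise over a homogeneous Poisson point process
# on a disk of radius R centred at the origin; all values are arbitrary.
rng = np.random.default_rng(seed=1)
lam = 10   # intensity of the Poisson point process
R = 2.0    # radius of the disk

def f(r):
    return np.exp(-r)  # response function of the distance r to the origin

numbSim = 10**4
I = np.zeros(numbSim)
for k in range(numbSim):
    numbPoints = rng.poisson(lam * np.pi * R**2)   # Poisson number of points
    r = R * np.sqrt(rng.uniform(size=numbPoints))  # uniform points on the disk
    I[k] = np.sum(f(r))

# Campbell's formula: E(I) = lam * int_disk exp(-|x|) dx
#                          = 2 * pi * lam * (1 - (1 + R) * exp(-R))
meanTheory = 2 * np.pi * lam * (1 - (1 + R) * np.exp(-R))
print(I.mean(), meanTheory)  # the two values should agree closely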

Some history

Shot noise has been studied for over a century in science. In physics, Walter Schottky did research on shot noise in Germany at the beginning of the 20th century. In the same era, Norman R. Campbell studied shot noise in Britain and wrote two key papers, one of which contains a result now called Campbell’s theorem or Campbell’s formula, among other names, which is a fundamental result in point process theory. Campbell was a physicist, but his work contains this mathematical result, for which he credited the famed pure mathematician G. H. Hardy.

(It’s interesting to note that Hardy claimed years later that, given he did pure mathematics, none of his work would lead to applications, but that claim is simply not true for this and other reasons.)

The work on the physical process of shot noise motivated more probability-oriented papers on shot noise, including:

  • 1944, S. O. Rice, Mathematical Analysis of Random Noise;
  • 1960, Gilbert and Pollak, Amplitude distribution of shot noise;
  • 1971, Daley, The definition of a multi-dimensional generalization of shot noise;
  • 1977, J. Rice, On generalized shot noise;
  • 1990, Lowen and Teich, Power-law shot noise.

Further reading

As a model for interference in wireless networks, shot noise is covered in books such as the two-volume textbook Stochastic Geometry and Wireless Networks by François Baccelli and Bartek Błaszczyszyn, where the first volume is on theory and the second volume is on applications. Martin Haenggi wrote a very readable introductory book called Stochastic Geometry for Wireless Networks.

Point process notation

One tricky part about learning point processes is the use of different notation.  In this post I cover some basic notation used in the theory of point processes.

The reason for the different notation is the different interpretations of a point process, which is where we will start. For those unfamiliar with them, I suggest the previous post for more details on the definition of a point process.

Interpretation

Historically, there are a couple of main interpretations of a point process, which is also called a random point field. The different interpretations partly explain the various terminology and notation used in the theory of point processes, but now a standard mathematical approach is used, as covered in more detail in a previous post.

There are different ways to interpret a point process, which is often denoted by a single letter, for example, \(N\) or \(\Phi\). If the point process is defined on a space like the real line, where the points can be ordered, then additional interpretations exist, but mathematicians assume the order of the points does not matter, limiting the possible interpretations.

Random measures

The now standard definition of a point process is given in terms of random measures.

A point process can be interpreted as a random counting measure.

More specifically, a point process is defined as a mapping from a sample space \(\Omega\) to the space of counting measures \(\mathbb{M}\), meaning that each realization of a point process is a counting measure \(\phi\in \mathbb{M}\).

Notation

The standard interpretation of a point process as a random (counting) measure means that point process theory borrows notation heavily from measure theory and calculus. For example, in measure theory we can write a (non-random) counting measure as \(\#\), so \(\#(B)=n\) is how we write that the set \(B\) contains \(n\) points. We can then write the number of points of a point process \(\Phi\) located in some (Borel) set \(B\) as \(N(B) =\#( B \cap \Phi)\), where \(N(B)\) is a random variable. In this expression, the point process is denoted by \(\Phi\), while \(N(B)\) is the number of points of \(\Phi\) in \(B\), meaning \(N\) is a random counting measure.

The main interpretations of point processes as random sets and counting measures are captured with the notation:

  • \(\Phi\) is a set of random points.
  • \(\Phi(B)\) is a random variable that gives the number of points of \(\Phi\) located in the (Borel) set \(B\).

This is the notation often used in point process theory. It implies
$$
\Phi(B) =\#(B \cap \Phi).
$$

We now look at how this notation is used in point process theory.

Sums

If \(f\) is some (measurable) function on the underlying space \(\mathbb{S}\), such as Euclidean space \(\mathbb{R}^d\), then we can write the sum of \(f(x)\) over all the points of a simple point process \(\Phi\) as
$$
\sum_{x\in \Phi}f(x)\,,
$$
where we are using the random set interpretation.

For any point process \(\Phi\), we can also write the sum as
$$
\int_{\mathbb{S}} f(x) \,\Phi(dx) \,,
$$
which highlights the interpretation of the point process \(\Phi\) as a random counting measure. Of course, we can use different integral notation, giving, for example, the expression
$$
\int_{\mathbb{S}} f \,d\Phi \,,
$$
which denotes the same sum.

We can illustrate the dual interpretation of a point process by writing the number of points of a simple point process \(\Phi\) existing in a set \(B\) as
$$
\Phi(B)= \sum_{x\in \Phi}1_B(x)\,,
$$
where the indicator function \(1_B(x) =1\) if the point \(x\) exists in the set \(B\), and \(1_B(x) =0\) otherwise. In this setting, \(1_B(x)\), viewed as a function of \(B\), is also known as the Dirac measure \(\delta_x\), as it gives a measure of the set \(B\). We can see in this expression that the random measure interpretation is on the left-hand side, while the random set notation is on the right-hand side.
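To make the dual notation concrete, here is a small illustrative Python snippet; the point pattern, window and set \(B\) are all arbitrary choices, and the count \(\Phi(B)\) is computed exactly as the sum of indicator values.

import numpy as np

rng = np.random.default_rng(seed=7)
# A realization phi of a point process: here 20 uniform points on the
# unit square (an arbitrary choice of point pattern).
phi = rng.uniform(size=(20, 2))

# The set B: the lower-left quarter [0, 0.5] x [0, 0.5] of the square.
indicatorB = np.all(phi <= 0.5, axis=1)  # the values 1_B(x) for all x in phi

# phi(B) is the sum of the indicators 1_B(x) over the points x in phi.
print(indicatorB.sum())  # the count of points of this realization in B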

Expectations

We can write the average or expected value of a sum of functions over a simple point process \(\Phi\) as
$$
\mathbb{E}\left[\sum_{x\in \Phi}f(x)\right] \,,
$$
or for any point process \(\Phi\) as
$$
\int_{\textbf{N}}\sum_{x\in \Phi}f(x) \mathbb{P}(d\Phi)\,,
$$
where \(\mathbb{P}\) is an appropriate probability measure defined on the space of counting functions \(\textbf{N}\), thus illustrating the random measure interpretation.

We can write the expected value of \(\Phi(B)\), which is the definition of the intensity measure of a point process \(\Phi\), as
$$
\mathbb{E}[\Phi(B)]=\mathbb{E}\left( \sum_{x\in \Phi}1_B(x)\right) \qquad \text{or} \qquad \mathbb{E}[\Phi(B)]=\int_{\textbf{N}}\sum_{x\in \Phi}1_B(x) \mathbb{P}(d\Phi) \,,
$$
which is also known as the mean measure or first moment measure of \(\Phi\).

Events

In probability we want to describe the behaviour of certain events, such as flipping at least three heads across ten coin flips. For point processes, events are simply configurations with a certain (geometric) property, such as no points existing in a certain region or all the points being a fixed minimum distance from each other.

Typically, when being mathematically abstract, we denote an event with a single letter, such as \(\Gamma\). Then to denote that a point process satisfies this condition we write \(\Phi\in \Gamma\). In other words, the point process \(\Phi\) has the property \(\Gamma\). We can then write the probability of the event (or configuration) \(\Gamma\) occurring as
$$
\mathbb{P}(\Gamma)= \mathbb{P}(\Phi\in \Gamma ) \,.
$$

Uppercase and subscript notation

The convention in probability is usually to denote random objects, such as random variables and point processes, with uppercase (or capital) letters. Conversely, a non-random object, such as the realization of a random variable or point process, is denoted by a lowercase letter. For example, \(\Phi\) is a point process, while \(\phi\) is a point pattern, which may be a realization of the point process \(\Phi\).

With this convention, we can denote an arbitrary point of a point process \(\Phi\) by \(X\), meaning \(X\in \Phi\). (But such a point is also a point on the underlying non-random space \(\mathbb{S}\) on which the point process \(\Phi\) is defined.) We also see lowercase used for the point, giving \(x\in \Phi\).

Sometimes subscripts are used to emphasize some type of numbering of points, giving, for example, two points \(X_1\in \Phi\) and \(X_2\in \Phi\). Sometimes authors will write something like

$$
\sum_{X_i\in \Phi}f(X_i)\,,
$$

but this is redundant notation, as \(X_i\) is a dummy variable, so you can omit the subscript in such an expression.

Some authors use a notation where the letter with and without a subscript denotes, respectively, the point process and a point belonging to the point process. Using this convention, we write, for example, \(X=\{ X_i\}_i\) and \(X_i\in X\).

Further reading

For point process theory, Wikipedia is a good place to start, particularly the articles on point processes and point process notation, though the former is too mathematical for a Wikipedia article. The standard reference on point processes is An Introduction to the Theory of Point Processes by Daley and Vere-Jones, which is now spread across volumes one and two, but I would not learn the subject with these books.

The classic text Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke covers point processes and the varying notation in Chapters 2 and 4. Similar material is covered in the previous edition by Stoyan, Kendall and Mecke. A more mathematical book that covers point processes and random sets is Stochastic and Integral Geometry by Schneider and Weil.

Point processes

A non-random collection of points located on some space is called a point pattern in spatial statistics. Informally, you can interpret a point process as a collection of random points scattered over some underlying mathematical space, meaning each outcome or realization of a point process forms a point pattern. Using such intuition gets you pretty far. But if we want to be mathematically formal, we need to use precise mathematical objects.

Historically, there are a couple of main interpretations of a point process, which is also called a random point field. But now a standard mathematical approach is used.

In this post I will cover the main definitions, terminology and some of the notation used in the theory and application of point processes. (I refer to the next post for more on the point process notation.) I won’t delve too much into the precise details, giving just an outline with references at the end.

Underlying mathematical space

We consider a point process defined on some underlying mathematical space \(\mathbb{S}\), which is sometimes called the carrier space or state space. We further assume that the space is measurable by having a Borel \(\sigma\)-algebra \(\mathcal{S}\).

In practice, the underlying space is usually the real line \(\mathbb{R}\), the plane \(\mathbb{R}^2\), or some other familiar mathematical space like a square lattice. More generally, a point process can be defined on any metric space, allowing for the notion of distance. Mathematicians study point processes in even more general settings by defining them on, for example, a locally compact second countable Hausdorff space. But such generality is typically not needed for most people and their applications.

Modern probability approach

In modern probability theory, if we want to define a random mathematical object, we start with a random experiment in the context of a probability space or triple \((\Omega,\mathcal{A},\mathbb{P})\), where:

  1. \(\Omega\) is a sample space, which is the set of all possible outcomes;
  2. \(\mathcal{A}\) is a \(\sigma\)-algebra or \(\sigma\)-field, which is a family of events (subsets of \(\Omega\));
  3. \(\mathbb{P}\) is a probability measure, which assigns probability to each event in \(\mathcal{A}\).

To gain some intuition, David Williams says to imagine that Tyche, Goddess of Chance, chooses a sample point \(\omega\in\Omega\) at random according to the law \(\mathbb{P}\) such that an event \(A\in \mathcal{A}\) has a probability given by \(\mathbb{P}(A)\), where we understand probability with our own intuition. To bring things back to Earth, we can also choose a sample point \(\omega\in\Omega\) by using some physical experiment, as long as it is truly random, such that  the probability of \(A\in \mathcal{A}\) happening is given by \(\mathbb{P}(A)\).

Now we can define random objects by using a certain measurable function or mapping that maps to a suitable space of objects. For example, a real-valued random variable is a measurable function from \(\Omega\) to the real line; a random matrix is a measurable function from \(\Omega\) to some space of matrices; and, as John Kingman quips in his classic book Poisson Processes, a random elephant is just a measurable function from \(\Omega\) to some suitable space of elephants.

But what space should we use for a point process? To answer that, we need to interpret a point process as a suitable mathematical object.

Interpretation

There are different ways to interpret a point process, which is often denoted by a single letter, for example, \(N\) or \(\Phi\). (The convention of using the Greek letter \(\Phi\) comes from German mathematicians, but some prefer not to use \(\Phi\), as it’s often used for the normal cumulative distribution function.) If the point process is defined on a space like the real line, where the points can be ordered, then additional interpretations exist, but mathematicians assume the order of the points does not matter, limiting the possible interpretations.

Random closed set

In mathematics a collection of distinct things is formalized by a mathematical object called a set. We say that a set contains elements or members, and a set never contains more than one of the same element. Sets are fundamental objects with set concepts and notation being found everywhere in mathematics.

We now define a common type of point process, which we can formalize with the concept of a set.

A point process is simple if the probability of all points of the point process being distinct is one.

In other words, for a simple point process, there is zero probability of two or more of its points being found in the same location of the underlying state space \(\mathbb{S}\), which brings us to our first interpretation.

A simple point process can be interpreted as a random closed set.

More specifically, we can interpret a simple point process as a (measurable) mapping from a sample space \(\Omega\) to the space of closed sets \(\mathbb{F}\), meaning that each realization of a simple point process is a closed set \(\phi\in \mathbb{F}\).

Point process theory has adopted the notation from set theory. For example, if we want to say some point, which we denote by \(x\), of the underlying space \(\mathbb{S}\) belongs to or is a member of a simple point process \(\Phi\), then we can simply write \(x\in \Phi\). We can also write a point process as \(\{x_i\}_i\) to highlight its interpretation as a random closed set of points.

The theory of random sets, which is a field of study in its own right, can be applied to simple point processes owing to this interpretation. But for non-simple point processes, we need another point process interpretation.

Random measures

Modern integration theory is based on measure theory, which revolves around the concept of a set function known as a measure. In addition to a couple of other properties, when you apply this function to a set, it gives a number, such as an integer or real number. For example, a counting measure gives you the number of elements in a set, which could be a subset \(B\), such as a region of the plane \(B \subset \mathbb{R}^2\). (The letter \(B\) is often used for sets in measure and probability theory as it’s typically assumed that the sets are Borel sets, which form a very large family of well-behaved sets in terms of measurability.) The concept of a counting measure gives us an interpretation of a point process, which has now become the standard one.

A point process can be interpreted as a random counting measure.

More specifically, we define a point process as a mapping from a sample space \(\Omega\) to the space of counting measures \(\mathbb{M}\), meaning that each realization of a point process is a counting measure \(\phi\in \mathbb{M}\). Some mathematicians even say a point process is just another name for a random counting measure. The techniques of random measure theory provide an alternative (and arguably the main) approach to studying point processes.

This standard interpretation of a point process means that point process theory borrows notation heavily from measure theory and calculus. For example, in measure theory we can write a (non-random) counting measure as \(\#\), so \(\#(B)=n\) is how we write that the set \(B\) contains \(n\) points. We can then write the number of points of a point process \(\Phi\) located in some (Borel) set \(B\) as \(N(B) =\#( B \cap \Phi)\), where \(N(B)\) is a random variable. In this expression, the point process is denoted by \(\Phi\), while \(N(B)\) is the number of points of \(\Phi\) in \(B\), meaning \(N\) is a random counting measure.

Important concepts

In the theory of point processes, like any other field of mathematics, there are various important concepts for understanding and proving various results. Without going into the details, these include shot noise, Campbell’s theorem, the Laplace functional, Palm calculus, void probabilities, and factorial moment measures. In future posts, I’ll detail some of these concepts.

Further reading

There are many, many books covering the fundamentals of modern probability theory, including those (in rough order of difficulty) by Grimmett and Stirzaker, Karr, Rosenthal, Shiryaev, Durrett, and Billingsley. A very quick introduction is given in this web article.

For point process theory, Wikipedia is a good place to start, particularly the articles on point processes and point process notation, though the former is too mathematical for a Wikipedia article. The standard reference on point processes is An Introduction to the Theory of Point Processes by Daley and Vere-Jones, which is now spread across volumes one and two, but I would not learn the subject with these books.

The classic text Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke covers point processes and the varying notation in Chapters 2 and 4. Similar material is covered in the previous edition by Stoyan, Kendall and Mecke. A more mathematical book that covers point processes and random sets is Stochastic and Integral Geometry by Schneider and Weil. Point processes are also covered in a recent readable tutorial on Palm calculus and Gibbs point processes, which will be the subject of another post.

Spatial statistics builds on point process theory, so spatial statistics books are good texts for learning the basics of point processes. I suggest the lecture notes by Baddeley, which form Chapter 1 of these published lectures, edited by Baddeley, Bárány, Schneider, and Weil. I always recommend the book Spatial Point Patterns: Methodology and Applications with R written by spatial statistics experts Baddeley, Rubak and Turner, which covers the spatial statistics (and point process simulation) R-package spatstat. Another good book is Statistical Inference and Simulation for Spatial Point Processes by Møller and Waagepetersen, but there are many more.

In recent years, point process theory (under the guise of stochastic geometry) has been used to model wireless networks. An early treatment of the subject is the two-volume textbook Stochastic Geometry and Wireless Networks by Baccelli and Błaszczyszyn, where the first volume is on theory and the second volume is on applications. Haenggi wrote a very readable introductory book called Stochastic Geometry for Wireless Networks.

For the more mathematically brave, there’s the recent book Random Measures, Theory and Applications by Kallenberg, who is the authority on the subject, having written earlier books, though these have now become obsolete with this recent publication. Another mathematically challenging book is The Theory of Random Sets by Molchanov, but it has less emphasis on point processes.

A very recent book (manuscript) is Random Measures, Point Processes, and Stochastic Geometry by Baccelli, Błaszczyszyn, and Karray, which contains much material, including new results and proofs. Finally, Last and Penrose wrote a mathematical monograph Lectures on the Poisson process, which is freely available online here.

Signal strengths of a wireless network

In two previous posts, here and here, I discussed the importance of the quantity called the signal-to-interference ratio, which is usually abbreviated as SIR, for studying communication in wireless networks. In everyday terms, for a listener to hear a certain speaker in a room full of people speaking, what matters is the ratio of the speaker’s volume to the sum of the volumes of everyone else heard by the listener. The SIR is the communication bottleneck for any receiver and transmitter pair in a wireless network.

But the strengths (or power values) of the signals are of course also important. In this post I will detail how we can model them using a simple network model with a single observer.

Propagation model

For a transmitter located at \(X_i\in \mathbb{R}^2\), researchers usually attempt to represent the received power of the signal \(P_i\) with a propagation model. Assuming the power is received at \(x\in \mathbb{R}^2\), this mathematical model consists of a random and a deterministic component taking the general form
$$
P_i(x)=F_i\,\ell(|X_i-x|) ,
$$
where \(\ell(r)\) is a non-negative function in \(r>0\) and \(F_i\) is a non-negative random variable.

The function \(\ell(r)\) is called the path loss function, and common choices include \(\ell(r)=(\kappa r)^{-\beta}\) and \(\ell(r)=\kappa e^{-\beta r}\), where \(\beta>0\) and \(\kappa>0\) are model constants.

The random variables \(F_i\) represent signal phenomena such as multi-path fading and shadowing (also called shadow fading), caused by the signal interacting with the physical environment such as buildings. These variables are often called fading or shadowing variables.

We assume the transmitter locations \(X_1,\dots,X_n\) are on the plane \(\mathbb{R}^2\). Researchers typically assume they form a random point process or, more precisely, the realization of a random point process.

From two dimensions to one dimension

For studying wireless networks, a popular technique is to consider a wireless network from the perspective of a single observer or user. Researchers then consider the incoming or received signals from the entire network at the location of this observer or user. They do this by considering the inverses of the signal strengths, namely

$$
L_i(x) := \frac{1}{P_i(x)}=\frac{1}{F_i \,\ell(|X_i-x|) }.
$$

Mathematically, this random function is simply a mapping from the two-dimensional plane \(\mathbb{R}^2\) to the one-dimensional non-negative real line \(\mathbb{R}_0^+=[0,\infty)\).

If the transmitters are located according to a non-random point pattern or a random point process, this random mapping generates a random point process on the non-negative real line. The resulting one-dimensional point process of the values \(L_1,L_2,\dots \) has been called (independently) the propagation (loss) process or the path loss (with fading) process. More recently, my co-authors and I decided to call it a projection process, but of course the precise name doesn’t matter.
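To illustrate, here is a small Python sketch that maps a single realization of a Poisson network on a disk to the corresponding projection process seen from an observer at the origin; the power-law path loss, the exponential (Rayleigh) fading, and all parameter values are arbitrary choices.

import numpy as np

rng = np.random.default_rng(seed=42)
lam = 1.0   # intensity of the transmitter point process
R = 10.0    # radius of the (disk) network region
kappa, beta = 1.0, 3.0  # path loss model constants

# Transmitter locations: a Poisson point process on the disk,
# with the observer placed at the origin.
numbPoints = rng.poisson(lam * np.pi * R**2)
r = R * np.sqrt(rng.uniform(size=numbPoints))  # distances to the observer

# iid fading variables; an exponential F corresponds to Rayleigh fading.
F = rng.exponential(scale=1.0, size=numbPoints)

# Signal powers P_i and their inverses L_i, which form the projection
# process on the non-negative real line.
P = F * (kappa * r)**(-beta)
L = 1 / P
print(np.sort(L)[:5])  # the five smallest propagation losses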

Intensity measure of signal strengths

Assuming a continuous monotonic path loss function \(\ell\) and the fading variables \(F_1, F_2,\dots\) are iid, if the transmitters form a stationary random point process with intensity \(\lambda\), then the inverse signal strengths \(L_1,L_2,\dots \) form a random point process on the non-negative real line with an intensity measure \(M\) given by

$$
M(t) =\lambda \pi \mathbb{E}( [\ell^{-1}( 1/(t F)) ]^2)\,,
$$

where \(\ell^{-1}\) is the generalized inverse of the function \(\ell\). This expression can be generalized for a non-stationary point process with general intensity measure \(\Lambda\).

The inverses \(1/L_1,1/L_2,\dots \), which are the signal strengths, form a point process on the non-negative real line with intensity measure

$$
\bar{M}(s) =\lambda \pi \mathbb{E}( [\ell^{-1}( s/F) ]^2).
$$

Poisson transmitters give Poisson signal strengths

Assuming a continuous monotonic path loss function \(\ell\) and the fading variables \(F_1, F_2,\dots\) are iid, if the transmitters form a Poisson point process with intensity \(\lambda\), then the inverse signal strengths \(L_1,L_2,\dots \) form a Poisson point process on the non-negative real line with the intensity measure \(M\).

If \(L_1,L_2,\dots \) form a Poisson point process, then the inverses \(1/L_1,1/L_2,\dots \) will also form a Poisson point process with intensity measure \(\bar{M}(s) =\lambda \pi \mathbb{E}( [\ell^{-1}( s/F) ]^2) \).

Propagation invariance

For \(\ell(r)=(\kappa r)^{-\beta}\), the expression for the intensity measure \(M\) reduces to
$$
M(t) = \lambda \pi t^{2/\beta} \mathbb{E}( F^{2/\beta})/\kappa^2.
$$

What’s striking here is that the information about the fading variable \(F\) is captured simply by one moment \(\mathbb{E}( F^{2/\beta}) \). This means that two different fading distributions will give the same results as long as this moment matches. My co-authors and I have called this observation propagation invariance.
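Here is a small illustrative Python experiment of propagation invariance; exponential and log-normal fading are rescaled so that the moment \(\mathbb{E}( F^{2/\beta})\) matches, and the resulting projection processes should then be statistically similar. (Strictly speaking, the invariance holds for an infinite homogeneous network, so the large finite disk used here only approximates it, and all parameter values are arbitrary choices.)

import numpy as np

rng = np.random.default_rng(seed=0)
lam, R = 1.0, 20.0       # transmitter intensity and network (disk) radius
kappa, beta = 1.0, 4.0   # path loss constants

def minLoss(samplerF):
    # Smallest propagation loss (ie strongest signal) in one realization.
    numbPoints = max(rng.poisson(lam * np.pi * R**2), 1)
    r = R * np.sqrt(rng.uniform(size=numbPoints))
    F = samplerF(numbPoints)
    return np.min(1 / (F * (kappa * r)**(-beta)))

def momentF(samplerF, numbSamples=10**6):
    return np.mean(samplerF(numbSamples)**(2 / beta))

samplerExp = lambda n: rng.exponential(scale=1.0, size=n)
samplerLogNormRaw = lambda n: rng.lognormal(mean=0.0, sigma=1.0, size=n)
# Rescale the log-normal fading so E(F^(2/beta)) matches the exponential case.
c = (momentF(samplerExp) / momentF(samplerLogNormRaw))**(beta / 2)
samplerLogNorm = lambda n: c * samplerLogNormRaw(n)

# Compare, for example, the mean smallest propagation loss under both models.
minExp = np.mean([minLoss(samplerExp) for _ in range(2000)])
minLogNorm = np.mean([minLoss(samplerLogNorm) for _ in range(2000)])
print(minExp, minLogNorm)  # the two estimates should be close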

Some history

To study just the (inverse) signal strengths as a point process on the non-negative real line was a very useful insight. It was made independently in these two papers:

  • 2008, Haenggi, A geometric interpretation of fading in wireless
    networks: Theory and applications;
  • 2010, Błaszczyszyn, Karray, and Klepper, Impact of the geometry, path-loss exponent and random shadowing on the mean interference factor in wireless cellular networks.

My co-authors and I presented a general expression for the intensity measure \(M\) in the paper:

  • 2018, Keeler, Ross and Xia, When do wireless network signals appear Poisson?.

This paper also contains examples of various network models.

Further reading

A good starting point on this topic is the Wikipedia article Stochastic geometry models of wireless networks. The paper that my co-authors and I wrote has details on the projection process.

With Bartek Błaszczyszyn, Sayan Mukherjee, and Martin Haenggi, I co-wrote a short monograph on SINR models called Stochastic Geometry Analysis of Cellular Networks, which is written at a slightly more advanced level. The book puts an emphasis on studying the point process formed from the inverse signal strengths, which we call the projection process.

The Standard Model of wireless networks

In the previous post I discussed the signal-to-interference ratio or SIR in wireless networks. If noise is included, then the quantity becomes the signal-to-interference-plus-noise ratio or just SINR. But I will just write about the SIR, as most results that hold for the SIR will also hold for the SINR without any great mathematical difficulty.

The SIR is an important quantity due to reasons coming from information theory.  If you’re unfamiliar  with it, I suggest reading the previous post.

In this post, I will describe a very popular mathematical model of the SIR, which I like to call the standard model. (This is not a term used in the literature as I have borrowed it from physics.)

Definition of SIR

To define the SIR, we consider a wireless network of \(n\) transmitters with positions located at \(X_1,\dots,X_n\) in some region of space. At some location \(x\), we write \(P_i(x)\) to denote the power value of a signal received at \(x\) from transmitter  \(X_i\). Then at location \(x\), the SIR with respect to transmitter \(X_i\) is
$$
\text{SIR}(x,X_i) := \frac{P_i(x)}{\sum\limits_{j\neq i} P_j(x)} .
$$

Researchers usually attempt to represent the received power of the signal \(P_i(x)\) with a propagation model. This mathematical model  consists of a random and a deterministic component given by
$$
P_i(x)=F_i\ell(|X_i-x|) ,
$$
where \(\ell(r)\) is a non-negative function in \(r\geq 0\) and \(F_i\) is a non-negative random variable. The function \(\ell(r)\) is often called the path loss function. The random variables \(F_i\) represent random fading or shadowing.

Standard model

Based on the three model components of fading, path loss, and transmitter locations, there are many combinations possible. That said, researchers generally (I would guess, say, 90 percent or more) use a single combination, which I call the standard model.

The three standard model assumptions are:

  1. Singular power law path loss \(\ell(r)=(\kappa r)^{-\beta}\).
  2. Exponential distribution for fading variables, which are independent and identically distributed (iid).
  3. Poisson point process for transmitter locations.

Why these three? Well, in short, because they work very well together. Incredibly, it’s sometimes possible to get a relatively simple mathematical expression for, say, the coverage probability \(\mathbb{P}[\text{SIR}(x,X_i)>\tau ]\), where \(\tau>0\).
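To make this concrete, here is a minimal Monte Carlo sketch, not the exact analysis, of the standard model, where the observer sits at the origin and connects to the nearest transmitter; the parameter values are arbitrary, and the finite disk only approximates an infinite network.

import numpy as np

rng = np.random.default_rng(seed=1)
lam, R = 1.0, 15.0       # Poisson intensity and (finite) network radius
kappa, beta = 1.0, 4.0   # singular power-law path loss constants
tau = 1.0                # SIR threshold

def sirNearest():
    # One snapshot of the standard model with the observer at the origin.
    numbPoints = max(rng.poisson(lam * np.pi * R**2), 2)
    r = R * np.sqrt(rng.uniform(size=numbPoints))    # Poisson transmitters
    F = rng.exponential(scale=1.0, size=numbPoints)  # iid exponential fading
    P = F * (kappa * r)**(-beta)                     # received powers
    iNearest = np.argmin(r)                          # serve from the nearest
    return P[iNearest] / (P.sum() - P[iNearest])

numbSim = 10**4
probCov = np.mean([sirNearest() > tau for _ in range(numbSim)])
print(probCov)  # Monte Carlo estimate of the coverage probability

For this particular setting, closed-form coverage expressions exist in the literature, against which a simulation like this can be checked.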

I’ll now detail the reasons more specifically.

Path loss

The function \(\ell(r)=(\kappa r)^{-\beta}\) is very simple, despite having a singularity at \(r=0\). This allows easy algebraic manipulation of equations.

Some, such as myself, are initially skeptical of this function, as it gives an infinitely strong signal at the transmitter due to the singularity in the function \(\ell(r)=(\kappa r)^{-\beta}\). More specifically, the path loss of the signal from transmitter \(X_i\) approaches infinity as \(x\) approaches \(X_i\).

But apparently, overall, the singularity does not have a significant impact on most mathematical results, at least qualitatively. That said, one can still observe consequences of this somewhat physically unrealistic model assumption. And I strongly doubt enough care is taken by researchers to observe and note this.

Fading and shadowing variables

Interestingly, the original reason why exponential variables were used is that it allowed the SIR problem to be reformulated into a problem involving the Laplace transform of a random variable, which for a random variable \(Y\) is defined as

$$
\mathcal{L}_Y(t)=\mathbb{E}(e^{- Y t}) \,,
$$

where \(t\geq 0\). (This is essentially the moment-generating function with \(-t\) instead of \(t\).)

The reason for this connection is that the tail distribution of an exponential variable \(F\) with mean \(\mu\)  is simply \(\mathbb{P}(F>t)= e^{-t/\mu}\).  In short, with the exponential assumption, various conditioning arguments eventually lead to Laplace transforms of random variables.
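To sketch the conditioning argument, suppose the desired signal comes from a transmitter at a fixed distance \(r\), so that \(\text{SIR}=F\ell(r)/I\), where \(I\) denotes the interference and \(F\) is the fading variable of the signal. If \(F\) is an exponential variable with mean \(\mu\), independent of \(I\), then conditioning on \(I\) gives
$$
\mathbb{P}(\text{SIR}>\tau)=\mathbb{P}\left(F>\frac{\tau I}{\ell(r)}\right)=\mathbb{E}\left(e^{-\tau I/[\mu \ell(r)]}\right)=\mathcal{L}_I\left(\frac{\tau}{\mu\ell(r)}\right)\,,
$$
so the coverage probability is simply the Laplace transform of the interference evaluated at a known point.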

Transmitter locations

No prizes for guessing that researchers overwhelmingly use a (homogeneous) Poisson point process for the transmitter (or receiver) locations. When developing mathematical models with point processes, if you can’t get any results with the Poisson point process, then abandon all hope.

It’s easier to work with this point process due to its independence property, which leads to another useful property. For the Poisson point process, the Palm distribution is known, which is the distribution of a point process conditioned on a point (or collection of points) existing at a specific location of the underlying space on which the point process is defined. In general, the Palm distribution is not known for many point processes.

Random propagation effects can lead to Poisson

A lesser known reason why researchers would use the Poisson point process is that, from the perspective of a single observer in the network, it can be used to capture the randomness in the signal strengths.  Poisson approximation results in probability imply that randomly perturbing the signal strengths can make signals appear more Poisson, by which I mean  the signal strengths behave stochastically or statistically as though they were created by a Poisson network of transmitters.

The end result is that a non-Poisson network can appear more Poisson, even if the transmitters do not resemble (the realization of) a Poisson point process. The source of randomness that makes a non-Poisson network appear more Poisson is the random propagation effects of fading, shadowing, randomly varying antenna gains, and so on, or some combination of these.

Further reading

A good starting point on this topic is the Wikipedia article Stochastic geometry models of wireless networks. This paper is also good:

  • 2009, Haenggi, Andrews, Baccelli, Dousse, Franceschetti, Stochastic Geometry and Random Graphs for the Analysis and Design of Wireless Networks.

This paper by my co-authors and me has some details on the standard model and why a general network model can behave Poisson in terms of the signal strengths:

  • 2018, Keeler, Ross and Xia, When do wireless network signals appear Poisson?.

Early books on the subject include the two-volume textbook Stochastic Geometry and Wireless Networks by François Baccelli and Bartek Błaszczyszyn, where the first volume is on theory and the second volume is on applications. Martin Haenggi wrote a very readable introductory book called Stochastic Geometry for Wireless Networks.

Finally, I co-wrote with Bartek Błaszczyszyn, Sayan Mukherjee, and Martin Haenggi a short monograph on SINR models called Stochastic Geometry Analysis of Cellular Networks, which is written at a slightly more advanced level. This book has a section on why signal strengths appear Poisson.

Signal-to-interference ratio in wireless networks

The fundamentals of information theory say that to successfully communicate across any potential communication link, the signal strength of the communication must be stronger than that of the background noise, which leads to the fundamental quantity known as the signal-to-noise ratio. Information theory holds in very general (or, in mathematical speak, abstract) settings. The communication could be, for example, a phone call on an old wired landline, two people talking in a bar, or a hand-written letter, for which the respective signals in these examples are the electrical current, the speaker’s voice, and the writing. (Respective examples of noise could be, for example, thermal noise in the wires, loud music, or coffee stains on the letter.)

In wireless networks, it’s possible for a receiver to simultaneously detect signals from multiple transmitters, but the receiver typically only wants to receive one signal. The other unwanted or interfering signals form a type of noise, which is usually called interference, and the other (interfering) transmitters are called interferers. Consequently, researchers working on wireless networks study the signal-to-interference ratio, which is usually abbreviated as SIR. Another name for the SIR is carrier-to-interference ratio.

If we also include background noise, which is coming not from the interferers, then the quantity becomes the signal-to-interference-plus-noise ratio or just SINR. But I will just write about SIR, though jumping from SIR to SINR is usually not difficult mathematically.

The concept of SIR makes successful communication more difficult to model and predict, as it doesn’t depend just on the distance of the communication link. Putting the concept in everyday terms, for a listener to hear a certain speaker in a room full of people all speaking to the listener, what matters is not simply the distance to the speaker, but rather the ratio of the speaker’s volume to the sum of the volumes of everyone else heard by the listener. The SIR is the communication bottleneck for any receiver and transmitter pair in a wireless network.

In wireless network research, much work has been done to examine and understand communication success in terms of interference and SIR, which has led to a popular mathematical model that incorporates how signals propagate and the locations of transmitters and receivers.

Definition

To define the SIR, we consider a wireless network of transmitters with positions located at \(X_1,\dots,X_n\) in some region of space. At some location \(x\), we write \(P_i(x)\) to denote the power value of a signal received at \(x\) from transmitter \(X_i\). Then at location \(x\), the SIR with respect to transmitter \(X_i\) is
$$
\text{SIR}(x,X_i) :=\frac{P_i(x)}{\sum\limits_{j\neq i} P_j(x)} =\frac{P_i(x)}{\sum\limits_{j=1}^{n} P_j(x)-P_i(x)} .
$$

The numerator is the signal and the denominator is the interference.  This ratio tells us that increasing the number of transmitters \(n\) decreases the original SIR values. But then, in exchange, there is a greater number of transmitters for the receiver to connect to, some of which may have larger \(P_i(x)\) values and, subsequently, SIR values. This delicate trade-off makes it challenging and interesting to mathematically analyze and design networks that deliver high SIR values.

Researchers usually assume that the SIR is random. A quantity of interest is the tail distribution of the SIR, namely

$$
\mathbb{P}[\text{SIR}(x,X_i)>\tau ] \,,
$$

where \(\tau>0\) is some parameter, sometimes called the SIR threshold. For a given value of \(\tau\), the probability \(\mathbb{P}[\text{SIR}(x,X_i)>\tau]\) is sometimes called the coverage probability, which is simply the probability that a signal coming from \(X_i\) can be received successfully at location \(x\).
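As a quick illustration of the definition, the SIR values for a single snapshot can be computed directly from the received powers; the power values in this Python snippet are made up for the example.

import numpy as np

# Made-up received powers P_1(x),...,P_5(x) at some location x.
P = np.array([2.0, 0.5, 0.1, 1.2, 0.3])

# The SIR with respect to each transmitter: its signal power divided
# by the sum of the powers of all the other transmitters.
SIR = P / (P.sum() - P)
print(SIR)  # for example, the first transmitter gives SIR = 2.0 / 2.1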

Mathematical models

Propagation

Researchers usually attempt to represent the received power of the signal \(P_i(x)\) with a propagation model. This mathematical model consists of a random and a deterministic component taking the general form
$$
P_i(x)=F_i\ell(|X_i-x|) ,
$$
where \(F_i\) is a non-negative random variable and \(\ell(r)\) is a non-negative function in \(r \geq 0\).

Path loss

The function \(\ell(r)\) is called the path loss function, and common choices include \(\ell(r)=(\kappa r)^{-\beta}\) and \(\ell(r)=\kappa e^{-\beta r}\), where \(\beta>0\) and \(\kappa>0\) are model constants, which need to be fitted to (or estimated with) real world data.

Researchers generally assume that the so-called path loss function \(\ell(r)\) is decreasing in \(r\), but actual path loss (that is, the change in signal strength over a path travelled) typically increases with distance \(r\). Researchers originally assumed path loss functions to be increasing, not decreasing, giving the alternative (but equivalent) propagation model
$$
P_i(x)= F_i/\ell(|X_i-x|).
$$

But nowadays researchers assume that the function \(\ell(r)\) is decreasing in \(r\). (Although, based on personal experience, there is still some disagreement on the convention.)

Fading and shadowing

With the random variable \(F_i\), researchers seek to represent signal phenomena such as multi-path fading and shadowing (also called shadow fading), caused by the signal interacting with the physical environment such as buildings. These variables are often called fading or shadowing variables, depending on what physical phenomena they are representing.

Typical distributions for fading variables include the exponential and gamma distributions, while the log-normal distribution is usually used for shadowing. The entire collection of fading or shadowing variables is nearly always assumed to be independent and identically distributed (iid), but very occasionally random fields are used to include a degree of statistical dependence between variables.

Transmitter locations

In general, we assume the transmitter locations \(X_1,\dots,X_n\) are on the plane \(\mathbb{R}^2\). To model interference, researchers initially proposed non-random models, but they were considered inaccurate and intractable. Now researchers typically use random point processes or, more precisely, the realizations of random point processes for the transmitter locations.

Not surprisingly, the first natural choice is the Poisson point process. Other point processes have been used such as Matérn and Thomas cluster point processes, and Matérn hard-core point processes, as well as determinantal point processes, which I’ll discuss in another post.

Some history

Early random models of wireless networks go back to the 60s and 70s, but these were based simply on geometry, meaning a transmitter could communicate successfully with a receiver if they were closer than some fixed distance. Edgar Gilbert created the field of continuum percolation with this significant paper:

  • 1961, Gilbert, Random plane networks.

Interest in random geometrical models of wireless networks continued into the 70s and 80s. But there was no SIR in these models.

Motivated by understanding SIR, researchers in the late 1990s and early 2000s started tackling SIR problems by using a random model based on techniques from stochastic geometry and point processes. Early papers include:

  • 1997, Baccelli, Klein, Lebourges, and Zuyev, Stochastic geometry and architecture of communication networks;
  • 2003, Baccelli and Błaszczyszyn, On a coverage process ranging from the Boolean model to the Poisson Voronoi tessellation, with applications to wireless communications;
  • 2006, Baccelli, Mühlethaler, and Błaszczyszyn, An Aloha protocol for multihop mobile wireless networks.

But they didn’t know that some of their results had already been discovered independently by researchers working on wireless networks in the early 1990s. These papers include:

  • 1994, Pupolin and Zorzi, Outage probability in multiple access packet radio networks in the presence of fading;
  • 1990, Sousa and Silvester, Optimum transmission ranges in a direct-sequence spread-spectrum multihop packet radio network.

The early work focused more on small-scale networks like wireless ad hoc networks. Then the focus shifted dramatically to mobile or cellular phone networks with the publication of the paper:

  • 2011, Andrews, Baccelli, Ganti, A tractable approach to coverage and rate in cellular networks.

It can be said with confidence that this paper inspired much of the interest in using point processes to develop models of wireless networks. The work generally considers the SINR in the downlink channel, for which the incoming signals originate from the phone base stations.

Further reading

A good starting point on this topic is the Wikipedia article Stochastic geometry models of wireless networks. This paper is also good:

  • 2009, Haenggi, Andrews, Baccelli, Dousse, Franceschetti, Stochastic Geometry and Random Graphs for the Analysis and Design of Wireless Networks.

Early books on the subject include the two-volume textbook Stochastic Geometry and Wireless Networks by François Baccelli and Bartek Błaszczyszyn, where the first volume is on theory and the second volume is on applications. Martin Haenggi wrote a very readable introductory book called Stochastic Geometry for Wireless Networks.

Finally, Bartek Błaszczyszyn, Sayan Mukherjee, Martin Haenggi, and I wrote a short book on SINR models called Stochastic Geometry Analysis of Cellular Networks, which is written at a slightly more advanced level. The book puts an emphasis on studying the point process formed from the inverse signal strengths, which we call the projection process.

Simulating Matérn hard-core point processes

If you wanted to create a point process with repulsion, a reasonable first attempt would be to build off a Poisson point process by removing points according to some rule to ensure that no two points were within a certain distance of each other. Using this natural idea, Bertil Matérn proposed a family of repulsive point processes called Matérn hard-core point processes.

More specifically, Matérn proposed several point processes, including two types of hard-core point processes now called Type I and Type II. (Matérn proposed a third type, called Type III, but it’s considerably harder to simulate on a computer, as detailed in this article.) These types of hard-core point processes are completely different to the Matérn cluster point process.

As I discussed in a previous post, the Poisson point process may not be adequate for representing point phenomena whose points exhibit large degrees of repulsion or clustering. I already covered the Matérn and Thomas cluster point processes, which show distinct clustering in their configurations. In this post, I’ll cover Matérn hard-core point processes. The Type I point process is the easier of the two, so I’ll start with that one.

Overview

Simulating Matérn hard-core point processes requires first simulating a homogeneous Poisson point process with an intensity \(\lambda>0\) on some simulation window, such as a rectangle, which is the simulation window I will use here. I have already written about simulating the homogeneous Poisson point processes on a rectangle and a disk, so those posts are good starting points.

Given the Poisson point process, the points then need to be thinned in such a manner to ensure that for each point, there is no other point within some fixed distance \(r>0\) of the point. This distance \(r>0\) is the radius of the hard core of each point.

I have already covered the point process operation of thinning. But it’s important to note here that in this construction a dependent thinning is being applied. (If I just applied an independent thinning, then the resulting point process would be another Poisson point process with no repulsion between points.)

Edge effects

The main trick behind sampling this point process is that it’s possible for points inside the simulation window to be thinned due to their closeness to points that are located outside the simulation window. In other words, points outside the simulation window can cause points inside the window to be thinned. (I discussed a very similar issue in the posts on the Matérn and Thomas cluster point processes.)

To remove these edge effects, the underlying Poisson point process must be simulated on an extended version of the simulation window. The points are then thinned according to a dependent thinning, which is covered in the next section. Then only the retained points inside the simulation window are kept and the remaining points are ignored. Consequently, the underlying Poisson points are simulated on an extended window, but we only see the final points inside the simulation window.

To create the extended simulation window, we add a strip of width \(r\) all around the simulation window. Why? Well, the distance \(r\) is the maximum distance from the simulation window that another point (outside the simulation window) can exist, while still causing points inside the simulation window to be thinned. This means it is impossible for a hypothetical point beyond this distance (outside the extended window) to cause a point inside the simulation window to be thinned.

Dependent thinning rules

Type I

For each point inside the simulation window, check if there are any other points (including those in the extended window) within distance \(r\) of the point. If no, then keep the point. If yes, then remove the point and the points that are within distance \(r\) of the point. The remaining points inside the simulation window form a Matérn Type I point process.

This is a relatively simple thinning rule, which only requires calculating all the inter-point distances. But it is also a very strong thinning rule, meaning that it removes many points. Depending on the Poisson point process intensity \(\lambda\) and core radius \(r\), it is quite possible that all the points are removed, resulting in an empty configuration.

Now we examine the case when the thinning rule is not as strong.

Type II

To create a Matérn Type II point process, we assign an independent uniform random variable to each point of the underlying Poisson point process defined on the extended window. In point process terminology, these random variables are called marks, resulting in a marked point process. In the context of the Matérn Type II point process, these random marks are usually called ages.

Then for each point in the simulation window, we consider all the other points within distance \(r\) of the point. If our point is the youngest (or, equivalently, the oldest) of these points, then the point is kept. In other words, the point is only kept if its random mark is smaller (or larger) than the random marks of all the other points within distance \(r\) of the point. The remaining points inside the simulation window form a Matérn Type II point process.
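The following Python code is a condensed sketch of the approach described above, implementing both thinning rules on an extended window; the parameter values are arbitrary choices.

import numpy as np

rng = np.random.default_rng(seed=3)
lam = 20.0  # intensity of the underlying Poisson point process
r = 0.1     # radius of the hard core
x0, x1, y0, y1 = 0.0, 1.0, 0.0, 1.0  # the simulation window

# Simulate the underlying Poisson point process on the extended window,
# which adds a strip of width r around the simulation window.
areaExt = (x1 - x0 + 2 * r) * (y1 - y0 + 2 * r)
numbPoints = rng.poisson(lam * areaExt)
xx = rng.uniform(x0 - r, x1 + r, numbPoints)
yy = rng.uniform(y0 - r, y1 + r, numbPoints)

# Squared distances between all pairs of points.
dist2 = (xx[:, None] - xx)**2 + (yy[:, None] - yy)**2
np.fill_diagonal(dist2, np.inf)  # ignore distances of points to themselves
closeNeighbor = dist2 < r**2     # pairs of points within distance r

# Type I: keep a point only if it has no neighbor within distance r.
keepI = ~closeNeighbor.any(axis=1)

# Type II: assign uniform marks (ages) and keep a point only if its mark
# is smaller than the marks of all its neighbors within distance r.
marks = rng.uniform(size=numbPoints)
marksNeighbor = np.where(closeNeighbor, marks[None, :], np.inf)
keepII = marks < marksNeighbor.min(axis=1)

# Keep only the retained points inside the (non-extended) simulation window.
inWindow = (xx >= x0) & (xx <= x1) & (yy >= y0) & (yy <= y1)
print((keepI & inWindow).sum(), (keepII & inWindow).sum())

Averaging these two counts over many simulations can be checked against the intensity expressions given in the next section.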

Intensity expressions

Using point process and probability theory, one can derive mathematical expressions for the intensities (that is, the average density of points per unit area). These closed-form expressions can then be used to check that the correct number of points are being generated on average over many simulations.

Type I

The intensity of the Type I point process is given by

\[\mu_1=\lambda e^{-\lambda \pi r^2},\]

where \(\pi r^2\) is simply the area of a core, so \(\lambda \pi r^2\) is the expected number of points of the underlying Poisson point process landing in a core.

Type II

The intensity of the Type II point process is given by

\[\mu_2=\frac{1}{\pi r^2}(1-e^{-\lambda \pi r^2}),\]

which can be written with the intensity of the Type I point process as

\[\mu_2=\frac{1}{\pi r^2}(1-\frac{\mu_1}{\lambda}).\]

Code

I wrote the sampling code in MATLAB and Python, which are, as usual, very similar to each other. The code, which is located here, simulates both Type I and II Matérn point processes. It also compares the empirical intensity to the values given by the mathematical expressions in the previous section.

MATLAB

The MATLAB code is here.

Python

The Python code is here.

Results

I have plotted single realizations of the Matérn Type I and II point processes, as well as the underlying Poisson point process, in the same window.

MATLAB

Python

Further reading

Matérn hard-core point processes are covered in standard books on the related fields of spatial statistics, point processes and stochastic geometry, such as the following: Spatial Point Patterns: Methodology and Applications with R by Baddeley, Rubak and Turner, on page 140; Statistical Analysis and Modelling of Spatial Point Patterns by Illian, Penttinen, Stoyan, and Stoyan, Section 6.5.2, starting on page 388; and Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke, Section 5.4, starting on page 176. The first two books are particularly good for beginners.

The aforementioned book Spatial Point Patterns: Methodology and Applications with R is written by spatial statistics experts Baddeley, Rubak and Turner. It covers the spatial statistics (and point process simulation) R-package spatstat, which has the functions rMaternI and rMaternII for simulating the two point processes respectively.

Simulating Poisson random variables – Survey of methods

In the previous post, I discussed how to sample or generate Poisson random variables or, more correctly, variates. I detailed a direct method that uses the fact that a Poisson stochastic process, which is directly related to a Poisson point process, has inter-arrival times that form independent and identically distributed exponential variables.

This direct method in turn can be easily reformulated so it only uses (standard) uniform variables to generate Poisson random variables. It is an easy and intuitive sampling method, explaining why it is often used. Using it, I wrote Poisson simulation code in MATLAB, Python, C and C#, which can be found here. (For another post, I later implemented the same Poisson sampling method in Fortran, which is located here.)
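For reference, here is a minimal Python sketch of this reformulation, which multiplies uniform random variables until the running product drops below \(e^{-\lambda}\).

import math
import random

def poissonDirect(lam):
    # Direct method: count the exponential inter-arrival times that fit
    # in the unit interval, reformulated as multiplying uniforms until
    # the running product falls below exp(-lam). (Note that exp(-lam)
    # underflows for very large lam, another reason to only use this
    # method for small parameter values.)
    threshold = math.exp(-lam)
    prodUniform = random.random()
    numbArrivals = 0
    while prodUniform > threshold:
        prodUniform *= random.random()
        numbArrivals += 1
    return numbArrivals

samples = [poissonDirect(5.0) for _ in range(10**5)]
print(sum(samples) / len(samples))  # should be close to lam = 5.0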

As elegant and exact as this simulation method is, it unfortunately decreases in speed as the Poisson parameter \(\lambda\) increases. In a tutorial published in 1983, Brian D. Ripley, a major figure in spatial statistics, says this about the direct method:

This is simple, but has expected time proportional to \(\lambda\). Some of its competitors use rejection methods with the envelope distribution that of the integer part of a continuous random variable, such as logistic, Laplace and normal mixed with exponential distributions.

We recall that acceptance-rejection or rejection methods involve simulating a random object, such as a random variable, by first simulating another random object of the same type that is easier to simulate. The simulation method then accepts or rejects these random objects based on a certain ratio. The distribution of the simpler random object that is first simulated is called the envelope distribution. Such rejection methods are one way to simulate Poisson variables.

In short, when simulating Poisson variables, the appropriate simulation algorithm should be chosen based on the Poisson parameter. Consequently, the code of most computer functions for generating Poisson variables will have an if-statement, using the direct method for small parameter values and another method for large parameter values. We now consider the other methods.

Different methods

Over the years there have been different methods proposed for producing Poisson random variates. In the book Non-Uniform Random Variate Generation, Luc Devroye groups the different methods into five categories coupled with his views. I’ll briefly describe these methods:

  1. Direct methods based on the homogeneous Poisson stochastic process having exponential inter-arrival times. These methods are simple, but the expected time is proportional to the Poisson parameter \(\lambda\).
  2. Inversion methods that search through a table of cumulative Poisson probabilities. Examples include the papers by Fishman (1976) and Atkinson (1979)*.
  3. Methods that use the recursive properties of the Poisson distribution. The paper by Ahrens and Dieter (1974) uses this approach, and its expected time of completion is proportional to \(\log\lambda\).
  4. Acceptance-rejection (or rejection) methods that give relatively fast but simple algorithms. Such methods are proposed in the papers by Atkinson (1979)*, Ahrens and Dieter (1980) and Devroye (1981) or the technical report by Schmeiser and Kachitvichyanukul (1981).
  5. Acceptance-complement methods that use a normal distribution as the starting distribution, such as the paper by Ahrens and Dieter (1982). This method is fast, but the code is rather long.

*Atkinson had (at least) two papers on generating Poisson variates published in 1979, but I believe Devroye is referring to the first paper, because in the second paper Atkinson compares methods proposed by others.

For the paper titles, see the Further reading section below.
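As an illustration of the inversion approach (category 2 above), here’s a bare-bones, table-free sketch in Python that searches sequentially through the cumulative Poisson probabilities. The cited papers use precomputed tables and other refinements, so treat this as the underlying idea only.

import math
import random

def poisson_inversion(lam):
    # Inversion by sequential search: walk through the cumulative
    # distribution until it exceeds a uniform variate. Only suitable
    # for small lam, as exp(-lam) underflows for large lam.
    u = random.random()
    k = 0
    p = math.exp(-lam)    # P(N = 0)
    cumulative = p
    while u > cumulative:
        k += 1
        p *= lam / k      # recursion P(N = k) = P(N = k - 1) * lam / k
        cumulative += p
    return k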

Methods implemented

In this section, I’ll state which proposed methods are used in various programming languages and numerical libraries. I won’t go into the details of how the methods work; I’ll just cite the papers instead.

MATLAB

For small \(\lambda\) values, the MATLAB function poissrnd uses the direct method (based on inter-arrival times) with a while-loop.

For \(\lambda\) values greater than fifteen, I believe that the MATLAB function poissrnd uses Algorithm PG from the 1974 paper by Ahrens and Dieter. But to come to this conclusion, I had to do some investigating. You can skip to the next section if you’re not interested, but now I’ll explain my reasoning.

The MATLAB documentation says it uses a method proposed by Ahrens and Dieter, but these two researchers developed a number of methods for generating Poisson variables. The MATLAB code cites Volume 2 of the classic series by Knuth, who says the method is due to Ahrens and Dieter, but he doesn’t give an exact citation in that section of the book. Confusingly, Knuth cites in his book a couple of papers by Ahrens and Dieter for generating different random variates. (Knuth later cites a seemingly relevant 1980 paper by Ahrens and Dieter, but that details another method.)

Both the MATLAB code and Knuth cite the book by Devroye. In his book (Exercise 3.5.2), Devroye discusses one method, among others, from a 1974 paper by Ahrens and Dieter. Another hint is given by examining the code of the MATLAB function poissrnd, which reveals that it uses the function randg to generate gamma variables. In the Ahrens and Dieter 1974 paper, their Algorithm PG (for producing Poisson variates) uses gamma random variables, and it’s suggested to use a parameter value of \(7/8\). This is the same parameter used in the MATLAB code and mentioned by Knuth, confirming that this is the right paper by Ahrens and Dieter.

In summary, for large \(\lambda\) the MATLAB function poissrnd uses Algorithm PG from the 1974 paper by Ahrens and Dieter, whereas for small values it uses the direct method, which they refer to as the multiplication method.
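For the curious, here’s my reading of the gamma-based reduction behind Algorithm PG, sketched in Python with NumPy. This is an illustration of the idea, not MATLAB’s actual implementation; the \(7/8\) factor is the parameter value mentioned above, and the small-parameter threshold is my own choice.

import numpy as np

rng = np.random.default_rng()

def poisson_gamma_reduction(lam, small_threshold=15):
    # Sketch of the gamma-based reduction (my reading of Algorithm PG);
    # the n-th arrival time of a unit-rate Poisson process is Gamma(n).
    count = 0
    while lam > small_threshold:
        n = int(7 / 8 * lam)             # the suggested 7/8 parameter
        x = rng.gamma(n)                 # time of the n-th arrival
        if x <= lam:
            count += n                   # n arrivals occurred before time lam
            lam -= x                     # continue on the remaining interval
        else:
            # given the n-th arrival at time x > lam, the first n - 1 arrivals
            # are uniform on (0, x), so the count before lam is binomial
            return count + rng.binomial(n - 1, lam / x)
    # finish off the small remaining parameter with the direct method
    m, prod, limit = -1, 1.0, np.exp(-lam)
    while prod > limit:
        prod *= rng.random()
        m += 1
    return count + m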

R

In R, the function rpois uses an algorithm outlined in the 1982 paper by Ahrens and Dieter. You can view the R source code here. The two cases for \(\lambda\) (or \(\mu\) in the paper) depend on whether \(\lambda\) is greater than ten or not. For small \(\lambda\), the R function rpois does not use the method based on inter-arrival times, but rather an inversion method based on a table of (cumulative) probabilities given by the Poisson probability distribution.

Python (NumPy)

In NumPy, the function numpy.random.poisson generates Poisson variates. The source code for the NumPy library is here, but for the Poisson function the underlying code is actually written in C; see the distributions.c file located here. For small Poisson parameter \(\lambda\), the code uses the direct method; see the function random_poisson_mult in the code.

For Poisson parameter \(\lambda \geq 10\), the comments in the code reveal that it uses a method from a 1993 paper by Hörmann; see Algorithm PTRS on page 43 of the paper. This is a transformation method, which for NumPy is implemented in the C code as the function random_poisson_ptrs. The method, which Hörmann calls the transformed rejection with squeeze, combines inversion and rejection methods.
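From the user’s side, all of this branching is hidden. As a quick usage example with NumPy’s newer Generator interface:

import numpy as np

rng = np.random.default_rng(seed=42)
print(rng.poisson(lam=5, size=10))   # ten Poisson(5) variates
print(rng.poisson(lam=50))           # a single Poisson(50) variate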

Octave

Octave is intended to be a GNU clone of MATLAB, so you would suspect it uses the same methods as MATLAB for generating Poisson random variates. But the Octave function poissrnd uses different methods. The code reveals it generates the Poisson variates with a function called prand. It considers different cases depending on the value of the Poisson parameter \(\lambda\), as well as whether a single variate (that is, a scalar) or a vector or matrix of Poisson variates is being generated.

In total, the Octave function prand uses five different methods. For two of the methods, the documentation cites methods from the classic book Numerical Recipes in C (the 1992 edition); see next section. To generate a single Poisson variate with Poisson parameter \(\lambda \leq 12\), the Octave function prand uses the direct method based on inter-arrival times.

Numerical Recipes (Fortran, C and C++)

The book Numerical Recipes is a classic by Press, Teukolsky, Vetterling and Flannery on numerical methods. The book comes in different editions, reflecting different publication years and computer languages. (In the first two editions of the book, the authors implemented the algorithms respectively in Fortran and C.)

For generating Poisson variates, the book’s contents seem not to have changed over the editions I looked at, which cover the programming languages Fortran (77 and 90), C, and C++. The authors cover Poisson generation in Section 7.3 in the Fortran and C editions. In the third edition of Numerical Recipes, they implement their methods in C++ in Section 7.3.12.

For small values of the Poisson parameter \(\lambda\), Numerical Recipes uses the direct method. For values \(\lambda > 12\), an acceptance-rejection method is used, which relies upon finding a continuous version of the discrete Poisson probability distribution.
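Here’s a Python transcription of that idea, based on my reading of the book’s poidev routine: a Cauchy (Lorentzian) density, whose variates are easy to generate via the tangent function, serves as the envelope for a continuous version of the Poisson probabilities. Treat it as a sketch rather than the book’s exact code.

import math
import random

def poisson_cauchy_rejection(lam):
    # Rejection with a Cauchy (Lorentzian) envelope, intended for lam > 12;
    # math.lgamma(em + 1) gives log(em!) for the Poisson probabilities.
    sq = math.sqrt(2 * lam)
    log_lam = math.log(lam)
    g = lam * log_lam - math.lgamma(lam + 1)
    while True:
        # sample the Cauchy envelope until a non-negative value appears
        while True:
            y = math.tan(math.pi * random.random())
            em = sq * y + lam
            if em >= 0:
                break
        em = math.floor(em)
        # ratio of the Poisson probability to the envelope; the factor 0.9
        # keeps the envelope above the target
        t = 0.9 * (1 + y * y) * math.exp(em * log_lam - math.lgamma(em + 1) - g)
        if random.random() <= t:
            return int(em)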

GSL Library (C)

In the GSL library, one can use the function gsl_ran_poisson, which uses the direct method based on exponential inter-arrival times. The code, which can be viewed here, simply cites Knuth (presumably the second volume of his series).

NAG Library (C)

Although I didn’t see the code, it appears that the function nag_rand_poisson (g05tjc) in the NAG library also uses the direct method, based on the material in the second volume of the series by Knuth. But in a 1979 paper Atkinson says that the NAG library uses a method from the 1974 paper by Ahrens and Dieter.

MKL library (C)

In the MKL C library written by Intel, there seem to be three methods in use for generating Poisson variates.

The first method is called VSL_RNG_METHOD_POISSON_PTPE, which does the following for a Poisson distribution with parameter \(\Lambda\):

If Λ ≥ 27, random numbers are generated by PTPE method. Otherwise, a combination of inverse transformation and table lookup methods is used. The PTPE method is a variation of the acceptance/rejection method that uses linear (on the fraction close to the distribution mode) and exponential (at the distribution tails) functions as majorizing functions. To avoid time-consuming acceptance/rejection checks, areas with zero probability of rejection are introduced and a squeezing technique is applied.

The PTPE method referred to here is outlined in a 1981 technical report by Schmeiser and Kachitvichyanukul.

The second method is called VSL_RNG_METHOD_POISSON_POISNORM, which does the following:

If Λ < 1, the random numbers are generated by combination of inverse transformation and table lookup methods. Otherwise, they are produced through transformation of the normally distributed random numbers.

The third method is called VSL_RNG_METHOD_POISSONV_POISNORM, which does the following:

If Λ < 0.0625, the random numbers are generated by inverse transformation method. Otherwise, they are produced through transformation of normally distributed random numbers.

cuRAND (C)

Finally, there is the CUDA Random Number Generation library (cuRAND) developed by Nvidia for their (now ubiquitous) graphics processing units (GPUs). This C/C++ library has a function for generating Poisson variates. To see the C code, copies of it can be found in various GitHub repositories, such as this one. The cuRAND function curand_poisson uses the direct method for Poisson parameter values less than 64. For parameter values greater than 4000, it uses a normal approximation (rounded to the nearest integer).

For other values, the function curand_poisson uses a rejection method based on an approximation of the incomplete gamma function; see the function curand_poisson_gammainc. The book by Fishman is cited; see Section 8.16.

Boost library Random (C++)

The Boost library Random uses the PTRD algorithm proposed in the 1993 paper by Hörmann to generate Poisson variates; see Algorithm PTRD on page 42 of the paper.  In the same paper appears the PTRS method, which is used by Python (NumPy) (though implemented in C), as mentioned above.

Further reading

Books

For various Poisson simulation methods, see the stochastic simulation books by Devroye (Section X.3) and Fishman (Section 8.16).

The book by Gentle (Section 5.2.8) also briefly covers Poisson variables.

Of course, it’s a good idea to look at the citations that the different functions use.

Articles

Here is a list of the papers I mentioned in this post:

  • 1974, Ahrens and Dieter, Computer methods for sampling from gamma, beta, poisson and binomial distributions;
  • 1976, Fishman, Sampling from the Poisson distribution on a computer;
  • 1979, Atkinson, The computer generation of Poisson random variables;
  • 1979, Atkinson, Recent developments in the computer generation of Poisson random variables;
  • 1980, Ahrens and Dieter, Sampling from binomial and Poisson distributions: a method with bounded computation times;
  • 1980, Devroye, The Computer Generation of Poisson Random Variables;
  • 1981, Schmeiser and Kachitvichyanukul, Poisson Random Variate Generation;
  • 1982, Ahrens and Dieter, Computer generation of Poisson deviates from modified normal distributions;
  • 1983, Ripley, Computer Generation of Random Variables: A Tutorial;
  • 1993, Hörmann, The transformed rejection method for generating Poisson random variables.

Simulating Poisson random variables – Direct method

If you were to write from scratch a program that simulates a homogeneous Poisson point process, the trickiest part would be the random number of points, which requires simulating a Poisson random variable. In previous posts on simulating this point process, such as this one and this one, I’ve simply used the inbuilt functions for simulating (or generating) Poisson random variables (or variates).1

1 In the literature, researchers describe methods for generating random deviates or variates. But, in my informal way, I will often say simulate random variables or generate random variates, somewhat interchangeably.

But how would one create such a Poisson function using just a standard uniform random variate generator? In this post I present my own Poisson simulation code in MATLAB, Python, C and C#, which can be found here.

The method being used depends on the value of the Poisson parameter, denoted here by \(\lambda\), which is the mean (as well as the variance) of a random variable with a Poisson distribution. If this parameter value is small, then a direct simulation method can be used to generate Poisson random variates. In practice, a small Poisson parameter means a value below some threshold, typically between 10 and 30.

For large \(\lambda\) values, other methods are generally used, such as rejection or (highly accurate) approximation methods. In the book Non-uniform random variate generation, the author Luc Devroye groups the methods into five categories (Section X.3.2), which I briefly describe in the next post. The first of those categories covers the method that I mentioned above, which I present in this post with some Poisson sampling code in C and C#. (I will also present some code in MATLAB, but you would never use it instead of the inbuilt function poissrnd.)

In the next post, I’ll describe other types of Poisson simulation methods, and I’ll detail which simulation methods various programming libraries use.

Warning: My online webpage editor tends to mangle symbols like < and >, so it’s best not to copy my code straight from the website, unless you check and edit it.

Direct method

An elegant and natural method for simulating Poisson variates is to use a result based on the homogeneous Poisson stochastic process. The points in time at which a given homogeneous Poisson stochastic process increases form a Poisson point process on the real line.2

2 Confusingly, the term Poisson process is often used to refer to both the stochastic process and the point process, but there is a slight difference.

Using exponential random variables

Here’s the algorithm for sampling Poisson variables with exponential random variables, which I’ll explain.

Sample Poisson random variable \(N\) with parameter (mean) \(\lambda\) using exponential random variables
    1. Set count variable \(N=-1\) and initial sum variable \(S=0\); (the count starts at \(-1\) because the final exponential variable, which pushes the sum past one, should not be counted)
    2. While \(S<1\):
      1. Sample uniform random variable \(U\sim U(0,1)\);
      2. Calculate \(E= -\log(U)/\lambda \) ;
      3. Update count and sum variables by setting \(N\rightarrow N+1\) and \(S\rightarrow S+E\);
    3. Return N;

The points in time at which the Poisson stochastic process increases are called arrival times or occurrence times. In classic random models they represent the arrivals or occurrences of something, such as phone calls over time. The differences between consecutive times are called inter-arrival times or inter-occurrence times. The inter-arrival times of a homogeneous Poisson process form independent exponential random variables, a result known as the Interval Theorem.

Using this connection to the Poisson stochastic process, we can generate exponential variables \(E_1\), \(E_2, \dots \), and add them up. The number of exponential variables needed for the resulting sum to exceed one, minus one, gives a Poisson random variable. That is, if we define \(N\) to be the smallest \(n\) such that
$$ \sum_{k=1}^{n+1} E_k > 1, $$
then \(N\) is a random variable distributed according to a Poisson distribution.

Generating exponential variates is easily done by using the inverse method. For a uniform random variable \(U\) on the unit interval \((0,1)\), the transformation \(E= -\log(U)/\lambda \) gives an exponential random variable with mean \(1/\lambda\).

But we can skip generating exponential random variates.

Using uniform random variables

Here’s the algorithm for sampling Poisson variables with uniform random variables.

Sample Poisson random variable \(N\) with parameter (mean) \(\lambda\) using uniform random variables
    1. Set count variable \(N=-1\) and initial product variable \(P=1\); (again, the count starts at \(-1\) so that the final uniform variable, which pushes the product below \(e^{-\lambda}\), is not counted)
    2. While \(P>e^{-\lambda}\):
      1. Sample uniform random variable \(U\sim U(0,1)\);
      2. Update count and product variables by setting \(N\rightarrow N+1\) and \(P\rightarrow P\times U\);
    3. Return N;

To reduce computations, the direct method using exponential random variables is often reformulated in terms of products of uniform random variables. Taking logarithms shows the two are equivalent: the condition \(\sum_{k=1}^{n+1} -\log(U_k)/\lambda > 1\) is the same as \(-\log \prod_{k=1}^{n+1} U_k > \lambda\), which in turn is the same as \(\prod_{k=1}^{n+1} U_k < e^{-\lambda}\). This lets us work with products of uniform variables instead of sums of exponential random variables.

Then, by using standard uniform random variables \(U_1, U_2,\dots\), we define \(N\) to be the smallest \(n\) such that
$$ \prod_{k=1}^{n+1} U_k < e^{-\lambda}. $$

These two different formulations of the same method are captured by Lemma 3.2 and Lemma 3.3 in Chapter 10 of Devroye’s book.

Example in MATLAB

In MATLAB, we can implement this method with the first formulation in a function with a simple while-loop:

function N=funPoissonLoop(lambda)
T=0; %initialize sum of exponential variables as zero
n=-1;%initialize counting variable as negative one

while (T < 1)
E=-(1/lambda)*log(rand(1));%generate exponential random variable
T=T+E; %update sum of exponential variables
n=n+1; %update number of exponential variables
end
N=n;
end

But, as I said before, don’t use this code instead of the inbuilt function poissrnd.

If you want to be a bit more tricky, you could achieve the same result by using recursion:

function N=funPoissonRecursive(lambda)
T=0; %initialize sum of exponential variables as zero
n=-1; %initialize counting variable as negative one

%run (recursive) exponential step function
[~,N]=funStepExp(lambda,T,n);

function [T,N]=funStepExp(nu,S,m)
if (S < 1)
%run if sum of exponential variables is not high enough

%generate exponential random variable
E=(-log(rand(1)))/nu;
S=S+E; %update sum of exponential variables
m=m+1; %update number of exponential variables

%recursively call function again
[T,N]=funStepExp(nu,S,m);
else
T=S;
N=m;
end
end
end

Note how the code recursively calls the function funStepExp, which generates an exponential variable each time.

In the Code section below I describe my code in C and C#, using the second formulation.

Origins

Some people attribute the direct method, based on inter-arrival times, to (or, at least, cite) Donald Knuth, who details it in the second volume of his classic series of books, but I doubt that the great Knuth was the first to have this idea. For example, a quick search on Google Scholar found a paper by K. D. Tocher on computers and random sampling, where Tocher proposes the direct method in 1954, some years before Knuth started publishing his classic series.

The direct method for Poisson sampling relies upon the Interval Theorem. The Poisson point process expert Günter Last studied the origins of this fundamental result. He presented its history in a recent book authored by him and Matthew Penrose; see Chapter 7 and its corresponding historical footnotes in Section C of the appendix. (A free version of the book can be found here.) People connected to the result include Robert Ellis and William Feller, who respectively lived in the 19th and 20th centuries.

Other methods

The direct method perfectly generates Poisson random variables (or I should say Poisson random variates). But it can be too slow for large values of the Poisson parameter (that is, the mean) \(\lambda\). This has motivated researchers to develop other methods, which I will mention in the next post.

Code

I wrote some code that simulates Poisson random variables by employing the direct method based on exponential inter-arrival times. As always, all my code is online, with the code from this post being located here.

I have implemented the second formulation (using just uniform variables) in the C and C# languages. In the code, I have used a while-loop to implement the method. But I could have also used recursion, as I did in the MATLAB example above, which is the approach I took in Python (with NumPy).

For an empirical test, the code also calculates the mean and variance of a collection of Poisson variables. For a large enough number of variables, the sample mean and sample variance will closely agree with each other, as both converge to the Poisson parameter \(\lambda\).
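As a quick illustration of that check (a sketch, not the code from my repository), here’s a Python version of the empirical test, with the uniform-product method implemented iteratively:

import math
import random

def poisson_uniform_product(lam):
    # direct method via products of uniform variates
    n, prod, limit = -1, 1.0, math.exp(-lam)
    while prod > limit:
        prod *= random.random()
        n += 1
    return n

# empirical check: the sample mean and sample variance should both be close
# to lam for a large number of samples
lam, numb_sim = 7.0, 10**5
samples = [poisson_uniform_product(lam) for _ in range(numb_sim)]
mean = sum(samples) / numb_sim
var = sum((s - mean) ** 2 for s in samples) / numb_sim
print(f"sample mean = {mean:.3f}, sample variance = {var:.3f}")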

C

Warning: My C code uses rand(), the standard pseudo-random number function in C, which is known for failing certain tests of randomness. The function is adequate for regular simulation work, but it gives poor results for large numbers of simulations. For serious work, replace this function with a better pseudo-random number generator.

The code for generating a single Poisson variate is fairly straightforward in C. Here’s a sample of the code with just the Poisson function:

//Poisson function -- returns a single Poisson random variable
int funPoissonSingle(double lambda)
{
double exp_lambda = exp(-lambda); //constant for terminating loop
double randUni; //uniform variable
double prodUni; //product of uniform variables
int randPoisson; //Poisson variable

//initialize variables
randPoisson = -1;
prodUni = 1;
do
{
randUni = funUniformSingle(); //generate uniform variable
prodUni = prodUni * randUni; //update product
randPoisson++; //increase Poisson variable

} while (prodUni > exp_lambda);
return randPoisson;
}

For generating multiple variates, the code becomes more complicated, as one needs to use pointers, due to how C handles memory. Again, the function uses the pseudo-random number generator in C.

C#

The code for generating a single Poisson variate is also straightforward in C#. Here’s the function in C#:

//Poisson function -- returns a single Poisson random variable
public int funPoissonSingle (double lambda) {
double exp_lambda = Math.Exp (-lambda); //constant for terminating loop
double randUni; //uniform variable
double prodUni; //product of uniform variables
int randPoisson; //Poisson variable

//initialize variables
randPoisson = -1;
prodUni = 1;
do {
randUni = funUniformSingle (); //generate uniform variable
prodUni = prodUni * randUni; //update product
randPoisson++; // increase Poisson variable

} while (prodUni > exp_lambda);

return randPoisson;
}

Generalizing the code so it generates multiple variates requires only a small change compared to C, as C# is a much more modern language.

Fortran

After this original post, I later wrote a post about implementing the same Poisson algorithm in Fortran. My Fortran code is very similar to the code that I wrote in C and C#. You should be able to run it on this website or similar ones that can compile Fortran (95) code.

Further reading

For various Poisson simulation methods, see the stochastic simulation books by Devroye (Section X.3) or Fishman (Section 8.16). There’s a free online version of Devroye’s book here. The book by Gentle (Section 5.2.8) also briefly covers Poisson variables.

In this post on generating Poisson variates, John D. Cook briefly discusses the direct method for small \(\lambda\) values and a rejection method from a 1979 paper by Atkinson, which I will mention in the next post. He presents his C# code in this post.

Simulating Poisson point processes faster

As an experiment, I tried to write code for simulating many realizations of a homogeneous Poisson point process in a very fast fashion. My idea was to simulate all the realizations in two short steps.

In reality, the findings of this experiment and the contents of this post have little practical value, as computers are so fast at generating Poisson point processes. Still, it was an interesting task, which taught me a couple of things. And I did produce faster code.

MATLAB

I first tried this experiment in MATLAB.

Vectorization

In languages like MATLAB, the trick for speed is to use vectorization, which means applying a single operation to an entire vector or matrix (or array) without using an explicit for-loop. Over the years, the people behind MATLAB have advised using vectorization instead of for-loops, as for-loops considerably slowed down MATLAB code. (But, as time goes by, it seems using for-loops in MATLAB doesn’t slow the code down as much as it used to.)

Simulating Poisson point processes is particularly amenable to vectorization, due to the independent nature of the point process. One can simulate the number of points in each realization for all realizations in one step. Then all the points across all realizations can also be positioned in one step. In the two-dimensional case, this results in two one-dimensional arrays (or vectors, in MATLAB parlance) for the \(x\) and \(y\) coordinates. (Of course, in my code, I could have used just one two-dimensional array/vector for the coordinates of the points, but I didn’t.)

After generating the points, the coordinates of the points need to be grouped into the different realizations and stored in appropriate data structures.

Data structures

The problem with storing point processes is that usually each realization has a different number of points, so more sophisticated data structures than regular arrays are needed. For MATLAB, each point process realization can be stored in a data object called a cell array. In a cell array, each element (that is, each cell) can be a different size, making cell arrays ideal for storing objects like point processes with randomly varying sizes.

In the case of two-dimensional point processes, two cell arrays can be used to store the \(x\) and \(y\) coordinates of all the point process realizations. After randomly positioning all the points, they can be grouped into a cell array, where each cell array element represents a realization of the Poisson point process, by using the inbuilt MATLAB function mat2cell, which converts a matrix (or array) into a cell array.

Alternatively, we could use another MATLAB data object called a structure array. In MATLAB, structures have fields, which, for a point process, can be for example the locations of the points. Given cell arrays of equal size, we can convert them into a single structure array by using the inbuilt MATLAB function struct.

Python

After successfully simulating Poisson point processes in MATLAB, I tried it in Python with the NumPy library.

Vectorization

I basically replicated what I did in MATLAB using Python by positioning all the points in a single step. This gives two one-dimensional NumPy arrays for the \(x\) and \(y\) coordinates of all the point process realizations. (Again, I could have instead stored the coordinates as a single two-dimensional array, but I didn’t.)

Perhaps surprisingly, the vectorization step doesn’t speed things up much in Python with the NumPy library. This may be due to the fact that the underlying code is actually written in the C language.  That motivated me to see what methods have been implemented for simulating Poisson variables, which is the topic of the next couple posts.

Data structures

In Python, the data structure known as a list is the natural choice. Similarly to cell arrays in MATLAB, two lists can be used for the \(x\) and \(y\) coordinates of all the points. Instead of MATLAB’s function mat2cell, I used the NumPy function numpy.split to create two lists from the two NumPy arrays containing the point coordinates.
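Putting the vectorization and data-structure steps together, here’s a minimal Python (NumPy) sketch of the whole approach. The window, intensity and number of realizations are arbitrary illustrative values, and the variable names are my own.

import numpy as np

rng = np.random.default_rng()

# illustrative parameters: unit square window, intensity lambda0, many realizations
numbSim = 10**4
lambda0 = 20
xMin, xMax, yMin, yMax = 0, 1, 0, 1
areaTotal = (xMax - xMin) * (yMax - yMin)

# Step 1: the number of points in every realization, generated in one call
numbPoints = rng.poisson(lambda0 * areaTotal, numbSim)

# Step 2: the locations of all points across all realizations, in one call each
numbPointsTotal = numbPoints.sum()
xxAll = xMin + (xMax - xMin) * rng.random(numbPointsTotal)
yyAll = yMin + (yMax - yMin) * rng.random(numbPointsTotal)

# group the coordinates back into realizations via cumulative counts
splitIndices = np.cumsum(numbPoints)[:-1]
xxList = np.split(xxAll, splitIndices)
yyList = np.split(yyAll, splitIndices)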

Python does not seem to have an immediate equivalent to structure arrays in MATLAB. But in Python one can define a new data structure or class with fields, like a structure. Then one can create a list of those objects, whose fields are accessed through what Python calls attribute references.
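For example, a small class with \(x\) and \(y\) fields does the job. Here’s a hypothetical Python sketch of one way to set it up (not the code from my repository):

from dataclasses import dataclass
import numpy as np

@dataclass
class PointPattern:
    # one realization of a two-dimensional point process
    x: np.ndarray
    y: np.ndarray

# hypothetical example: wrap two realizations of differing sizes in a list
rng = np.random.default_rng()
sizes = [3, 5]
patterns = [PointPattern(rng.random(n), rng.random(n)) for n in sizes]
print(patterns[0].x)  # the x-coordinates (an attribute reference) of realization 0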

Code

The code in MATLAB and Python can be found here. For a comparison, I also generated the same number of point process realizations (without using vectorization) by using a trusty for-loop. The code compares the times taken by the two different approaches, labelled internally as Method A and Method B. There is some time difference in the MATLAB code, but not much of a difference in the Python case.

I have commented out the sections that create data structures (with fields or attribute references) for storing all the point process realizations, but those sections should also work when uncommented.