Checking Poisson point process simulations

In previous posts I described how to simulate homogeneous Poisson point processes on a rectangle, disk and triangle. Then I covered how to randomly thin a point process in a spatially dependent manner. Building off these posts, I wrote in my last post how to simulate an inhomogeneous or nonhomogeneous Poisson point process. Now I’ll describe how to verify that the simulation code correctly simulates the desired Poisson point process.

Although this post is focused on the Poisson point process, I stress that parts of the material hold for all point processes. Also, not surprisingly, some of the material and the code overlaps with that presented in the post on the inhomogeneous point process.

Basics

Any Poisson point process is defined with a measure called the intensity measure or mean measure, which I’ll denote by \(\Lambda\). For practical purposes, I assume that the intensity measure \(\Lambda\) has a derivative \(\lambda(x,y)\), where \(x\) and \(y\) denote the Cartesian coordinates. The function \(\lambda(x,y)\) is often called the intensity function or just intensity. I assume this function is bounded, so \(\lambda(x,y)<\infty\) for all points in a simulation window \(W\). Finally, I assume that the simulation window \(W\) is a rectangle.

Several times before I have mentioned that simulating a Poisson point process requires simulating two random components: the number of points and the locations of points. Working backwards, to check a Poisson simulation, we must run the Poisson simulation a large number of times (say \(10^3\) or \(10^4\)), and collect the statistics on these two properties. We’ll start by examining the easiest of the two random components.

Number of points

For any Poisson point process, the number of points is a Poisson random variable with a parameter (that is, a mean) \(\Lambda(W)\). Under our previous assumptions, this is given by the surface integral

$$\Lambda(W)=\int_W \lambda(x,y)dxdy.$$

Presumably we can evaluate such an integral analytically or numerically in order to simulate the Poisson point process. To check that we correctly simulate the random number of points, we just need to simulate the point process a large number of times and compare the statistics to those given by the analytic expressions.

Moments

The intensity measure of a point process is defined as the measure that gives the average or expected number of points in any given region. As the number of simulations increases, the (sample) average number of points will converge to the intensity measure \(\Lambda(W)\). I should stress that this is a test for the intensity measure, a type of first moment, and it works for the intensity measure of any point process, not just the Poisson one.

For Poisson point processes, there is another moment test that can be done. It can be shown mathematically that the variance of the number of points will also converge to the intensity measure \(\Lambda(W)\), giving a second empirical test based on second moments. There is no point process theory here, as this moment result is simply due to the number of points being distributed according to a Poisson distribution. The second moment is very good for checking Poissonness, forming the basis for statistical tests. If this and the first moment test hold, then there’s a very strong chance the number of points is a Poisson variable.
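
For completeness, here is the standard calculation behind both moment tests: if \(N\) is a Poisson random variable with mean \(\Lambda(W)\), then

$$ \mathbb{E}[N]=\Lambda(W), \qquad \mathbb{E}[N(N-1)]=[\Lambda(W)]^2, \qquad \mathrm{Var}[N]=\mathbb{E}[N(N-1)]+\mathbb{E}[N]-(\mathbb{E}[N])^2=\Lambda(W). $$

So the sample mean and the sample variance of the collected point counts should both settle down to the same value \(\Lambda(W)\).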

Empirical distribution

Beyond the first two moments, an even more thorough test is to estimate an empirical distribution for the number of points. That is, we perform a histogram on the number of points, and then we normalize it, so the total mass sums to one, producing an empirical probability distribution for the number of points.

The results should closely match the probability mass function of a Poisson distribution with parameter \(\Lambda(W)\). This is simply

$$ P(N=n)= \frac{[\Lambda(W)]^n}{n!} e^{-\Lambda(W)}, $$

where \(N\) is the random number of points in the window \(W\). If the empirical distribution is close to the results given by the above expression, then we can confidently say that the number of points is a Poisson random variable with the right parameter or mean.

Locations of points

To check the positioning of points, we can empirically estimate the intensity function \(\lambda(x,y)\). A simple way to do this is to perform a two-dimensional histogram on the point locations. This is very similar to the one-dimensional histogram, which I suggested for testing the number of points. It just involves counting the number of points that fall into two-dimensional non-overlapping subsets called bins.

To estimate the intensity function, each bin count needs to be rescaled by the number of simulations and the area of the bin, giving an (average) density value for each bin. This empirical estimate of the intensity function should resemble the true intensity function \(\lambda(x,y)\). For a visual comparison, we can use a surface plot to illustrate the two sets of results.

This procedure will work for estimating the intensity function of any point process, not just a Poisson one.
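
As a minimal sketch of this normalization (my own illustration, not the code used for the results below), here a homogeneous Poisson point process on the unit square stands in for the simulated data, and the bin counts are divided by the number of simulations times the bin area to give an intensity estimate:

import numpy as np

numbSim = 10**3   # number of simulations
lambdaTrue = 100  # true intensity of a homogeneous Poisson process on the unit square
numbBins = 10     # number of bins in each direction

xxAll = []
yyAll = []
for _ in range(numbSim):
    numbPoints = np.random.poisson(lambdaTrue)        # Poisson number of points
    xxAll.extend(np.random.uniform(size=numbPoints))  # x coordinates on [0,1]
    yyAll.extend(np.random.uniform(size=numbPoints))  # y coordinates on [0,1]

counts, xEdges, yEdges = np.histogram2d(xxAll, yyAll, bins=numbBins)
binArea = (xEdges[1] - xEdges[0]) * (yEdges[1] - yEdges[0])  # all bins have equal area
lambdaEstimate = counts / (numbSim * binArea)  # empirical intensity estimate per bin
print(lambdaEstimate.mean())  # should be close to lambdaTrue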

Advanced tests

In spatial statistics there are more advanced statistical tests for testing how Poisson a point pattern is. But these tests are arguably too complicated for checking a simple point process that is rather easy to simulate. Furthermore, researchers usually apply these tests to a small number of point patterns. In this setting, it is not possible to accurately obtain empirical distributions without further assumptions. But with simulations, we can generate many simulations and obtain good empirical distributions. In short, I would not use such tests for just checking that I have properly coded a Poisson simulation.

Results

I produced the results with ten thousand simulations, which gave good statistics and took only a few seconds to complete on a standard desktop computer. Clearly increasing the number of simulations increases the accuracy of the statistics, but it also increases the computation time.

For the results, I used the intensity function

$$\lambda(x,y)=\lambda_1(x,y)+\lambda_2(x,y),$$

where

$$\lambda_1(x,y)=80e^{-((x+0.5)^2+(y+0.5)^2)/s^2},$$

$$\lambda_2(x,y)=100e^{-((x-0.5)^2+(y-0.5)^2)/s^2},$$

and \(s>0\) is a scale parameter. We can see that this function has two maxima or peaks at \((-0.5,-0.5)\) and \((0.5,0.5)\).

MATLAB

Python

Further reading

I have not covered much new theoretical stuff in this post, so looking at the references in previous posts, such as this one, should help.

For two-dimensional histograms, I recommend going to the respective MATLAB and Python function websites. Here’s an example of a two-dimensional histogram implemented in Python.

Probably the most difficult part for me was performing the plotting in Python. I recommend these links:

https://plot.ly/python/3d-surface-plots/

https://chrisalbon.com/python/basics/set_the_color_of_a_matplotlib/

https://matplotlib.org/examples/color/colormaps_reference.html

Code

All code from my posts, as always, can be found on my GitHub repository. The code for this post is located here.

Estimating the statistical moments is standard. Performing the histograms is also routine, but when normalizing, you have to choose the option that returns the empirical estimate of the probability density function (pdf).

Fortunately, scientific programming languages usually have functions for performing two-dimensional histograms. What is a bit tricky is how to normalize or rescale the bin counts. The histogram functions can, for example, divide by the number of simulations, the area of each bin, or both. In the end, I chose the pdf option in both MATLAB and Python to give an empirical estimate of the probability density function, and then multiplied it by the average number of points, which was calculated in the previous check. (I could have done this in a single step in MATLAB, but not in Python, so I did it in a couple of steps in both languages so the code matches more closely.)

MATLAB

I used the surf function to plot the intensity function and its estimate; see below for details on the histograms.

close all;

%Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle

numbSim=10^4; %number of simulations

s=0.5; %scale parameter

%Point process parameters
fun_lambda=@(x,y)(80*exp(-((x+0.5).^2+(y+0.5).^2)/s^2)+100*exp(-((x-0.5).^2+(y-0.5).^2)/s^2));%intensity function

%%%START -- find maximum lambda -- START %%%
%For an intensity function lambda, given by function fun_lambda,
%finds the maximum of lambda in a rectangular region given by
%[xMin,xMax,yMin,yMax].
funNeg=@(x)(-fun_lambda(x(1),x(2))); %negative of lambda
%initial value(ie centre)
xy0=[(xMin+xMax)/2,(yMin+yMax)/2];%initial value(ie centre)
%Set up optimization step
options=optimoptions('fmincon','Display','off');
%Find largest lambda value
[~,lambdaNegMin]=fmincon(funNeg,xy0,[],[],[],[],...
[xMin,yMin],[xMax,yMax],'',options);
lambdaMax=-lambdaNegMin;
%%%END -- find maximum lambda -- END%%%

%define probability of retaining a point (ie one minus the thinning probability)
fun_p=@(x,y)(fun_lambda(x,y)/lambdaMax);

%for collecting statistics -- set numbSim=1 for one simulation
numbPointsRetained=zeros(numbSim,1); %vector to record number of points
for ii=1:numbSim
%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambdaMax);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points

%calculate spatially-dependent thinning probabilities
p=fun_p(xx,yy);

%Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=rand(numbPoints,1)<p; %points to be retained

%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);

%collect number of points simulated
numbPointsRetained(ii)=length(xxRetained);
end

%Plotting
plot(xxRetained,yyRetained,'bo'); %plot retained points
xlabel('x');ylabel('y');

%run empirical test on number of points generated
if numbSim>=10
%total mean measure (average number of points)
LambdaNumerical=integral2(fun_lambda,xMin,xMax,yMin,yMax)
%Test: as numbSim increases, numbPointsMean converges to LambdaNumerical
numbPointsMean=mean(numbPointsRetained)
%Test: as numbSim increases, numbPointsVar converges to LambdaNumerical
numbPointsVar=var(numbPointsRetained)

end

For the histogram section, I used the histcounts and histcounts2 functions respectively to estimate the distribution of the number of points and the intensity function. I used the pdf option.

Number of points

histcounts(numbPointsRetained,binEdges,'Normalization','pdf');

Locations of points

histcounts2(xxVectorAll,yyVectorAll,numbBins,'Normalization','pdf');
Python

I used the Matplotlib library to plot the intensity function and its estimate; see below for details on the histograms.

import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #for plotting
from matplotlib import cm #for heatmap plotting
from mpl_toolkits import mplot3d #for 3-D plots
from scipy.optimize import minimize #for optimizing
from scipy import integrate #for integrating
from scipy.stats import poisson #for the Poisson probability mass function

plt.close("all"); #close all plots

#Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta;

numbSim=10**4; #number of simulations
numbBins=30; #number of bins for histogram

#Point process parameters
s=0.5; #scale parameter

def fun_lambda(x,y):
#intensity function
lambdaValue=80*np.exp(-((x+0.5)**2+(y+0.5)**2)/s**2)+100*np.exp(-((x-0.5)**2+(y-0.5)**2)/s**2);
return lambdaValue;

###START -- find maximum lambda -- START ###
#For an intensity function lambda, given by function fun_lambda,
#finds the maximum of lambda in a rectangular region given by
#[xMin,xMax,yMin,yMax].
def fun_Neg(x):
return -fun_lambda(x[0],x[1]); #negative of lambda

xy0=[(xMin+xMax)/2,(yMin+yMax)/2];#initial value(ie centre)
#Find largest lambda value
resultsOpt=minimize(fun_Neg,xy0,bounds=((xMin, xMax), (yMin, yMax)));
lambdaNegMin=resultsOpt.fun; #retrieve minimum value found by minimize
lambdaMax=-lambdaNegMin;
###END -- find maximum lambda -- END ###

#define thinning probability function
def fun_p(x,y):
return fun_lambda(x,y)/lambdaMax;

#for collecting statistics -- set numbSim=1 for one simulation
numbPointsRetained=np.zeros(numbSim); #vector to record number of points
xxAll=[]; yyAll=[];

### START -- Simulation section -- START ###
for ii in range(numbSim):
#Simulate a Poisson point process
numbPoints = np.random.poisson(lambdaMax*areaTotal);#Poisson number of points
xx = xDelta*np.random.uniform(0,1,numbPoints)+xMin;#x coordinates of Poisson points
yy = yDelta*np.random.uniform(0,1,numbPoints)+yMin;#y coordinates of Poisson points

#calculate spatially-dependent thinning probabilities
p=fun_p(xx,yy);

#Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=np.random.uniform(0,1,numbPoints)<p; #points to be retained

#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
numbPointsRetained[ii]=xxRetained.size;
xxAll.extend(xxRetained); yyAll.extend(yyRetained);
### END -- Simulation section -- END ###

#Plotting a simulation
fig1 = plt.figure();
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none');
plt.xlabel("x"); plt.ylabel("y");
plt.title('A single realization of a Poisson point process');
plt.show();

#run empirical test on number of points generated
###START -- Checking number of points -- START###
#total mean measure (average number of points)
LambdaNumerical=integrate.dblquad(fun_lambda,xMin,xMax,lambda x: yMin,lambda y: yMax)[0];
#Test: as numbSim increases, numbPointsMean converges to LambdaNumerical
numbPointsMean=np.mean(numbPointsRetained);
#Test: as numbSim increases, numbPointsVar converges to LambdaNumerical
numbPointsVar=np.var(numbPointsRetained);
binEdges=np.arange(numbPointsRetained.min(),(numbPointsRetained.max()+2))-0.5; #bin edges centred on the integers
pdfEmp, binEdges=np.histogram(numbPointsRetained, bins=binEdges,density=True);

nValues=np.arange(numbPointsRetained.min(),numbPointsRetained.max()+1);
#analytic solution of probability density
pdfExact=(poisson.pmf(nValues,LambdaNumerical));

#Plotting
fig2 = plt.figure();
plt.scatter(nValues,pdfExact, color='b', marker='s',facecolor='none',label='Exact');
plt.scatter(nValues,pdfEmp, color='r', marker='+',label='Empirical');
plt.xlabel("n"); plt.ylabel("P(N=n)");
plt.title('Distribution of the number of points');
plt.legend();
plt.show();
###END -- Checking number of points -- END###

###START -- Checking locations -- START###
#2-D Histogram section
p_Estimate, xxEdges, yyEdges = np.histogram2d(xxAll, yyAll,bins=numbBins,density=True);
lambda_Estimate=p_Estimate.transpose()*numbPointsMean; #rescale density to an intensity (transpose to match meshgrid orientation)

xxValues=(xxEdges[1:]+xxEdges[0:xxEdges.size-1])/2;
yyValues=(yyEdges[1:]+yyEdges[0:yyEdges.size-1])/2;
X, Y = np.meshgrid(xxValues,yyValues) #create x/y matrices for plotting

#analytic solution of probability density
lambda_Exact=fun_lambda(X,Y);

#Plot empirical estimate
fig3 = plt.figure();
plt.rc('text', usetex=True);
plt.rc('font', family='serif');
ax=plt.subplot(211,projection='3d');
surf = ax.plot_surface(X, Y,lambda_Estimate,cmap=plt.cm.plasma);
plt.xlabel("x"); plt.ylabel("y");
plt.title(r'Estimate of $\lambda(x,y)$');
plt.locator_params(axis='x', nbins=5);
plt.locator_params(axis='y', nbins=5);
plt.locator_params(axis='z', nbins=3);
#Plot exact expression
ax=plt.subplot(212,projection='3d');
surf = ax.plot_surface(X, Y,lambda_Exact,cmap=plt.cm.plasma);
plt.xlabel("x"); plt.ylabel("y");
plt.title(r'True $\lambda(x,y)$');
plt.locator_params(axis='x', nbins=5);
plt.locator_params(axis='y', nbins=5);
plt.locator_params(axis='z', nbins=3);
###END -- Checking locations -- END###

For the histogram section, I used the histogram and histogram2d functions respectively to estimate the distribution of the number of points and the intensity function. I used the pdf option. (The SciPy website recommends not using the normed option.)

Number of points

np.histogram(numbPointsRetained, bins=binEdges,density=True);

Locations of points

np.histogram2d(xxAll, yyAll,bins=numbBins,density=True);

Some remarks regarding “On the Laplace Transform of the Aggregate Discounted Claims with Markovian arrivals”

Google Scholar has requested that I make freely available a copy of this published comment, which my former PhD supervisor and I wrote a few years ago:

  • Keeler and Taylor, Some remarks regarding “On the Laplace Transform of the Aggregate Discounted Claims with Markovian arrivals”

OK, Google Scholar, here’s the manuscript that we submitted, which is basically the same as the published version.

The comment and the original paper cover an insurance model (for aggregate claims) that uses a Markov arrival process. The analysis of such stochastic processes relies on matrix theory. But there was a small error in the original paper, where matrix commutativity had been assumed, which clearly does not hold in general. Despite this error, the incorrect solution gave surprisingly accurate answers, so we investigated why that was.

Nothing groundbreaking here by us. We were just curious.

Simulating an inhomogeneous Poisson point process

In previous posts I described how to simulate homogeneous Poisson point processes on a rectangle, disk and triangle. But here I will simulate an inhomogeneous or nonhomogeneous Poisson point process. (Both of these terms are used, where the latter is probably more popular, but I prefer the former.) For such a point process, the points are not uniformly located on the underlying mathematical space on which the Poisson process is defined. This means that certain regions will, on average, tend to have more (or less) points than other regions of the underlying space.

Basics

Any Poisson point process is defined with a non-negative measure called the intensity or mean measure. I make the standard assumption that the intensity measure \(\Lambda\) has a derivative \(\lambda(x,y)\). (I usually write a single \(x\) to denote a point on the plane, that is \(x\in \mathbb{R}^2\), but in this post I will write the \(x\) and \(y\) coordinates separately.) The function \(\lambda(x,y)\) is often called the intensity function or just intensity, which I further assume is bounded, so \(\lambda(x,y)<\infty\) for all points in a simulation window \(W\). Finally, I assume that the simulation window \(W\) is a rectangle, but later I describe how to lift that assumption.

Number of points

To simulate a point process, the number of points and the point locations in the simulation window \(W\) are needed. For any Poisson point process, the number of points is a Poisson random variable with a parameter (that is, a mean) \(\Lambda(W)\), which under our previous assumptions is given by the integral

$$\Lambda(W)=\int_W \lambda(x,y)dxdy. $$

Assuming we can evaluate such an integral analytically or numerically, then the number of points is clearly not difficult to simulate.

Locations of points

The difficulty lies in randomly positioning the points. But a defining property of the Poisson point process is its independence, which allows us to treat each point completely independently. Positioning each point then comes down to suitably simulating two (or more) random variables for Poisson point processes in two (or higher) dimensions. Similarly, the standard methods used for simulating continuous random variables can be applied to simulating random point locations of a Poisson point process.

In theory, you can rescale the intensity function with the total measure of the simulation window, giving

$$f(x,y):=\frac{\lambda(x,y)}{\Lambda(W)}. $$

We can then interpret this rescaled intensity function \(f(x,y)\) as the joint probability density of two random variables \(X\) and \(Y\), because it integrates to one,

$$\int_W f(x,y)dxdy=1.$$

Clearly the method for simulating an inhomogeneous Poisson point process depends on the nature of the intensity function. For the inhomogeneous case, the random variables \(X\) and \(Y\) are, in general, not independent.

Transformation

To simulate an inhomogeneous Poisson point process, one method is to first simulate a homogeneous one, and then suitably transform the points according to a deterministic function. For simple random variables, this transformation method is quick and easy to implement, provided we can invert the probability distribution. For example, a uniform random variable \(U\) defined on the interval \((0,1)\) can be used to give an exponential random variable by applying the transformation \(h(u)= -(1/\lambda)\log(u)\), where \(\lambda>0\), meaning \(h(U)\) is an exponential random variable with parameter \(\lambda>0\) (or mean \(1/\lambda\)).
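
As a quick numerical check of that standard example (a sketch of my own, not part of the post's code, with an arbitrarily chosen rate value):

# Applying h(u) = -(1/rate)*log(u) to uniform samples gives exponential samples
# with mean 1/rate.
import numpy as np

rate = 2.0  # an assumed rate parameter, for illustration only
U = np.random.uniform(size=10**5)
X = -np.log(U) / rate
print(X.mean())  # should be close to 1/rate = 0.5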

Similarly for Poisson point processes, the transformation approach is fairly straightforward in a one-dimensional setting, but generally doesn't work easily in two (or higher) dimensions. The reason is that often we cannot simulate the random variables \(X\) and \(Y\) independently, which means, in practice, we need to simulate one random variable first, then the other.

It is a bit easier if we can re-write the rescaled intensity function or joint probability density \(f(x,y)\) as a product of single-variable functions \(f_X(x)\) and \(f_Y(y)\), meaning the random variables \(X\) and \(Y\) are independent. We can then simulate independently the random variables \(X\) and \(Y\), corresponding to the \(x\) and \(y\) coordinates of the points. But this would still require integrating and inverting the functions.
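
For instance, here is a minimal sketch (my own, assuming a separable Gaussian-bump intensity rather than the intensity used elsewhere in this post) where the factorization makes this approach workable, because each marginal becomes a truncated normal distribution that standard routines can sample directly:

# Separable intensity lambda(x,y) = 100*exp(-x^2/s^2)*exp(-y^2/s^2) on [-1,1]x[-1,1].
# Each marginal is a normal density with standard deviation s/sqrt(2), truncated to
# the window, so X and Y can be simulated independently.
import numpy as np
from scipy import integrate
from scipy.stats import truncnorm

s = 0.5
xMin, xMax, yMin, yMax = -1, 1, -1, 1

def fun_lambda(x, y):
    return 100 * np.exp(-(x**2 + y**2) / s**2)  # separable: exp(-x^2/s^2)*exp(-y^2/s^2)

# total measure Lambda(W); fun_lambda is symmetric, so the dblquad argument order is fine
LambdaW = integrate.dblquad(fun_lambda, xMin, xMax, lambda x: yMin, lambda x: yMax)[0]

numbPoints = np.random.poisson(LambdaW)  # Poisson number of points

sigma = s / np.sqrt(2)             # std of the (untruncated) Gaussian marginal
a, b = xMin / sigma, xMax / sigma  # truncation limits in standardized units
xx = truncnorm.rvs(a, b, loc=0, scale=sigma, size=numbPoints)
yy = truncnorm.rvs(a, b, loc=0, scale=sigma, size=numbPoints)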

Markov chain Monte Carlo

A now standard way to simulate jointly distributed random variables is to use Markov chain Monte Carlo (MCMC), which we could also use to simulate the \(X\) and \(Y\) random variables. Applying MCMC methods amounts to applying random point process operations repeatedly to all the points. But this approach is a bit too tricky and involved for our purposes. Instead I'll use a general yet simpler method based on thinning.

Thinning

The thinning method is arguably the simplest and most general way to simulate an inhomogeneous Poisson point process. If you're unfamiliar with thinning, I recommend my previous post on thinning and the material I cite.

This simulation method is simply a type of acceptance-rejection method for simulating random variables. More specifically, it is the acceptance-rejection or rejection method, attributed to the great John von Neumann, for simulating a continuous random variable, say \(X\), with some known probability density \(f(x)\). The method accepts/retains or rejects/thins the outcome of each random variable/point depending on the outcome of a uniform random variable associated with each random variable/point.
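
To make the connection concrete, here is a minimal sketch (my own illustration, with an assumed example density) of the acceptance-rejection method for a single continuous random variable:

# Sample from f(x) = 12*x^2*(1-x) on [0,1] (a Beta(3,2) density, chosen just for
# illustration) using uniform proposals and the bound fMax.
import numpy as np

def f(x):
    return 12 * x**2 * (1 - x)

fMax = 16 / 9  # maximum of f on [0,1], attained at x = 2/3

samples = []
while len(samples) < 10**4:
    x = np.random.uniform()  # propose a candidate point uniformly
    u = np.random.uniform()  # uniform "coin flip" for accepting or rejecting
    if u < f(x) / fMax:      # accept (retain) with probability f(x)/fMax
        samples.append(x)

print(np.mean(samples))      # should be close to the Beta(3,2) mean, 3/5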

The thinning or acceptance-rejection method is also appealing because it is an example of a perfect simulation method, which means the distribution of the simulated random variables or points will not be an approximation. This can be contrasted with typical MCMC methods, which, in theory, reach the desired distribution of the random variables in infinite time, which is clearly not possible in practice.

Simulating the homogeneous Poisson point process

First simulate a homogeneous Poisson point process with intensity value \(\lambda^*\), which is an upper bound of the intensity function \(\lambda(x,y)\). The simulation step is the easy part, but what value is \(\lambda^*\)?

I will use the maximum value that the intensity function \(\lambda(x,y)\) takes, which I denote by

$$ \lambda_{\max}:=\max_{(x,y)\in W}\lambda(x,y),$$

so I set \(\lambda^*=\lambda_{\max}\). Of course with \(\lambda^*\) being an upper bound, you can use any larger \(\lambda\)-value, so \(\lambda^*\geq \lambda_{\max}\), but that just means more points will need to be thinned.

Scientific programming languages have implemented algorithms that find or estimate minima of mathematical functions, meaning such an algorithm just needs to find the \((x,y)\) point that gives the minimum value of \(-\lambda(x,y)\), which corresponds to the maximum value of \(\lambda(x,y)\). What is very important is that the minimization procedure can handle constraints on the \(x\) and \(y\) values, which in our case of a rectangular simulation window \(W\) are sometimes called box constraints.

Thinning the Poisson point process

All we need to do now is to thin the homogeneous Poisson point process, where the probability of retaining a point at \((x,y)\) is

$$1-p(x,y)=\frac{\lambda(x,y)}{\lambda^*},$$

with \(p(x,y)\) being the corresponding thinning probability. This will randomly remove points so the remaining points form an inhomogeneous Poisson point process with intensity function
$$ (1-p(x,y))\lambda^* =\lambda(x,y).$$
As a result, we can see that provided \(\lambda^*\geq \lambda_{\max}>0\), this procedure will give the right intensity function \(\lambda(x,y)\). I'll skip the details on the point process still being Poisson after thinning, as I have already covered this in the thinning post.

Empirical check

You can run an empirical check by simulating the point process a large number (say \(10^3\) or \(10^4\)) of times, and collect statistics on the number of points. As the number of simulations increases, the average number of points should converge to the intensity measure \(\Lambda(W)\), which is given by (perhaps numerically) evaluating the integral

$$\Lambda(W)=\int_W \lambda(x,y)dxdy.$$

This is a test for the intensity measure, a type of first moment, which will work for the intensity measure of any point process. But for Poisson point processes, the variance of the number of points will also converge to the intensity measure \(\Lambda(W)\), giving a second empirical test based on second moments.

An even more thorough test would be estimating an empirical distribution (that is, performing and normalizing a histogram) on the number of points. These checks will validate the number of points, but not the positioning of the points. In my next post I’ll cover how to perform these tests.

Results

I simulated an inhomogeneous Poisson point process with intensity function \(\lambda(x,y)=100\exp(-(x^2+y^2)/s^2)\), where \(s=0.5\). The results look similar to those in the thinning post, where the thinned points (that is, red circles) are generated from the same Poisson point process as the one that I have presented here.

MATLAB

Python

Method extensions

We can extend the thinning method for simulating inhomogeneous Poisson point processes in a couple of different ways.

Using an inhomogeneous Poisson point process

The thinning method does not need to be applied to a homogeneous Poisson point process with intensity \(\lambda^*\). In theory, we could instead simulate a suitable inhomogeneous Poisson point process with intensity function \(\lambda^*(x,y)\) satisfying the condition

$$ \lambda^*(x,y)\geq \lambda(x,y), \quad \forall (x,y)\in W .$$

Then this Poisson point process is thinned. But we would still need to simulate the underlying Poisson point process, which would often be just as difficult to simulate.

Partitioning the simulation window

Perhaps the intensity of the Poisson point process only takes two values, \(\lambda_1\) and \(\lambda_2\), and the simulation window \(W\) can be nicely divided or partitioned into two disjoint sets \(B_1\) and \(B_2\) (that is, \(B_1\cap B_2=\emptyset\) and \(B_1\cup B_2=W\)), corresponding to the subregions of the two different intensity values. The Poisson independence property allows us to simulate two independent Poisson point processes on the two subregions, as sketched below.
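
Here is a minimal sketch (my own illustration, with assumed intensity values) of that two-intensity case:

# W = [-1,1]x[-1,1] is split into a left half B1 with intensity lambda1 and a right
# half B2 with intensity lambda2; each half is simulated as an independent homogeneous
# Poisson point process and the points are then pooled.
import numpy as np

lambda1, lambda2 = 50, 150  # assumed intensity values, for illustration only
areaHalf = 1 * 2            # each half is 1 wide and 2 high

n1 = np.random.poisson(lambda1 * areaHalf)
n2 = np.random.poisson(lambda2 * areaHalf)

xx1 = np.random.uniform(-1, 0, n1); yy1 = np.random.uniform(-1, 1, n1)  # points in B1
xx2 = np.random.uniform(0, 1, n2);  yy2 = np.random.uniform(-1, 1, n2)  # points in B2

xx = np.concatenate((xx1, xx2)); yy = np.concatenate((yy1, yy2))        # pooled points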

This approach only works for a piecewise constant intensity function. But if the intensity function \(\lambda(x)\) varies wildly, the simulation window can be partitioned into subregions \(B_1,\dots,B_m\) for different ranges of the intensity function \(\lambda(x)\). This allows us to simulate independent homogeneous Poisson point processes with different intensities \(\lambda^*_1,\dots, \lambda^*_m\), where for each subregion \(B_i\) we set

$$ \lambda^*_i:=\max_{(x,y)\in B_i}\lambda(x,y).$$

The resulting Poisson point processes are then suitably thinned, resulting in a more efficient simulation method. (Although I imagine the gain would often be small.)

Non-rectangular simulation windows

If you want to simulate on a non-rectangular region that is not a disk or triangle, then the easiest way is to simulate a Poisson point process on a rectangle \(R\) that completely covers the simulation window, so \(W \subset R\subset \mathbb{R}^2\), and then set the intensity function \(\lambda\) to zero for the region outside the simulation window \(W\), that is \(\lambda(x,y)=0\) when \((x,y)\in R\setminus W\).
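
For example, here is a minimal sketch (my own, with an assumed disk-shaped window) of how the intensity function can be modified; the maximum \(\lambda^*\) is then found over the covering rectangle \(R\), and the rest of the thinning simulation stays the same:

# Simulate on a disk W of radius 0.8 sitting inside the rectangle R=[-1,1]^2 by
# zeroing the intensity outside the disk; points proposed outside W are always thinned.
import numpy as np

s = 0.5
def fun_lambda(x, y):
    insideW = (x**2 + y**2) <= 0.8**2  # indicator of the disk window W
    return 100 * np.exp(-(x**2 + y**2) / s**2) * insideW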

Further reading

In Section 2.5.2 of Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke, there is an outline of the thinning method that I used. The same simulation section appears in the previous edition by Stoyan, Kendall and Mecke, though these books in general have little material on simulation methods.

More details on the thinning method and its connection to acceptance-rejection sampling are given in Section 2.3 of the applications-oriented book Poisson Point Processes by Streit. The acceptance-rejection method is covered in, for example, books on Monte Carlo methods, including Monte Carlo Strategies in Scientific Computing by Liu (in Section 2.2) and Monte Carlo Methods in Financial Engineering by Glasserman (in Section 2.2.2). This method and others for simulating general random variables are covered in stochastic simulation books such as Non-Uniform Random Variate Generation by Devroye and Stochastic Simulation: Algorithms and Analysis by Asmussen and Glynn.

Kroese and Botev have a good introduction to stochastic simulation in the edited collection Stochastic Geometry, Spatial Statistics and Random Fields: Models and Algorithms by Schmidt, where the relevant chapter (number 12) is also freely available online. And of course there are lecture notes on the internet that cover simulation material.

Code

All code from my posts, as always, can be found on my GitHub repository. The code for this post is located here. You can see that the code is very similar to the thinning code, which served as the foundation for this code. (Note that we now keep the points, so the Boolean variable generated from the uniform random variables marks retained points rather than thinned ones.)

I have implemented the code in MATLAB and Python with an intensity function \(\lambda(x,y)=100\exp(-(x^2+y^2)/s^2)\), where \(s>0\) is a scale parameter. Note that in the minimization step, the box constraints are expressed differently in MATLAB and Python: MATLAB first takes the minimum values then the maximum values, whereas Python first takes the \(x\)-values then the \(y\)-values.

The code presented here does not have the empirical check, which I described above, but it is implemented in the code located here. For the parameters used in the code, the total measure is \(\Lambda(W)\approx 77.8068\), meaning each simulation will generate on average almost seventy-eight points.

I have stopped writing code in R for a couple of reasons, but mostly because anything I could think of simulating in R can already be done in the spatial statistics library spatstat. I recommend the book Spatial Point Patterns, co-authored by spatstat's main contributor, Adrian Baddeley.

MATLAB

I have used the fmincon function to find the point that gives the minimum of \(-\lambda(x,y)\).

%Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle

s=0.5; %scale parameter

%Point process parameters
fun_lambda=@(x,y)(100*exp(-((x).^2+(y).^2)/s^2));%intensity function

%%%START -- find maximum lambda -- START %%%
%For an intensity function lambda, given by function fun_lambda,
%finds the maximum of lambda in a rectangular region given by
%[xMin,xMax,yMin,yMax].
funNeg=@(x)(-fun_lambda(x(1),x(2))); %negative of lambda
%initial value(ie centre)
xy0=[(xMin+xMax)/2,(yMin+yMax)/2];%initial value(ie centre)
%Set up optimization step
options=optimoptions('fmincon','Display','off');
%Find largest lambda value
[~,lambdaNegMin]=fmincon(funNeg,xy0,[],[],[],[],...
[xMin,yMin],[xMax,yMax],'',options);
lambdaMax=-lambdaNegMin;
%%%END -- find maximum lambda -- END%%%

%define probability of retaining a point (ie one minus the thinning probability)
fun_p=@(x,y)(fun_lambda(x,y)/lambdaMax);

%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambdaMax);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points

%calculate spatially-dependent thinning probabilities
p=fun_p(xx,yy);

%Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=rand(numbPoints,1)<p; %points to be retained

%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);

%Plotting
plot(xxRetained,yyRetained,'bo'); %plot retained points
xlabel('x');ylabel('y');

The box constraints for the optimization step were expressed as:

[xMin,yMin],[xMax,yMax]
Python

I have used the minimize function in SciPy.

import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #For plotting
from scipy.optimize import minimize #For optimizing
from scipy import integrate

#Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta;

s=0.5; #scale parameter

#Point process parameters
def fun_lambda(x,y):
return 100*np.exp(-(x**2+y**2)/s**2); #intensity function

###START -- find maximum lambda -- START ###
#For an intensity function lambda, given by function fun_lambda,
#finds the maximum of lambda in a rectangular region given by
#[xMin,xMax,yMin,yMax].
def fun_Neg(x):
return -fun_lambda(x[0],x[1]); #negative of lambda

xy0=[(xMin+xMax)/2,(yMin+yMax)/2];#initial value(ie centre)
#Find largest lambda value
resultsOpt=minimize(fun_Neg,xy0,bounds=((xMin, xMax), (yMin, yMax)));
lambdaNegMin=resultsOpt.fun; #retrieve minimum value found by minimize
lambdaMax=-lambdaNegMin;
###END -- find maximum lambda -- END ###

#define thinning probability function
def fun_p(x,y):
return fun_lambda(x,y)/lambdaMax;

#Simulate a Poisson point process
numbPoints = np.random.poisson(lambdaMax*areaTotal);#Poisson number of points
xx = np.random.uniform(0,xDelta,((numbPoints,1)))+xMin;#x coordinates of Poisson points
yy = np.random.uniform(0,yDelta,((numbPoints,1)))+yMin;#y coordinates of Poisson points

#calculate spatially-dependent thinning probabilities
p=fun_p(xx,yy);

#Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=np.random.uniform(0,1,((numbPoints,1)))<p; #points to be retained

#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];

#Plotting
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none', alpha=0.5 );
plt.xlabel("x"); plt.ylabel("y");
plt.show();

The box constraints were expressed as:

(xMin, xMax), (yMin, yMax)
Julia

After writing this post, I later wrote the code in Julia. The code is here and my thoughts about Julia are here.

Thinning point processes

One way to create new point processes is to apply thinning to a point process. As I mentioned in a previous post on point process operations, thinning is a random operation applied to the points of an underlying point process, where the points are thinned (or removed) or retained (or kept) according to some probabilistic rule. Both the thinned and retained points form two separate point processes, but one usually focuses on the retained points. Given an underlying point process, the nature of the thinning rule will result in different types of point processes.

As I detail in the Applications section below, thinning can be used to simulate an inhomogeneous Poisson point process, as I covered in another post.

Thinning types

Thinning can be statistically independent or dependent, meaning that the probability of thinning any point is either independent of, or dependent on, the thinning of other points. The more tractable case is statistically independent thinning, which is the thinning type covered here. We can further group this thinning into two types based on whether the thinning rule depends on the locations of the points. (I use the word location, instead of point, to refer to where a point of a point process is located on the underlying mathematical space on which the point process is defined.)

Spatially independent thinning

The simplest thinning operation is one that does not depend on point locations. This thinning is sometimes referred to as \(p\)-thinning, where the constant \(p\) has the condition \(0\leq p \leq 1\) because it is the probability of thinning a single point. Simply put, the probability of a point being thinned does not depend on the point locations.

Example

We can liken the thinning action to flipping a biased coin with probability of \(p\) for heads (or tails) for each point of the underlying point process, and then removing the point if a head (or tails) occurs. If there were a constant number \(n\) of points of the underlying point process, then the number of thinned (or retained) points will form a binomial random variable with parameters \(n\) and \(p\) (or \(1-p\)).
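
A quick sanity check of that claim (a sketch of my own, not part of the post's code):

# With n points and independent thinning probability p, the number of thinned points
# is Binomial(n, p); its sample mean over many runs should be close to n*p.
import numpy as np

n, p = 100, 0.25
numbThinned = (np.random.uniform(size=(10**4, n)) < p).sum(axis=1)
print(numbThinned.mean(), n * p)  # both should be about 25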

Simulation

Simulating this thinning operation is rather straightforward. Given a realization of a point process, for each point, simply generate or simulate a uniform random variable on the interval \((0,1)\), and if this random variable is less than \(p\), remove the point. (This is simply sampling a Bernoulli distribution, which is covered in this post.)

In the code section below, I have shown how this thinning operation is implemented.

Spatially dependent thinning

To generalize the idea of \(p\)-thinning, we can simply require that the thinning probability of any point depends on its location \(x\), which gives us the concept of \(p(x)\)-thinning. (I write a single \(x\) to denote a point on the plane, that is \(x\in \mathbb{R}^2\), instead of writing, for example, the \(x\) and \(y\) coordinates separately.) More precisely, the probability of thinning a point is given by a function \(p(x)\) such that \(0 \leq p(x)\leq 1\), but all point thinnings occur independently of each other. In other words, this is a spatially dependent thinning that is statistically independent.

Example

I’ll illustrate the concept of (statistically independent) spatially dependent thinning with a somewhat contrived example. We assume that the living locations of all the people in the world form a point process on a (slightly squashed) sphere. Let’s say that Earth has become overpopulated, particularly in the Northern Hemisphere, so we decide to randomly choose people and send them off to another galaxy, but we do it based on how far they live from the North Pole. The thinning rule could be, for example, \(p(x)= \exp(- |x|^2/s^2)\), where \(|x|\) is the distance to the North Pole and \(s>0\) is some constant for distance scaling.

Put another way, a person at location \(x\) flips a biased coin with the probability of heads being equal to \(p(x)=\exp(- |x|^2/s^2)\). If a head comes up, then that person is removed from the planet. With the maximum of \(p(x)\) at the North Pole, we can see that the lucky (or unlucky?) people in countries like Australia, New Zealand (or Aotearoa), South Africa, Argentina and Chile are more likely not to be sent off (that is, thinned) into the great unknown.

For people who live comparable distances from the North Pole, the removal probabilities are similar in value, yet the events of being removed remain independent. For example, the probabilities of removing any two people from the small nation of Lesotho are similar in value, but these two random events are still completely independent of each other.

Simulation

Simulating a spatially dependent thinning is just slightly more involved than the spatially independent case. Given a realization of a point process, for each point at, say, \(x\), simply generate or simulate a uniform random variable on the interval \((0,1)\), and if this random variable is less than \(p(x)\), remove the point.

In the code section, I have shown how this thinning operation is implemented with an example like the above one, but on a rectangular region of Cartesian space. In this setting, the maximum of \(p(x)\) is at the origin, resulting in more points being thinned in this region.

Thinning a Poisson point process

Perhaps not surprisingly, under the thinning operation the Poisson point process exhibits a closure property, meaning that a Poisson point process thinned in a certain way gives another Poisson point process. More precisely, if the thinning operation is statistically independent, then the resulting point process formed from the retained points is also a Poisson point process, regardless of whether the thinning is spatially independent or dependent. The resulting intensity (interpreted as the average density of points) of this new Poisson point process has a simple expression.

Homogeneous case

For a spatially independent \(p\)-thinning, if the original (or underlying) Poisson point process is homogeneous with intensity \(\lambda\), then the point process formed from the retained points is a homogeneous Poisson point process with intensity \((1-p)\lambda\), where \(p\) is the thinning probability. (There are different ways to prove this, but you can gain some intuition behind the proof by conditioning on the Poisson number of points and then applying the law of total probability, as sketched below. Using generating functions also helps.)
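
Here is that conditioning argument sketched out, writing \(\mu\) for the mean number of points of the original process in some bounded region and \(M\) for the number of retained points in that region:

$$ P(M=m)=\sum_{n\geq m} e^{-\mu}\frac{\mu^n}{n!}\binom{n}{m}(1-p)^m p^{n-m} = e^{-\mu}\frac{[(1-p)\mu]^m}{m!}\sum_{k\geq 0}\frac{(p\mu)^k}{k!} = e^{-(1-p)\mu}\frac{[(1-p)\mu]^m}{m!}, $$

which is the Poisson probability mass function with mean \((1-p)\mu\), as claimed.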

Inhomogeneous case

More generally, if we apply a spatially dependent \(p(x)\)-thinning to a homogeneous Poisson point process with intensity \(\lambda\), then the retained points form an inhomogeneous or nonhomogeneous Poisson point process with intensity \((1-p(x))\lambda\), due to the spatial dependence in the thinning function \(p(x)\). This gives a way to simulate such Poisson point processes, which I'll cover in another post.

Splitting

We can see by symmetry that if we look at the thinned points, then the resulting point process is also a Poisson point process, but with intensity \(p(x)\lambda\). The retained and thinned points both form Poisson point processes, but what is really interesting is that these two point processes are independent of each other. This means that any random configuration that occurs among the retained points is independent of any configuration among the thinned points.

This ability to split a Poisson point processes into independent ones is sometimes called the splitting property.

Applications

Thinning point processes has the immediate application of creating new point processes. It can also be used to randomly generate two point processes from one. In network applications, a simple example is using the thinning procedure to model random sleep schemes in wireless networks, where random subsets of the network have been powered down.

Perhaps the most useful application of thinning is creating point processes with spatially dependent intensities, such as that of an inhomogeneous Poisson point process. In another post I give details on how to simulate this point process. In this setting, the thinning operation is essentially acceptance(-rejection) sampling, which I will cover in a future post.

Code

All code from my posts, as always, can be found on my GitHub repository. The code for this post is located here.

Spatially independent thinning

I have implemented in code the simple \(p\)-thinning operation applied to a Poisson point process on a rectangle, but in theory any point process can be used for the underlying point process that is thinned.

MATLAB

%Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle

%Point process parameters
lambda=100; %intensity (ie mean density) of the Poisson process

%Thinning probability
p=0.25; %thinning probability

%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambda);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points

%Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=rand(numbPoints,1)<p; %points to be thinned
booleRetained=~booleThinned; %points to be retained

%x/y locations of thinned points
xxThinned=xx(booleThinned); yyThinned=yy(booleThinned);
%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);

%Plotting
plot(xxRetained,yyRetained,'bo'); %plot retained points
hold on; plot(xxThinned,yyThinned,'ro'); %plot thinned points
xlabel('x');ylabel('y');

R

#Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta;

#Point process parameters
lambda=100; #intensity (ie mean density) of the Poisson process

#Thinning probability
p=0.25; 

#Simulate a Poisson point process
numbPoints=rpois(1,areaTotal*lambda);#Poisson number of points
xx=xDelta*runif(numbPoints)+xMin;#x coordinates of Poisson points
yy=yDelta*runif(numbPoints)+yMin;#y coordinates of Poisson points

#Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=runif(numbPoints)<p; #points to be thinned
booleRetained=!booleThinned; #points to be retained

#x/y locations of thinned points
xxThinned=xx[booleThinned]; yyThinned=yy[booleThinned];
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];

#Plotting
par(pty="s")
plot(xxRetained,yyRetained,'p',xlab='x',ylab='y',col='blue'); #plot retained points
points(xxThinned,yyThinned,col='red'); #plot thinned points

Of course, as I have mentioned before, simulating spatial point processes in R is even easier with the powerful spatial statistics library spatstat. With this library, thinning can be done in quite a general way by using the function rthin.

Python

import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt

#Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta;

#Point process parameters
lambda0=100; #intensity (ie mean density) of the Poisson process

#Thinning probability
p=0.25; 

#Simulate a Poisson point process
numbPoints = np.random.poisson(lambda0*areaTotal);#Poisson number of points
xx = np.random.uniform(0,xDelta,((numbPoints,1)))+xMin;#x coordinates of Poisson points
yy = np.random.uniform(0,yDelta,((numbPoints,1)))+yMin;#y coordinates of Poisson points

#Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=np.random.uniform(0,1,((numbPoints,1)))<p; #points to be thinned
booleRetained=~booleThinned; #points to be retained

#x/y locations of thinned points
xxThinned=xx[booleThinned]; yyThinned=yy[booleThinned];
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];

#Plotting
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none', alpha=0.5 );
plt.scatter(xxThinned,yyThinned, edgecolor='r', facecolor='none', alpha=0.5 );
plt.xlabel("x"); plt.ylabel("y");
plt.show(); 
Spatially dependent thinning

I have implemented in code a \(p(x)\)-thinning operation with the function \(p(x)=\exp(-|x|^2/s^2)\), where \(|x|\) is the Euclidean distance from \(x\) to the origin. This small change means that in the code there will be a vector or array of \(p\) values, instead of a single \(p\) value, in the section where the uniform random variables are generated and compared to said \(p\) values (the line where the uniform random variables are generated in each of the MATLAB, R and Python code blocks presented below).

Again, I have applied thinning to a Poisson point process on a rectangle, but in theory any point process can be used for the underlying point process.

MATLAB

%Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle
 
%Point process parameters
lambda=100; %intensity (ie mean density) of the Poisson process

%Thinning probability parameters
sigma=0.5; %scale parameter for thinning probability function
%define thinning probability function
fun_p=@(s,x,y)(exp(-(x.^2+y.^2)/s^2)); 

%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambda);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points

%calculate spatially-dependent thinning probabilities
p=fun_p(sigma,xx,yy); 

%Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=rand(numbPoints,1)<p; %points to be thinned
booleRetained=~booleThinned; %points to be retained

%x/y locations of thinned points
xxThinned=xx(booleThinned); yyThinned=yy(booleThinned);
%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);

%Plotting
plot(xxRetained,yyRetained,'bo'); %plot retained points
hold on; plot(xxThinned,yyThinned,'ro'); %plot thinned points
xlabel('x');ylabel('y');

R

#Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta;

#Point process parameters
lambda=100; #intensity (ie mean density) of the Poisson process

#Thinning probability parameters
sigma=0.5; #scale parameter for thinning probability function
#define thinning probability function
fun_p <- function(s,x,y) {
  exp(-(x^2 + y^2)/s^2);
}

#Simulate a Poisson point process
numbPoints=rpois(1,areaTotal*lambda);#Poisson number of points
xx=xDelta*runif(numbPoints)+xMin;#x coordinates of Poisson points
yy=yDelta*runif(numbPoints)+yMin;#y coordinates of Poisson points

#calculate spatially-dependent thinning probabilities
p=fun_p(sigma,xx,yy); 

#Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=runif(numbPoints)<p; #points to be thinned
booleRetained=!booleThinned; #points to be retained

#x/y locations of thinned points
xxThinned=xx[booleThinned]; yyThinned=yy[booleThinned];
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];

#Plotting
par(pty="s")
plot(xxRetained,yyRetained,'p',xlab='x',ylab='y',col='blue'); #plot retained points
points(xxThinned,yyThinned,col='red'); #plot thinned points

Again, use the spatial statistics library spatstat, which has the function rthin.

Python

import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt

#Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta;

#Point process parameters
lambda0=100; #intensity (ie mean density) of the Poisson process

#Thinning probability parameters
sigma=0.5; #scale parameter for thinning probability function
#define thinning probability function
def fun_p(s, x, y):
    return np.exp(-(x**2+y**2)/s**2);    

#Simulate a Poisson point process
numbPoints = np.random.poisson(lambda0*areaTotal);#Poisson number of points
xx = np.random.uniform(0,xDelta,((numbPoints,1)))+xMin;#x coordinates of Poisson points
yy = np.random.uniform(0,yDelta,((numbPoints,1)))+yMin;#y coordinates of Poisson points

#calculate spatially-dependent thinning probabilities
p=fun_p(sigma,xx,yy); 

#Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=np.random.uniform(0,1,((numbPoints,1)))<p; #points to be thinned
booleRetained=~booleThinned; #points to be retained

#x/y locations of thinned points
xxThinned=xx[booleThinned]; yyThinned=yy[booleThinned];
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];

#Plotting
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none', alpha=0.5 );
plt.scatter(xxThinned,yyThinned, edgecolor='r', facecolor='none', alpha=0.5 );
plt.xlabel("x"); plt.ylabel("y");
plt.show(); 

Results

In the plotted results, the blue and red circles represent respectively the retained and thinned points.

Spatially independent thinning

For these results, I used a thinning probability \(p=0.25\), which means that roughly a quarter of the points will be thinned, so on average the ratio of blue to red circles is three to one.

MATLAB

R

Python

Spatially dependent thinning

Observe how there are more thinned points (that is, red circles) near the origin, which is of course where the thinning function \(p(x)=\exp(-|x|^2/s^2)\) attains its maximum.

MATLAB

R

Python

Further reading

The thinning operation is covered in Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke (Chapter 5). It is also covered in the book Statistical Inference and Simulation for Spatial Point Processes by Moller and Waagepetersen (Section 3.2.2). Kallenberg presents a more theoretical and rigorous take on thinning Poisson point processes in his new book Random Measures, Theory and Applications (Chapter 3). (A point process can be interpreted as a type of random measure called a random counting measure because it gives the random number of points in a set.)

Thinning is also covered in books that apply point processes to wireless networks, such as Stochastic Geometry and Wireless Networks by Baccelli and Błaszczyszyn (Volume 1, Section 1.3.2) or Stochastic Geometry for Wireless Networks (Section 2.7.3) by Haenggi. These books also give examples of thinning point processes for wireless network applications.

 

Beyond the Poisson point process

As great as the Poisson point process is — and it is pretty great — it is sadly not always suitable for mathematical models. The tractability of this point process is due to the independence of the locations of its points. Informally, this means that point locations of  a Poisson point process in any region will not affect the probability of finding other points in some other region. But such independence may not be true or even approximately true when trying to develop a mathematical model for certain phenomena.

Clustering and Repulsion

One can quickly think of examples where the Poisson point process is not a suitable model. For example, if a star is part of a galaxy, then it is more likely that another star will be located nearby. Conversely, given the location of a tree in a forest, it is usually less likely to find another tree relatively nearby, because trees need a certain amount of land to draw water from the earth. In the language of point processes, we say that the stars tend to show clustering, while the trees tend to show repulsion.

To better model phenomena like trees and stars, we can use point processes that also exhibit the properties of clustering and repulsion. In fact, a good part of spatial statistics has been dedicated to developing statistical tools for testing if repulsion or clustering exists in observed point patterns, which is the spatial statistics term used for samples of objects that can be represented as points in space. (A point process is a random object, so a single realization or outcome of a point process is an example of a point pattern.)

The Poisson point process lies halfway between these two categories, meaning that its points show an equal degree of clustering and repulsion. Mathematically, this can be made more formal by, for example, using something called factorial moment measures, which are mathematical objects used to study point processes.

For probability applications, Błaszczyszyn and Yogeshwaran developed a framework using factorial moment measures, which allowed them to classify point processes into what they called super-Poisson and sub-Poisson, referring respectively to point processes whose points tend to cluster or repel more.

Point Process Operations

If a Poisson point process is not suitable for certain models, then we need to develop and use other point processes that exhibit clustering or repulsion. Fortunately, one way to develop such point processes is to apply certain point process operations to Poisson point processes (and to point processes in general). For developing new point processes, researchers have largely studied three types of point process operations: thinning, superposition, and clustering. (But there are other operations one can apply to a point process, such as randomly moving the points.)

Thinning

To apply the thinning operation means to use some rule for selectively removing points from a point process \(\Phi\) to form a new point process \(\Phi_p\). A rule may be purely random such as the rule known as \(p\)-thinning. For this rule, each point of \(\Phi\) is independently removed (or kept) with some probability \(p\) (or \(1-p\)). This thinning method can be likened to looking at each point, flipping a biased coin with probability \(p\) for heads, and removing the point if a head occurs.

This rule may be generalized by introducing a non-negative function \(p(x)\leq 1\), where \(x\) is a point in the space on which the point process is defined. This allows us to define a location-dependent \(p(x)\)-thinning, where the probability of removing a point \(x\) of \(\Phi\) is \(p(x)\), which depends on where the point is located in the underlying space.

The thinning operation is very useful, and I will write more about it in another post, including some examples implemented in code.
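In the meantime, here is a minimal Python sketch of location-dependent \(p(x)\)-thinning applied to a homogeneous Poisson point process on the unit square; the particular thinning function funThin is just an illustrative assumption, not a canonical choice.

import numpy as np

#Simulate a homogeneous Poisson point process on the unit square
lambda0=200; #intensity of the underlying point process
numbPoints=np.random.poisson(lambda0); #Poisson number of points
xx=np.random.uniform(0,1,numbPoints); #x coordinates
yy=np.random.uniform(0,1,numbPoints); #y coordinates

#Hypothetical thinning function: probability of REMOVING a point at (x,y)
def funThin(x,y):
    return 0.8*x #points are more likely to be removed towards the right

#p(x)-thinning: keep each point independently with probability 1-p(x)
booleKeep=np.random.uniform(0,1,numbPoints)>=funThin(xx,yy);
xxThinned=xx[booleKeep];
yyThinned=yy[booleKeep];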

Superposition

The superposition of two or more point processes simply means taking the union of two or more point processes. (Point processes can be considered as random sets, which is why point process notation consists of notation from set theory, as well as other mathematical branches.)

More formally, if there is a countable collection of point processes \(\Phi_1,\Phi_2\dots\), then their superposition
\[
\Phi=\bigcup_{i=1}^{\infty}\Phi_i,
\]
also forms a point process. If the point processes are all independent and Poisson, then the superposition is another Poisson point process whose intensity measure is the sum of the individual intensity measures, meaning we have not really produced a new point process.
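As a quick illustration (a sketch of my own, with example parameter values), superposing two independent homogeneous Poisson point processes on the unit square in Python amounts to concatenating their points; the total number of points is then Poisson distributed with mean \(\lambda_1+\lambda_2\).

import numpy as np

#Two independent homogeneous Poisson point processes on the unit square
lambda1=50; lambda2=150; #intensities of the two point processes
numb1=np.random.poisson(lambda1); #number of points of the first process
numb2=np.random.poisson(lambda2); #number of points of the second process
#superposition: take the union (ie concatenate) of the two sets of points
xx=np.concatenate((np.random.uniform(0,1,numb1),np.random.uniform(0,1,numb2)));
yy=np.concatenate((np.random.uniform(0,1,numb1),np.random.uniform(0,1,numb2)));
#the combined point process is Poisson with intensity lambda1+lambda2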

Clustering

Related to superposition is a point process operation known as clustering, which entails replacing every point \(x\) in a given point process \(\Phi\) with a cluster of points \(N^x\). Each cluster is also a point process, but with a finite number of points. The union of all the clusters forms a cluster point process, that is
\[
\Phi_c=\bigcup_{x\in \Phi}N^x.
\]

In two previous posts I have already used this point process operation to construct the Matérn and Thomas (cluster) point processes, both of which are built on an underlying Poisson point process. Each point of that point process is assigned a Poisson random number of points, which are then scattered uniformly on a disk (for Matérn) or according to a two-dimensional normal distribution (for Thomas). Both are members of a family of point processes called Neyman-Scott point processes.

Clustering or repulsion?

I mentioned earlier that in spatial statistics there are statistical tools for testing if clustering or repulsion exists in observed point patterns, usually by comparing them to the Poisson point process, which often serves as a benchmark. For example, in spatial statistics the second factorial moment measure is used for the descriptive statistic called Ripley’s \(K\)-function and its rescaled version, Ripley’s \(L\)-function. Keeping with the alphabetical theme, another example of such a statistic is the \(J\)-function, which was introduced by Van Lieshout and Baddeley.

Further reading

For spatial statistics in general, I always recommend the book Spatial Point Patterns: Methodology and Applications with R written by spatial statistics experts Baddeley, Rubak and Turner, which covers the spatial statistics (and point process simulation) R-package spatstat. This book covers statistically testing point patterns to see if they exhibit more clustering or repulsion, with details for the relevant functions in spatstat. For an introduction to the factorial moment framework of Błaszczyszyn and Yogeshwaran for comparing clustering and repulsion, see this chapter from a published collection of lecture notes.

In Chapter 5 of the classic text Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke, point process operations such as thinning are described and used to construct new point processes. Similar material is covered in the previous edition by Stoyan, Kendall and Mecke.

On a more theoretical level, point process operations are covered in the second volume of An Introduction to the Theory of Point Processes by Daley and Vere-Jones, which also treats convergence results for point processes. (Those convergence results are now a little dated, because there are now various convergence results for point processes based on Stein’s method, which are not included in the book.)

See the previous posts for details and citations on the  Matérn and Thomas (cluster) point processes.

Simulating a Thomas cluster point process

Sometimes with just a little tweaking of a point process, you can get a new point process. An example of this is the Thomas point process, which is a type of cluster point process, meaning that its randomly located points tend to form random clusters. This point process is an example of a family of cluster point processes known as Neyman-Scott point processes, which have been used as models in spatial statistics and telecommunications. If that sounds familiar, that is because this point process is very similar to the Matérn cluster point process, which I covered in the previous post.

The only difference between the two point processes is how the points are randomly located. In each cluster of a Thomas point process, each individual point is located according to two independent zero-mean normal variables with variance \(\sigma^2\), describing the \(x\) and \(y\) coordinates relative to the cluster centre, whereas each point of a Matérn point process is located uniformly in a disk.

Working in polar coordinates, an equivalent way to simulate a Thomas point process is to use independent and identically distributed Rayleigh random variables for the radial (or \(\rho\)) coordinates, instead of the random variables with a triangular distribution that are used to simulate the Matérn point process. This method works because, in polar coordinates, a uniform random variable for the angular (or \(\theta\)) coordinate combined with a Rayleigh random variable for the radial (or \(\rho\)) coordinate is equivalent, in Cartesian coordinates, to two independent zero-mean normal variables. This is exactly the trick behind the Box-Muller transform for generating normal random variables using just uniform random variables.
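Here is a small Python snippet (purely illustrative, not part of the simulation code below) that numerically checks this equivalence by comparing the \(x\) coordinates produced by the two routes with a two-sample Kolmogorov-Smirnov test.

import numpy as np
from scipy import stats

sigma=0.05; #standard deviation of each Cartesian coordinate
n=10**5; #number of sample points

#Polar route: uniform angle and Rayleigh radius, converted to Cartesian
theta=2*np.pi*np.random.uniform(0,1,n); #angular coordinates
rho=np.random.rayleigh(scale=sigma,size=n); #radial coordinates
xxPolar=rho*np.cos(theta);

#Cartesian route: an independent zero-mean normal variable
xxNormal=np.random.normal(0,sigma,n);

#A large p-value is consistent with the two samples sharing one distribution
print(stats.ks_2samp(xxPolar,xxNormal));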

If you’re familiar with simulating the Matérn point process, the main difference is what size to make the simulation window for the parent points. I cover that in the next section.

Overview

Simulating a Thomas cluster point process requires first simulating a homogeneous Poisson point process with intensity \(\lambda>0\) on some simulation window, such as a rectangle, which is the simulation window I will use here. Then for each point of this underlying point process, simulate a Poisson number of points with mean \(\mu>0\), and for each of those points simulate two independent zero-mean normal variables with variance \(\sigma^2\), corresponding to the (relative) Cartesian coordinates.

The underlying point process is sometimes called the parent (point) process, and its points are the centres of the clusters. The subsequent point process formed by all the clusters is called the daughter (point) process. I have already written about simulating the homogeneous Poisson point process on a rectangle and on a disk, so those posts are good starting points, and I will not focus too much on the details of these steps.

Importantly, as with the Matérn point process, it’s possible for daughter points to appear in the simulation window that come from parent points outside the simulation window. To handle these edge effects, the point processes must first be simulated on an extended version of the simulation window. Then only the daughter points within the original simulation window are kept and the rest are removed.

We can add a strip of some width \(d\) all around the simulation window. But what value does \(d\) take? Well, in theory, it is possible that a daughter point comes from a parent point that is very far from the simulation window. But that probability becomes vanishingly small as the distance increases, due to the daughter points being located according to zero-mean normal random variables. 

For example, if a single parent is at a distance \(d=6\sigma\) from the simulation window, then there is roughly a one-in-a-billion (about \(10^{-9}\)) chance that a single daughter point will land in the simulation window. The probability is simply \(1-\Phi(6\sigma)\), where \(\Phi\) here denotes the cumulative distribution function of a normal variable with zero mean and standard deviation \(\sigma>0\). This is what they call a six sigma event. In my code, I set \(d=7\sigma\) (six would also do), but \(d=4\sigma\) is good enough, which is the value that the R library spatstat uses by default.

Due to this approximation, this simulation cannot be called a perfect simulation, even though the approximation is highly accurate. In practice, it will have no measurable effect on simulation results, as the number of simulations will rarely be high enough for (hypothetical) daughter points to come from (hypothetical) parent points outside the extended window.
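As a rough numerical check of the tail probabilities mentioned above, here is an illustrative SciPy snippet that evaluates \(1-\Phi(d)\) for a few multiples of \(\sigma\).

from scipy.stats import norm

sigma=0.05; #standard deviation of the daughter point displacements
for k in (4,6,7):
    d=k*sigma #width of the strip added around the simulation window
    #probability that a single normal coordinate (std sigma) exceeds d
    probTail=norm.sf(d,loc=0,scale=sigma) #equals norm.sf(k)
    print("d =",k,"sigma: tail probability about",probTail)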

Steps

Number of points

Simulate the underlying or parent Poisson point process on the (extended) rectangle, giving \(N_P\) points. Then for each parent point, simulate a Poisson number of daughter points, so that cluster \(i\) has \(D_i\) points. The total number of points is then simply \(N=D_1+\dots +D_{N_P}=\sum_{i=1}^{N_P}D_i\). The random variables \(N_P\) and \(D_i\) are Poisson random variables with respective means \(\lambda A\) and \(\mu\), where \(A\) is the area of the (extended) rectangular simulation window. To simulate these random variables in MATLAB, use the poissrnd function. To do this in R, use the standard function rpois. In Python, we can use either the scipy.stats.poisson or numpy.random.poisson function from the SciPy or NumPy libraries.

Locations of points

The points of the parent point process are randomly positioned using Cartesian coordinates. For a homogeneous Poisson point process, the \(x\) and \(y\) coordinates of each point are independent uniform random variables, which is also the case for the binomial point process, covered in an earlier post.

As mentioned in the introduction of this post, the points of the daughter point processes can be randomly positioned using either polar or Cartesian coordinates, thanks to the Box-Muller transform. But because we ultimately convert back to Cartesian coordinates (for example, to plot the points), we will work entirely in that coordinate system. Each point is then simply positioned with two independent zero-mean normal random variables, representing the \(x\) and \(y\) coordinates relative to the original parent point.

Shifting all the points in each cluster disk

In practice (that is, in the code), all the daughter points are simulated relative to the origin. Then for each cluster, all of its points need to be shifted so that the cluster is centred at its parent point, which completes the simulation step.

To use vectorization in the code, the coordinates of each parent point (that is, each cluster centre) are repeated according to the number of daughter points in the corresponding cluster, by using the functions repelem in MATLAB, rep in R, and repeat in Python.
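For example, a minimal NumPy sketch of this replicate-and-shift step, with made-up parent coordinates and cluster sizes purely for illustration, looks like this.

import numpy as np

#Hypothetical parent centres and numbers of daughter points per cluster
xxParent=np.array([0.1,0.5,0.9]);
yyParent=np.array([0.2,0.7,0.4]);
numbPointsDaughter=np.array([2,3,1]);
numbPoints=numbPointsDaughter.sum(); #total number of daughter points
sigma=0.05;

#Daughter points relative to the origin
xx0=np.random.normal(0,sigma,numbPoints);
yy0=np.random.normal(0,sigma,numbPoints);

#Repeat each parent coordinate according to its cluster size, then shift
xx=np.repeat(xxParent,numbPointsDaughter)+xx0;
yy=np.repeat(yyParent,numbPointsDaughter)+yy0;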

Code

I have implemented the simulation procedure in MATLAB, R and Python, which as usual are all very similar. The code can be downloaded here.

MATLAB
%Simulation window parameters
xMin=-.5;xMax=.5;
yMin=-.5;yMax=.5;

%Parameters for the parent and daughter point processes
lambdaParent=10;%density of parent Poisson point process
lambdaDaughter=100;%mean number of points in each cluster
sigma=0.05;%sigma for normal variables (ie random locations) of daughters

%Extended simulation windows parameters
rExt=7*sigma; %extension parameter -- use factor of deviation sigma eg 6 or 7
xMinExt=xMin-rExt;
xMaxExt=xMax+rExt;
yMinExt=yMin-rExt;
yMaxExt=yMax+rExt;
%rectangle dimensions
xDeltaExt=xMaxExt-xMinExt;
yDeltaExt=yMaxExt-yMinExt;
areaTotalExt=xDeltaExt*yDeltaExt; %area of extended rectangle

%Simulate Poisson point process for the parents
numbPointsParent=poissrnd(areaTotalExt*lambdaParent,1,1);%Poisson number of points
%x and y coordinates of Poisson points for the parent
xxParent=xMinExt+xDeltaExt*rand(numbPointsParent,1);
yyParent=yMinExt+yDeltaExt*rand(numbPointsParent,1);

%Simulate Poisson point process for the daughters (ie final point process)
numbPointsDaughter=poissrnd(lambdaDaughter,numbPointsParent,1);
numbPoints=sum(numbPointsDaughter); %total number of points

%Generate the (relative) locations in Cartesian coordinates by
%simulating independent normal variables
xx0=normrnd(0,sigma,numbPoints,1);
yy0=normrnd(0,sigma,numbPoints,1);

%replicate parent points (ie cluster centres)
xx=repelem(xxParent,numbPointsDaughter);
yy=repelem(yyParent,numbPointsDaughter);

%translate points (ie parent points are the cluster centres)
xx=xx(:)+xx0;
yy=yy(:)+yy0;

%thin points if outside the simulation window
booleInside=((xx>=xMin)&(xx<=xMax)&(yy>=yMin)&(yy<=yMax));
%retain points inside simulation window
xx=xx(booleInside);
yy=yy(booleInside);

%Plotting
scatter(xx,yy);
R

Note: it is a bit tricky to write “<-” in the R code (as it automatically changes to the html equivalent in the HTML editor I am using), so I have usually used “=” instead of the usual “<-”.

#Simulation window parameters
xMin=-.5;xMax=.5;
yMin=-.5;yMax=.5;

#Parameters for the parent and daughter point processes
lambdaParent=10;#density of parent Poisson point process
lambdaDaughter=100;#mean number of points in each cluster
sigma=0.05; #sigma for normal variables (ie random locations) of daughters

#Extended simulation windows parameters
rExt=7*sigma; #extension parameter -- use factor of deviation sigma eg 6 or 7
xMinExt=xMin-rExt;
xMaxExt=xMax+rExt;
yMinExt=yMin-rExt;
yMaxExt=yMax+rExt;
#rectangle dimensions
xDeltaExt=xMaxExt-xMinExt;
yDeltaExt=yMaxExt-yMinExt;
areaTotalExt=xDeltaExt*yDeltaExt; #area of extended rectangle

#Simulate Poisson point process for the parents
numbPointsParent=rpois(1,areaTotalExt*lambdaParent);#Poisson number of points
#x and y coordinates of Poisson points for the parent
xxParent=xMinExt+xDeltaExt*runif(numbPointsParent);
yyParent=yMinExt+yDeltaExt*runif(numbPointsParent);

#Simulate Poisson point process for the daughters (ie final point process)
numbPointsDaughter=rpois(numbPointsParent,lambdaDaughter);
numbPoints=sum(numbPointsDaughter); #total number of points

#Generate the (relative) locations in Cartesian coordinates by
#simulating independent normal variables
xx0=rnorm(numbPoints,0,sigma);
yy0=rnorm(numbPoints,0,sigma);

#replicate parent points (ie cluster centres)
xx=rep(xxParent,numbPointsDaughter);
yy=rep(yyParent,numbPointsDaughter);

#translate points (ie parent points are the cluster centres)
xx=xx+xx0;
yy=yy+yy0;

#thin points if outside the simulation window
booleInside=((xx>=xMin)&(xx<=xMax)&(yy>=yMin)&(yy<=yMax));
#retain points inside simulation window
xx=xx[booleInside];
yy=yy[booleInside];

#Plotting
par(pty="s")
plot(xx,yy,'p',xlab='x',ylab='y',col='blue');

Of course, as I have mentioned before, simulating a spatial point process in R is even easier with the powerful spatial statistics library spatstat. The Thomas cluster point process is simulated by using the function rThomas, but other cluster point processes, including Neyman-Scott types, are possible.

Python

Note: in previous posts I used the SciPy functions for random number generation, but now I use the NumPy ones. There is little difference, as SciPy builds on NumPy.

import numpy as np  # NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt  # for plotting

# Simulation window parameters
xMin = -.5;
xMax = .5;
yMin = -.5;
yMax = .5;

# Parameters for the parent and daughter point processes
lambdaParent = 10;  # density of parent Poisson point process
lambdaDaughter = 100;  # mean number of points in each cluster
sigma = 0.05;  # sigma for normal variables (ie random locations) of daughters

# Extended simulation windows parameters
rExt = 7 * sigma;  # extension parameter -- use factor of deviation sigma eg 6 or 7
xMinExt = xMin - rExt;
xMaxExt = xMax + rExt;
yMinExt = yMin - rExt;
yMaxExt = yMax + rExt;
# rectangle dimensions
xDeltaExt = xMaxExt - xMinExt;
yDeltaExt = yMaxExt - yMinExt;
areaTotalExt = xDeltaExt * yDeltaExt;  # area of extended rectangle

# Simulate Poisson point process for the parents
numbPointsParent = np.random.poisson(areaTotalExt * lambdaParent);  # Poisson number of points
# x and y coordinates of Poisson points for the parent
xxParent = xMinExt + xDeltaExt * np.random.uniform(0, 1, numbPointsParent);
yyParent = yMinExt + yDeltaExt * np.random.uniform(0, 1, numbPointsParent);

# Simulate Poisson point process for the daughters (ie final point process)
numbPointsDaughter = np.random.poisson(lambdaDaughter, numbPointsParent);
numbPoints = sum(numbPointsDaughter);  # total number of points

# Generate the (relative) locations in Cartesian coordinates by
# simulating independent normal variables
xx0 = np.random.normal(0, sigma, numbPoints);  # (relative) x coordinates
yy0 = np.random.normal(0, sigma, numbPoints);  # (relative) y coordinates

# replicate parent points (ie cluster centres)
xx = np.repeat(xxParent, numbPointsDaughter);
yy = np.repeat(yyParent, numbPointsDaughter);

# translate points (ie parent points are the cluster centres)
xx = xx + xx0;
yy = yy + yy0;

# thin points if outside the simulation window
booleInside = ((xx >= xMin) & (xx <= xMax) & (yy >= yMin) & (yy <= yMax));
# retain points inside simulation window
xx = xx[booleInside];
yy = yy[booleInside];

# Plotting
plt.scatter(xx, yy, edgecolor='b', facecolor='none', alpha=0.5);
plt.xlabel("x");
plt.ylabel("y");
plt.axis('equal');
Julia 

After writing this post, I later wrote the code in Julia. The code is here and my thoughts about Julia are here.

Results

The results show that the clusters of a Thomas point process tend to be more blurred than those of a Matérn point process, which has cluster edges clearly defined by the disks. The points of a Thomas point process can be far away from the centre of their cluster, depending on the variance of the normal random variables used in the simulation.

MATLAB

R

Python

Further reading

Thomas (and Matérn) cluster point processes, and more generally Neyman-Scott point processes, are covered in standard books on the related fields of spatial statistics, point processes and stochastic geometry, such as the following: Spatial Point Patterns: Methodology and Applications with R by Baddeley, Rubak and Turner, on page 461; Statistical Analysis and Modelling of Spatial Point Patterns by Illian, Penttinen, Stoyan and Stoyan, page 370 and Section 6.3.2; Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke, on page 173; and Statistical Inference and Simulation for Spatial Point Processes by Møller and Waagepetersen, in Section 5.3. I would probably recommend the first two books for beginners.

The Thomas point process has also appeared in models of wireless networks, which we covered in the book Stochastic Geometry Analysis of Cellular Networks by Błaszczyszyn, Haenggi, Keeler, and Mukherjee, Section 8.1.8.

More generally, Neyman-Scott point processes belong to a family of point processes called shot noise Cox point processes; see the paper by Møller. 

I mentioned above the book Spatial Point Patterns: Methodology and Applications with R, which is written by spatial statistics experts Baddeley, Rubak and Turner. It covers the spatial statistics (and point process simulation) R-package spatstat.

In my travels on the web, I found this post where the writer also simulates the Thomas and Matérn point processes in Python, independent of my code.  That code is a bit different to mine because I use the repeat function and simulate all the Poisson variables at once, instead of using a for-loop and simulating a Poisson variable for each iteration.  I also think it’s not quite correct because I do not see how they account for edge effects.  

Simulating a Matérn cluster point process

A Matérn cluster point process is a type of cluster point process, meaning that its randomly located points tend to form random clusters. (I skip the details here, but by using techniques from spatial statistics, it is possible to make the definition of clustering more precise.) This point process is an example of a family of cluster point processes known as Neyman-Scott point processes, which have been used in spatial statistics and telecommunications.

I should point out that the Matérn cluster point process should not be confused with the Matérn hard-core point process, which is a completely different type of point process. (For a research article, I have actually written code in MATLAB that simulates this type of point process.) Bertil Matérn proposed at least four types of point processes, and his name also refers to a specific type of covariance function used to define Gaussian processes.

Overview

Simulating a Matérn cluster point process requires first simulating  a homogeneous Poisson point process with an intensity \(\lambda>0\) on some simulation window, such as a rectangle, which is the simulation window I will use here. Then for each point of this underlying point process, simulate a Poisson number of points with mean \(\mu>0\) uniformly on a disk with a constant radius \(r>0\). The underlying  point process is sometimes called the parent (point) process, and its points are centres of the cluster disks.

The subsequent point process on all the disks is called the daughter (point) process, and it forms the clusters. I have already written about simulating the homogeneous Poisson point process on a rectangle and on a disk, so those posts are good starting points, and I will not focus too much on the details of these steps.

Edge effects

The main challenge behind sampling this point process, which I originally forgot about in an earlier version of this post, is that it’s possible for daughter points to appear in the simulation window that come from parent points outside the simulation window. In other words, parent points outside the simulation window contribute points inside the window.

To remove these edge effects, the point processes must be simulated on an extended version of the simulation window. Then only the daughter points within the simulation window are kept and the rest are removed.  Consequently, the points are simulated on an extended window, but we only see the points inside the simulation window.

To create the extended simulation window, we can add a strip of width \(r\) all around the simulation window. Why? Well, the distance \(r\) is the maximum distance from the simulation window that a possibly contributing parent point (outside the simulation window) can exist, while still having daughter points inside the simulation window. This means it is impossible for a hypothetical parent point beyond this distance (outside the extended window) to generate a daughter point that can fall inside the simulation window. 

Steps

Number of points

Simulate the underlying or parent Poisson point process on the (extended) rectangle, giving \(N_P\) points. Then for each parent point, simulate a Poisson number of points, so that disk \(i\) has \(D_i\) points. The total number of points is then simply \(N=D_1+\dots +D_{N_P}=\sum_{i=1}^{N_P}D_i\). The random variables \(N_P\) and \(D_i\) are Poisson random variables with respective means \(\lambda A\) and \(\mu\), where \(A\) is the area of the (extended) rectangular simulation window. To simulate these random variables in MATLAB, use the poissrnd function. To do this in R, use the standard function rpois. In Python, we can use either the scipy.stats.poisson or numpy.random.poisson function from the SciPy or NumPy libraries.

Locations of points

The points of the parent point process are randomly positioned using Cartesian coordinates. For a homogeneous Poisson point process, the \(x\) and \(y\) coordinates of each point are independent uniform random variables, which is also the case for the binomial point process, covered in a previous post. The points of the daughter point processes are randomly positioned using polar coordinates. For points located uniformly on a disk, the \(\theta\) and \(\rho\) coordinates of each point are independent variables with, respectively, uniform and triangular distributions, which was covered in a previous post. Then we convert the coordinates back to Cartesian form, which is easily done in MATLAB with the pol2cart function. In languages without such a function: \(x=\rho\cos(\theta)\) and \(y=\rho\sin(\theta)\).

Shifting all the points in each cluster disk

In practice (that is, in the code), all the daughter points are simulated in a disk with its centre at the origin. Then for each cluster disk, all the points have to be shifted so that the centre of the disk coincides with the parent point, which completes the simulation step.

To use vectorization in the code, the coordinates of each parent point (that is, each cluster centre) are repeated according to the number of daughter points in the corresponding cluster, by using the functions repelem in MATLAB, rep in R, and repeat in Python.

Code

I’ll now give some code in MATLAB, R and Python, which you can see are all very similar. It’s also located here.

MATLAB

The MATLAB code is located here.

R

The R code is located here.

Of course, as I have mentioned before, simulating a spatial point process in R is even easier with the powerful spatial statistics library spatstat. The Matérn cluster point process is simulated by using the function rMatClust, but other cluster point processes, including Neyman-Scott types, are possible.

Python

The Python code is located here.

Note: in previous posts I used the SciPy functions for random number generation, but now use the NumPy ones, but there is little difference, as SciPy builds off NumPy.
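For convenience, here is a compact Python sketch of the whole procedure just described; it mirrors the structure of the Thomas code above, and the parameter values are only examples (they are not necessarily those used in the linked code).

import numpy as np
import matplotlib.pyplot as plt

#Simulation window parameters
xMin=-.5;xMax=.5;yMin=-.5;yMax=.5;

#Parameters for the parent and daughter point processes (example values)
lambdaParent=10; #density of parent Poisson point process
lambdaDaughter=100; #mean number of points in each cluster
radiusCluster=0.1; #radius of each cluster disk

#Extended simulation window: add a strip of width radiusCluster all around
xMinExt=xMin-radiusCluster;xMaxExt=xMax+radiusCluster;
yMinExt=yMin-radiusCluster;yMaxExt=yMax+radiusCluster;
areaTotalExt=(xMaxExt-xMinExt)*(yMaxExt-yMinExt); #area of extended rectangle

#Simulate the parent Poisson point process on the extended window
numbPointsParent=np.random.poisson(areaTotalExt*lambdaParent);
xxParent=xMinExt+(xMaxExt-xMinExt)*np.random.uniform(0,1,numbPointsParent);
yyParent=yMinExt+(yMaxExt-yMinExt)*np.random.uniform(0,1,numbPointsParent);

#Simulate the number of daughter points for each parent
numbPointsDaughter=np.random.poisson(lambdaDaughter,numbPointsParent);
numbPoints=numbPointsDaughter.sum(); #total number of points

#Daughter locations relative to the origin: uniform on a disk of radius radiusCluster
theta=2*np.pi*np.random.uniform(0,1,numbPoints); #angular coordinates
rho=radiusCluster*np.sqrt(np.random.uniform(0,1,numbPoints)); #radial coordinates
xx0=rho*np.cos(theta);
yy0=rho*np.sin(theta);

#Replicate parent coordinates and shift daughters to their cluster centres
xx=np.repeat(xxParent,numbPointsDaughter)+xx0;
yy=np.repeat(yyParent,numbPointsDaughter)+yy0;

#Keep only the daughter points inside the original simulation window
booleInside=(xx>=xMin)&(xx<=xMax)&(yy>=yMin)&(yy<=yMax);
xx=xx[booleInside];yy=yy[booleInside];

#Plotting
plt.scatter(xx,yy,edgecolor='b',facecolor='none',alpha=0.5);
plt.xlabel("x");plt.ylabel("y");plt.axis('equal');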

Julia 

After writing this post, I later wrote the code in Julia. The code is here and my thoughts about Julia are here.

Results

MATLAB

R

Python  

Further reading

Matérn cluster point processes, and more generally Neyman-Scott point processes, are covered in standard books on the related fields of spatial statistics, point processes and stochastic geometry, such as the following: Spatial Point Patterns: Methodology and Applications with R by Baddeley, Rubak and Turner, on page 461; Statistical Analysis and Modelling of Spatial Point Patterns by Illian, Penttinen, Stoyan and Stoyan, Section 6.3.2, starting on page 370; Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke, on page 173; and Statistical Inference and Simulation for Spatial Point Processes by Møller and Waagepetersen, in Section 5.3. I would probably recommend the first two books for beginners.

The Matérn point process has also appeared in models of wireless networks, which we covered in the book Stochastic Geometry Analysis of Cellular Networks by Błaszczyszyn, Haenggi, Keeler, and Mukherjee, Section 8.1.8.

I mentioned above the book Spatial Point Patterns: Methodology and Applications with R, which is written by spatial statistics experts Baddeley, Rubak and Turner. It covers the spatial statistics (and point process simulation) R-package spatstat.

Determinantal thinning of point processes with network learning applications

My colleague and I uploaded a manuscript:

  • Błaszczyszyn and Keeler, Determinantal thinning of point processes with network learning applications.

https://arxiv.org/abs/1810.08672

Details

The paper uses a (relatively) new model framework in machine learning. This framework is based on a special type of point process called a determinantal point process, which is also called a fermion point process. (This particle model draws inspiration from the form of the wave function in quantum mechanics.) Kulesza and Taskar introduced and developed the framework for using determinantal point processes for machine learning models.

Code

The MATLAB code for producing the results in the paper can be found here:

https://github.com/hpaulkeeler/DetPoisson_MATLAB

I also re-wrote (or translated) the MATLAB code into Python:

https://github.com/hpaulkeeler/DetPoisson_Python

Simulating a Poisson point process on a triangle

The title gives it away. But yes, after  two posts about simulating a Poisson point process  on a rectangle and disk, the next shape is a triangle. Useful, right?

Well, I actually had to do this once for a part of something larger. You can divide  polygons, regular or irregular, into triangles, which is often called triangulation, and there is much code that does triangulation.  Using the independence property of the Poisson process, you can then simulate a Poisson point process on each triangle, and you end up with a Poisson point process on a polygon.

But the first step is to do it on a triangle. Consider a general triangle with its three corners labelled \(\textbf{A}\), \(\textbf{B}\) and \(\textbf{C}\). Again, simulating a Poisson point process comes down to the number of points and the locations of points.

Method

Number of points

The number of points of a homogeneous Poisson point process in any shape with area \(A\) is simply a Poisson random variable with mean \(\lambda A\). For the triangle’s area, we just use Heron’s formula, which says that a triangle with sides \(a\), \(b\), and \(c\) has the area \(A=\sqrt{s(s-a)(s-b)(s-c)}\), where \(s=(a+b+c)/2\); you just need Pythagoras’ theorem (that is, the distance formula) to calculate the side lengths \(a\), \(b\), and \(c\) from the corner coordinates. Interestingly, this standard formula can be prone to numerical error if the triangle is very thin or needle-shaped. A more numerically stable expression is

\(A=\frac{1}{4}\sqrt{ (a+(b+c)) (c-(a-b)) (c+(a-b)) (a+(b-c)) },\)

where the brackets do matter and the sides are labelled so that \(a\geq b\geq c\); see Section 2.5 in the book by Higham.
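As a purely illustrative Python comparison of the two expressions (the helper names are mine), both can be coded in a few lines and tried on a needle-shaped triangle.

import math

def areaHeron(a,b,c):
    #classic Heron's formula
    s=(a+b+c)/2
    return math.sqrt(s*(s-a)*(s-b)*(s-c))

def areaStable(a,b,c):
    #numerically stable rearrangement; sides sorted so that a>=b>=c
    a,b,c=sorted((a,b,c),reverse=True)
    return 0.25*math.sqrt((a+(b+c))*(c-(a-b))*(c+(a-b))*(a+(b-c)))

#A needle-shaped (very thin) triangle: two long sides and one tiny side
a,b,c=1.0,1.0,1e-8
print(areaHeron(a,b,c),areaStable(a,b,c))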

Of course in MATLAB you can just use the function polyarea or the function with the same name in R.

Now just generate or simulate Poisson random variables with mean (or parameter)  \(\lambda A\). In MATLAB,  use the poissrnd function with the argument \(\lambda A\). In R, it is done similarly with the standard  function rpois . In Python, we can use either the scipy.stats.poisson or numpy.random.poisson function from the SciPy or NumPy libraries.

Locations of points

We need to position all the points randomly and uniformly on a triangle. As with the previous two simulation cases, to position a single point \((x, y)\), you first need to produce two random uniform variables on the unit interval \((0,1)\), say \(U\) and \(V\). I’ll denote the \(x\) and \(y\) coordinates of corner \(\textbf{A}\) by \(x_{\textbf{A}}\) and \(y_{\textbf{A}}\), and similarly for the other two corners. To get the random \(x\) and \(y\) values, you use these two formulas:

\(x=(1-\sqrt{U})\, x_{\textbf{A}}+\sqrt{U}(1-V)\, x_{\textbf{B}}+\sqrt{U}V\, x_{\textbf{C}}\)

\(y=(1-\sqrt{U})\, y_{\textbf{A}}+\sqrt{U}(1-V)\, y_{\textbf{B}}+\sqrt{U}V\, y_{\textbf{C}}\)

Done. A Poisson point process simulated on a triangle.

Code

I now present some code in MATLAB, R and Python, which you can see are all very similar. To avoid using a for-loop and to instead employ MATLAB’s inbuilt vectorization, I use the dot notation for elementwise products such as \(\sqrt{U}V\). In R and Python (with NumPy and SciPy), that’s done automatically.

MATLAB


%Simulation window parameters -- points A,B,C of a triangle
xA=0;xB=1;xC=1; %x values of three points
yA=0;yB=0;yC=1; %y values of three points

%Point process parameters
lambda=100; %intensity (ie mean density) of the Poisson process

%calculate sides of triangle
a=sqrt((xA-xB)^2+(yA-yB)^2);
b=sqrt((xB-xC)^2+(yB-yC)^2);
c=sqrt((xC-xA)^2+(yC-yA)^2);
s=(a+b+c)/2; %calculate semi-perimeter

%Use Heron's formula -- or use polyarea
areaTotal=sqrt(s*(s-a)*(s-b)*(s-c)); %area of triangle

%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambda);%Poisson number of points
U=(rand(numbPoints,1));%uniform random variables
V=(rand(numbPoints,1));%uniform random variables

xx=(1-sqrt(U))*xA+sqrt(U).*(1-V)*xB+sqrt(U).*V*xC;%x coordinates of points
yy=(1-sqrt(U))*yA+sqrt(U).*(1-V)*yB+sqrt(U).*V*yC;%y coordinates of points

%Plotting
scatter(xx,yy);
xlabel('x');ylabel('y');

R

Note: it is a bit tricky to write “<-” in the R code (as it automatically changes to the html equivalent in the HTML editor I am using), so I have usually used “=” instead of the usual “<-”.


#Simulation window parameters -- points A,B,C of a triangle
xA=0;xB=1;xC=1; #x values of three points
yA=0;yB=0;yC=1; #y values of three points

#Point process parameters
lambda=100; #intensity (ie mean density) of the Poisson process

#calculate sides of triangle
a=sqrt((xA-xB)^2+(yA-yB)^2);
b=sqrt((xB-xC)^2+(yB-yC)^2);
c=sqrt((xC-xA)^2+(yC-yA)^2);
s=(a+b+c)/2; #calculate semi-perimeter

#Use Heron's formula to calculate area
areaTotal=sqrt(s*(s-a)*(s-b)*(s-c)); #area of triangle

#Simulate a Poisson point process
numbPoints=rpois(1,areaTotal*lambda);#Poisson number of points
U=runif(numbPoints);#uniform random variables
V=runif(numbPoints);#uniform random variables

xx=(1-sqrt(U))*xA+sqrt(U)*(1-V)*xB+sqrt(U)*V*xC;#x coordinates of points
yy=(1-sqrt(U))*yA+sqrt(U)*(1-V)*yB+sqrt(U)*V*yC;#y coordinates of points

#Plotting
plot(xx,yy,'p',xlab='x',ylab='y',col='blue');

Simulating a Poisson point process in R is even easier, with the amazing spatial statistics library spatstat. You just need to define the triangular window.

#Simulation window parameters -- points A,B,C of a triangle
xA=0;xB=1;xC=1; #x values of three points
yA=0;yB=0;yC=1; #y values of three points

#Point process parameters
lambda=100; #intensity (ie mean density) of the Poisson process

library("spatstat");
#create triangle window object
windowTriangle=owin(poly=list(x=c(xA,xB,xC), y=c(yA,yB,yC)));
#create Poisson "point pattern" object
ppPoisson=rpoispp(lambda,win=windowTriangle)
plot(ppPoisson); #Plot point pattern object
#retrieve x/y values from point pattern object
xx=ppPoisson$x;
yy=ppPoisson$y;

Python

Note: “lambda” is a reserved word in Python (and other languages), so I have used “lambda0” instead.

#import libraries
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt

#Simulation window parameters -- points A,B,C of a triangle
xA=0;xB=1;xC=1; #x values of three points
yA=0;yB=0;yC=1; #y values of three points

#Point process parameters
lambda0=100; #intensity (ie mean density) of the Poisson process

#calculate sides of triangle
a=np.sqrt((xA-xB)**2+(yA-yB)**2);
b=np.sqrt((xB-xC)**2+(yB-yC)**2);
c=np.sqrt((xC-xA)**2+(yC-yA)**2);
s=(a+b+c)/2; #calculate semi-perimeter

#Use Heron's formula to calculate area -- or use polyarea
areaTotal=np.sqrt(s*(s-a)*(s-b)*(s-c)); #area of triangle

#Simulate a Poisson point process
numbPoints = scipy.stats.poisson(lambda0*areaTotal).rvs();#Poisson number of points
U = scipy.stats.uniform.rvs(0,1,((numbPoints,1)));#uniform random variables
V= scipy.stats.uniform.rvs(0,1,((numbPoints,1)));#uniform random variables

xx=(1-np.sqrt(U))*xA+np.sqrt(U)*(1-V)*xB+np.sqrt(U)*V*xC;#x coordinates of points
yy=(1-np.sqrt(U))*yA+np.sqrt(U)*(1-V)*yB+np.sqrt(U)*V*yC;#y coordinates of points

#Plotting
plt.scatter(xx,yy, edgecolor='b', facecolor='none', alpha=0.5 );
plt.xlabel("x"); plt.ylabel("y");

Results

MATLAB

R

Python

Further reading

The topic of placing a single point uniformly on a general triangle is discussed in this StackExchange post. For the formulas, it cites the paper “Shape distributions” by Osada, Funkhouser, Chazelle and Dobkin, where no proof is given.

I originally looked at placing single points in the cells of a Dirichlet or Voronoi tessellation — terms vary. There is a lot of literature on this topic, particularly when the seeds of the cells form a Poisson point process. The references in the articles on Wikipedia and MathWorld are good starting points.

Correction

In a previous version of this post, there was an error in the two Cartesian formulas for randomly placing a point uniformly in a triangle. This has been fixed, but the error never existed in the code.

Simulating a Poisson point process on a disk

Sometimes one needs to simulate a Poisson point process on a disk. I know I often do. A disk or disc, depending on your spelling preference, is isotropic or rotationally-invariant, so a lot of my simulations of a Poisson point process happen in a circular simulation window when I am considering such a setting. For example, maybe you want to consider a single wireless receiver in a Poisson network of wireless transmitters, which only cares about the distance to a transmitter. Alternatively, maybe you want to randomly sprinkle a virtual cake. What to do? A Poisson point process on a disk is the answer.

In this post I will simulate a Poisson point process with intensity \(\lambda>0\) on a disk with radius \(r>0\).

Steps

The simulation steps are very similar to those in the previous post where I simulated a  homogeneous Poisson point process on a rectangle, and I suggest going back to that post if you are not familiar with the material. The main difference between simulation on a rectangle and a disk is the positioning of the points, but first we’ll look at the number of points.

Number of points

The number of points of a Poisson point process falling within a circle of radius \(r>0\) is a Poisson random variable with mean  \(\lambda A\), where \(A=\pi r^2\) is the area of the disk. As in the rectangular case, this is the most complicated part of the simulation procedure. But as long as your preferred programming language can produce (pseudo-)random numbers according to a Poisson distribution, you can simulate a homogeneous Poisson point process on a disk.

To do this in MATLAB,  use the poissrnd function with the argument \(\lambda A\). In R, it is done similarly with the standard  function rpois . In Python, we can use either the scipy.stats.poisson or numpy.random.poisson function from the SciPy or NumPy libraries.

Locations of points

We need to position all the points randomly and uniformly on a disk. In the case of the rectangle, we worked in Cartesian coordinates. It is then natural that we now work in polar coordinates.  I’ll denote the angular and radial coordinate respectively by \(\theta\) and \(\rho\). To generate the random angular (or \(\theta\)) values, we simply produce uniform random variables between zero and one, which is what all standard (pseudo-)random number generators do in programming languages. But we then multiply all these numbers by \(2\pi\), meaning that all the numbers now fall between \(0\) and \(2\pi\).

To generate the random radial (or \(\rho\)) values, a reasonable guess would be to do the same as before and generate uniform random variables between zero and one, and then multiply them by the disk radius \(r\). But that would be wrong.

Before multiplying uniform random variables by the radius, we must take the square root of all the random numbers. We then multiply them by the radius, generating random variables between \(0\) and \(r\). (We must take the square root because the area element of a sector or disk is proportional to the radius squared, and not the radius.) These random numbers do not have a uniform distribution, due to the square root, but in fact their distribution is an example of the triangular distribution, which is defined with three real-valued parameters \(a\), \(b\) and \(c\), and for our case, set \(a=0\) and \(b=c=r\).

In summary, if \(U\) and \(V\) are two independent uniform random variables on \((0,1)\), then a point located uniformly at random on a disk of radius \(r\) has the polar coordinates \((\rho,\theta)=(r\sqrt{U}, 2\pi V)\).
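As a quick illustrative check of the claim about the triangular distribution, the following SciPy snippet (a sketch of my own) compares the simulated radial values with a triangular distribution that has \(a=0\) and \(b=c=r\), using a Kolmogorov-Smirnov test.

import numpy as np
from scipy import stats

r=1; #radius of disk
n=10**5; #number of sample points
rho=r*np.sqrt(np.random.uniform(0,1,n)); #radial coordinates

#Triangular distribution on [0,r] with its mode at r (ie a=0, b=c=r);
#in SciPy's parametrization this is triang with shape c=1, loc=0, scale=r
print(stats.kstest(rho,stats.triang(c=1,loc=0,scale=r).cdf));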

From polar to Cartesian coordinates

That’s it. We have generated polar coordinates for points randomly and uniformly located in a disk. But to plot the points, often we have to convert coordinates back to Cartesian form. This is easily done in MATLAB with the pol2cart function. In languages without such a function, trigonometry comes to the rescue: \(x=\rho\cos(\theta)\) and \(y=\rho\sin(\theta)\).

Equal x and y axes

Sometimes the plotted points more resemble points on an ellipse than a disk due to the different scaling of the x and y axes. To fix this in MATLAB, run the command: axis square. In Python, set axis(‘equal’) in your plot; see this page for a demonstration.

Code

I’ll now give some code in MATLAB, R and Python, which, as you can see, are all very similar.

MATLAB
%Simulation window parameters
r=1; %radius of disk
xx0=0; yy0=0; %centre of disk

areaTotal=pi*r^2; %area of disk
 
%Point process parameters
lambda=100; %intensity (ie mean density) of the Poisson process
 
%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambda);%Poisson number of points
theta=2*pi*(rand(numbPoints,1)); %angular coordinates
rho=r*sqrt(rand(numbPoints,1)); %radial coordinates

%Convert from polar to Cartesian coordinates
[xx,yy]=pol2cart(theta,rho); %x/y coordinates of Poisson points

%Shift centre of disk to (xx0,yy0)
xx=xx+xx0;
yy=yy+yy0;
 
%Plotting
scatter(xx,yy);
xlabel('x');ylabel('y');
axis square;
R

Note: it is a bit tricky to write “<-” in the R code (as it automatically changes to the html equivalent in the HTML editor I am using), so I have usually used “=” instead of the usual “<-”.

#Simulation window parameters
r=1; #radius of disk
xx0=0; yy0=0; #centre of disk

areaTotal=pi*r^2; #area of disk

#Point process parameters
lambda=100; #intensity (ie mean density) of the Poisson process

#Simulate Poisson point process
numbPoints=rpois(1,areaTotal*lambda);#Poisson number of points
theta=2*pi*runif(numbPoints);#angular coordinates of Poisson points
rho=r*sqrt(runif(numbPoints));#radial coordinates of Poisson points

#Convert from polar to Cartesian coordinates
xx=rho*cos(theta);
yy=rho*sin(theta);

#Shift centre of disk to (xx0,yy0)
xx=xx+xx0;
yy=yy+yy0;

#Plotting
par(pty="s")
plot(xx,yy,'p',xlab='x',ylab='y',col='blue');

Of course, with the amazing spatial statistics library spatstat, simulating a Poisson point process in R is even easier.

library("spatstat"); #load spatial statistics library
#create Poisson "point pattern" object
ppPoisson=rpoispp(lambda,win=disc(radius=r,centre=c(xx0,yy0))) 
plot(ppPoisson); #Plot point pattern object
#retrieve x/y values from point pattern object
xx=ppPoisson$x; 
yy=ppPoisson$y;

Actually, you can even do it all in two lines: one for loading the spatstat library and one for creating and plotting the point pattern object.

Python

Note: “lambda” is a reserved word in Python (and other languages), so I have used “lambda0” instead.

import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #for plotting

#Simulation window parameters
r=1;  #radius of disk
xx0=0; yy0=0; #centre of disk
areaTotal=np.pi*r**2; #area of disk

#Point process parameters
lambda0=100; #intensity (ie mean density) of the Poisson process

#Simulate Poisson point process
numbPoints = np.random.poisson(lambda0*areaTotal);#Poisson number of points
theta=2*np.pi*np.random.uniform(0,1,numbPoints); #angular coordinates 
rho=r*np.sqrt(np.random.uniform(0,1,numbPoints)); #radial coordinates 

#Convert from polar to Cartesian coordinates
xx = rho * np.cos(theta);
yy = rho * np.sin(theta);

#Shift centre of disk to (xx0,yy0) 
xx=xx+xx0; yy=yy+yy0;

#Plotting
plt.scatter(xx,yy, edgecolor='b', facecolor='none', alpha=0.5 );
plt.xlabel("x"); plt.ylabel("y");
plt.axis('equal');
Julia

After writing this post, I later wrote the code in Julia. The code is here and my thoughts about Julia are here.

Results

MATLAB
R 
Python  

Further reading

The third edition of the classic book Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke details on page 54 how to uniformly place points on a disk, which they call the radial way. The same simulation section appears in the previous edition by Stoyan, Kendall and Mecke (Chiu didn’t appear as an author until the current edition), though these books in general have little material on simulation methods. There is the book Spatial Point Patterns: Methodology and Applications with R written by spatial statistics experts  Baddeley, Rubak and Turner, which covers the spatial statistics (and point process simulation) R-package spatstat.