The Metropolis-Hastings algorithm in C

There’s an old saying, which I just made up. If you really want to understand something, code it up in C. (Taking it to the next level, I recall an undergraduate subject on programming Motorola chips using assembly. But I wouldn’t recommend any of that.)

I wrote some C code that basically does what my code in this previous post does without the pretty pictures. The code performs a Metropolis-Hastings algorithm, simulating random variables (or, more correctly, generating random variates) for a joint probability density describing two random variables.

In previous posts, I have covered the topic of Markov chain Monte Carlo (MCMC) methods, particularly the central workhorse, the Metropolis(-Rosenbluth-Rosenbluth-Teller-Teller)-Hastings algorithm. These methods are frequently used in Bayesian statistics, high-dimensional integration, and optimization. For more details on how they work, I have written a couple of posts, starting with this one and ending with this one, where I detail the mechanics of MCMC methods.

Let’s call this code MCMC C code. (Wise man once wrote: C:/DOS C:/DOS/RUN RUN/DOS/RUN.)

Code considerations

The C programming language was not written for playing with random numbers. The standard uniform random number generator in C, simply called rand, is not good for research-level random simulations. But it does the job for illustration purposes.

In lieu of this generator, the Mersenne Twister is a popular algorithm for producing such numbers, which is widely recommended and used. There are implementations of this algorithm in the CUDA and MKL libraries for Nvidia GPUs and Intel CPUs; see this CUDA page and this MKL page for details. Check out this PDF file for further details on the CUDA version.
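
As a taste of what swapping out rand might look like, here's a minimal sketch of a replacement uniform generator based on Marsaglia's xorshift64 (not the Mersenne Twister, and still not research-grade, but a step up from rand). The names rngState, xorshift64 and unirand_xorshift are hypothetical and not part of my code:

#include <stdint.h>

static uint64_t rngState = 88172645463325252ULL; // seed; must be nonzero

// Marsaglia's xorshift64: three shift-xor operations per draw
static uint64_t xorshift64(void)
{
    rngState ^= rngState << 13;
    rngState ^= rngState >> 7;
    rngState ^= rngState << 17;
    return rngState;
}

// map the top 53 bits to a double on (0,1)
static double unirand_xorshift(void)
{
    return ((double)(xorshift64() >> 11) + 0.5) / 9007199254740992.0; // 2^53
}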

In addition to that, my C code needs to simulate normal (or Gaussian) random variables. For that I wrote my own simple code using the Box-Muller transform, which I covered in a previous post, so the code would be self-contained and less opaque. But in reality, you should always use functions from a pre-written library for generating variates according to a normal distribution, or whichever distribution you need.
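
For reference, the transform takes two independent uniform random variables \(U_1\) and \(U_2\) on \((0,1)\) and returns two independent standard normal variables

$$Z_1=\sqrt{-2\ln U_2}\,\cos(2\pi U_1), \qquad Z_2=\sqrt{-2\ln U_2}\,\sin(2\pi U_1),$$

which is exactly what the function normrand in the code below does in polar form, before scaling by \(\sigma\) and shifting by \(\mu\).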

Finally, C was never intended as a scientific language, despite its wide use as one. (Historically, that was Fortran’s job, which is still the workhorse for many serious number-crunching institutes, hence why there’s a Fortran-ready version of CUDA.) So when handling sets of numbers, such as vectors and matrices, one has to use pointers and malloc more often than not, which can be tricky. This is the case here, though the use of pointers in this code is relatively simple.

Where are the pretty pictures?

Unfortunately, when number crunching in C, you don’t immediately have access to plotting libraries that are available in scientific programming languages such as Python and Julia.

But you can simply create .csv (or text) files, and then plot them using whichever library you prefer. And if you have gnuplot installed, you can perform simple one-dimensional histograms using one-line commands such as:

gnuplot -e "plot 'MCMCData_1D.csv' using 1 bins=20;" -persist

In the above command, the file MCMCData_1D.csv has the random variates (the simulated random variables) stored in a single column.
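
And assuming the two-dimensional samples are stored as two comma-separated columns, as the code below does, a quick scatter plot of the joint samples is a similar one-liner:

gnuplot -e "set datafile separator ','; plot 'MCMCData_2D.csv' using 1:2 with dots;" -persist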

Code

I only present code for the more complicated two-dimensional case. The code can be found here.

Note that there’s an option in the code of MCMC_1D.c to plot the results using gnuplot, if it’s installed on your machine. I didn’t include code for plotting the results of the 2-D case as gnuplot doesn’t do a 2-D histogram.

Warning: The code uses the standard pseudo-random number generator in C, which is known for being bad. I only used the built-in C generator to keep my code self-contained. For the same reason, I also wrote my own code for generating Gaussian (or normal) random variables, using the Box-Muller transform, but in reality one would never do that for research or industry purposes.

/***********************************************************************
 * Runs a simple Metropolis-Hastings (ie MCMC) algorithm to simulate two
 * jointly distributed random variables with probability density
 * p(x,y)=exp(-(x^4+x*y+y^2)/s^2)/consNorm, where s>0 and consNorm is a
 * normalization constant. The probability density function is defined in
 * the function pdf_single.
 *
 * NOTE: In practice, the value of the normalization constant is not needed, as it cancels out in the algorithm.
 *
 * NOTE: This code will *create* a local file (see variable strFilename) to store results. It will *overwrite* that file if it already exists.
 *
 * WARNING: This code uses the default C random number generator, which is known for failing various tests of randomness.
 * Strongly recommended to use another generator for purposes beyond simple illustration.
 *
 * Author: H. Paul Keeler, 2024.
 * Website: hpaulkeeler.com
 * Repository: github.com/hpaulkeeler/posts
 *
 ***********************************************************/

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
#include <stdbool.h>
#include <string.h>

const long double pi = 3.14159265358979323846; // constant pi for generating polar coordinates
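// (note: M_PI is a POSIX extension, not guaranteed by standard C, hence the local constant)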

// helper function declarations; see below for definitions
static double *unirand(double *randValues, unsigned numbRand);                           // generate  uniform random variables on (0,1)
static double *normrand(double *randValues, unsigned numbRand, double mu, double sigma); // generate normal random variables
static double pdf_single(double x, double y, double s);                      // define probability density to be simulated
static double mean_var(double *set_sample, unsigned numbSim, double *varX);              // calculate mean and variance

int main(int argc, char *argv[])
{

    if (argc > 1)
    {
        fprintf(stderr, "This program takes no arguments...\n");
        exit(1);
    }
    else
    {

        char strFilename[] = "MCMCData_2D.csv"; // filename for storing simulated random variates

        // intializes (pseudo)-random number generator
        time_t timeCPU; // use CPU time for seed
        srand((unsigned)time(&timeCPU));
        // srand(42); //to reproduce results

        bool booleWriteData = true; // write data to file
        bool booleStats = true;     // perform simple mean/std stats

        // parameters
        unsigned numbSim = 1e4;   // number of random variables simulated
        unsigned numbSteps = 200; // number of steps for the Markov process
        double sigma = 2;         // standard deviation for normal random steps
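        // (sigma tunes the proposal step size: larger steps explore faster but are accepted less often)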

        // probability density parameters
        double s = .5; // scale parameter for distribution to be simulated

        // Metropolis-Hastings variables
        // proposal for a new position in the random walk
        double zxRandProposal;      
        double zyRandProposal;      
        double pdfProposal; // density for proposed position
        double pdfCurrent;  // density of current position
        double ratioAccept; // ratio of densities (ie acceptance probability)
        double uRand;       // uniform variable for Bernoulli trial (ie a coin flip)
        // random step (normally distributed)
        double *p_numbNormX = (double *)malloc(1 * sizeof(double));
        double *p_numbNormY = (double *)malloc(1 * sizeof(double));
        // positions of the random walk (ie the simulated random variables after numbSteps)
        double *p_xRand = (double *)malloc(numbSim * sizeof(double));
        double *p_yRand = (double *)malloc(numbSim * sizeof(double));

        (void)unirand(p_xRand, numbSim); // random initial values
        (void)unirand(p_yRand, numbSim); // random initial values

        unsigned i, j; // loop variables
        for (i = 0; i < numbSim; i++)
        {
            // loop through each random walk instance (or random variable to be simulated)

            pdfCurrent = pdf_single(*(p_xRand + i), *(p_yRand + i), s); // current probability density

            for (j = 0; j < numbSteps; j++)
            {
                // loop through each step of the random walk
                (void)normrand(p_numbNormX, 1, 0, sigma);
                (void)normrand(p_numbNormY, 1, 0, sigma);
                // take a (normally distributed) random step in x and y
                zxRandProposal = (*(p_xRand + i)) + (*p_numbNormX);
                zyRandProposal = (*(p_yRand + i)) + (*p_numbNormY);

                pdfProposal = pdf_single(zxRandProposal, zyRandProposal, s); // proposed probability density

                // acceptance rejection step
                (void)unirand(&uRand, 1);
                ratioAccept = pdfProposal / pdfCurrent;
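                // (comparing uRand directly with ratioAccept implements the usual min(1, ratio) acceptance rule)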
                if (uRand < ratioAccept)
                {
                    // update state of random walk / Markov chain
                    *(p_xRand + i) = zxRandProposal;
                    *(p_yRand + i) = zyRandProposal;
                    pdfCurrent = pdfProposal;
                }
            }
        }

        free(p_numbNormX);
        free(p_numbNormY);

        if (booleStats)
        {

            // initialize statistics variables (for testing results)
            char strVariable[] = "XY";
            double *p_AllRand[] = {p_xRand, p_yRand};
            double meanTemp = 0;
            double varTemp = 0;
            double stdTemp = 0;
            char strTemp='X';
            for (i = 0; i < 2; i++)
            {
                meanTemp = mean_var(p_AllRand[i], numbSim, &varTemp);
                stdTemp = sqrt(varTemp);
                strTemp=strVariable[i];
                printf("The average of the %c random variables is %lf.\n", strTemp, meanTemp);
                printf("The standard deviation of the %c random  variables is %lf.\n", strTemp, stdTemp);
            }
        }

        if (booleWriteData)
        {
            // print to file
            FILE *outputFile;
            outputFile = fopen(strFilename, "w");
            for (i = 0; i < numbSim; i++)
            {
                fprintf(outputFile, "%lf,%lf\n", *(p_xRand + i), *(p_yRand + i)); // output to file
            }
            fclose(outputFile);
            printf("Data printed to file.\n");
        }
        free(p_xRand);
        free(p_yRand);

        return (0);
    }
}

static double pdf_single(double x, double y, double s)
{
    // returns the probability density of a single point (x,y) inside a simulation window defined below
    double pdf_output;

    // non-zero density window parameters
    double xMin = -1;
    double xMax = 1;
    double yMin = -1;
    double yMax = 1;

    if ((x >= xMin) && (x <= xMax) && (y >= yMin) && (y <= yMax))
    {
        pdf_output = exp(-((pow(x, 4) + x * y + pow(y, 2)) / (s * s)));
    }
    else
    {
        pdf_output = 0;
    }
    return pdf_output;
}

static double *normrand(double *randValues, unsigned numbRand, double mu, double sigma)
{
    // simulate pairs of iid normal variables using Box-Muller transform
    // https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform

    double U1, U2, thetaTemp, rhoTemp, Z1, Z2;
    int i = 0;
    while (i < numbRand)
    {
        // simulate variables in polar coordinates (theta, rho)
        (void)unirand(&U1, 1);
        thetaTemp = 2 * pi * U1; // create uniform theta values
        (void)unirand(&U2, 1);
        rhoTemp = sqrt(-2 * log(U2)); // create Rayleigh rho values

        // change to Cartesian coordinates
        Z1 = rhoTemp * cos(thetaTemp);
        Z1 = sigma * Z1 + mu;
        randValues[i] = Z1; // assign first of random variable pair
        i++;
        if (i < numbRand)
        {
            // if more variables are needed, generate second value of random pair
            Z2 = rhoTemp * sin(thetaTemp);
            Z2 = sigma * Z2 + mu;
            randValues[i] = Z2; // assign second of random variable pair
            i++;
        }
        else
        {
            break;
        }
    }
    return randValues;
}

static double *unirand(double *randValues, unsigned numbRand)
{ // simulate numbRand uniform random variables on the unit interval
  // storing them in randValues which must be allocated by the caller
  // with enough space for numbRand doubles

    for (int i = 0; i < numbRand; i++)
    {
        randValues[i] = (double)rand() / RAND_MAX;
    }
    return randValues;
}

static double mean_var(double *set_sample, unsigned numbSim, double *varX)
{
    // mean and variance of set_sample
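    // (note: the one-pass formula E[X^2] - (E[X])^2 can suffer numerical cancellation for large means; fine for this illustration)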
    int i;
    // initialize statistics variables (for testing results)
    double meanX = 0;
    double meanXSquared = 0;
    double tempX;
    for (i = 0; i < numbSim; i++)
    {
        tempX = *(set_sample + i);
        meanX += tempX / ((double)numbSim);
        meanXSquared += tempX * tempX / ((double)numbSim);
    }

    *varX = meanXSquared - meanX * meanX;
    return meanX;
}

Further reading

Here’s some information on the rand function and random number generation:

  • https://codeforces.com/blog/entry/61587
  • https://c-faq.com/lib/randrange.html
  • https://www.codeproject.com/KB/recipes/pitfalls_random_number.aspx

Acknowledg(e)ments

A hat tip to C and CUDA guru Alex Stivala who pointed out a couple of minor issues in my original C code.


Checking Poisson point process simulations

In previous posts I described how to simulate homogeneous Poisson point processes on a rectangle, disk and triangle. Then I covered how to randomly thin a point process in a spatially dependent manner. Building off these posts, I wrote in my last post how to simulate an inhomogeneous or nonhomogeneous Poisson point process. Now I’ll describe how to verify that the simulation code correctly simulates the desired Poisson point process.

Although this post is focused on the Poisson point process, I stress that parts of the material hold for all point processes. Also, not surprisingly, some of the material and the code overlaps with that presented in the post on the inhomogeneous point process.

Basics

Any Poisson point process is defined with a measure called the intensity measure or mean measure, which I’ll denote by \(\Lambda\). For practical purposes, I assume that the intensity measure \(\Lambda\) has a derivative \(\lambda(x,y)\), where \(x\) and \(y\) denote the Cartesian coordinates. The function \(\lambda(x,y)\) is often called the intensity function or just intensity. I assume this function is bounded, so \(\lambda(x,y)<\infty\) for all points in a simulation window \(W\). Finally, I assume that the simulation window \(W\) is a rectangle.

Several times before I have mentioned that simulating a Poisson point process requires simulating two random components: the number of points and the locations of points. Working backwards, to check a Poisson simulation, we must run the simulation a large number of times (say \(10^3\) or \(10^4\)) and collect the statistics on these two properties. We’ll start by examining the easier of the two random components.

Number of points

For any Poisson point process, the number of points is a Poisson random variable with a parameter (that is, a mean) \(\Lambda(W)\). Under our previous assumptions, this is given by the surface integral

$$\Lambda(W)=\int_W \lambda(x,y)dxdy.$$

Presumably we can evaluate such an integral analytically or numerically in order to simulate the Poisson point process. To check that we correctly simulate the random number of points, we just need to simulate the point process a large number of times and compare the statistics to those given by the analytic expressions.

Moments

By definition, the intensity measure of a point process is the measure that gives the average or expected number of points in any given region. As the number of simulations increases, the (sample) average number of points will converge to the intensity measure \(\Lambda(W)\). I should stress that this is a test of the intensity measure, a type of first moment, so it will work for the intensity measure of any point process.

For Poisson point processes, there is another moment test that can be done. It can be shown mathematically that the variance of the number of points will also converge to the intensity measure \(\Lambda(W)\), giving a second empirical test based on second moments. There is no point process theory here, as this moment result is simply due to the number of points being distributed according to a Poisson distribution. The second moment is very good for checking Poissonness, forming the basis of statistical tests. If both the first and second moment tests hold, then there’s a very strong chance the number of points is a Poisson variable.
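
More concretely, if \(N\) denotes the random number of points of the Poisson point process falling in the window \(W\), then

$$\mathbb{E}[N]=\operatorname{Var}[N]=\Lambda(W),$$

so the sample mean and the sample variance of the simulated point counts should both converge to the same number.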

Empirical distribution

Beyond the first two moments, an even more thorough test is to estimate an empirical distribution for the number of points. That is, we perform a histogram on the number of points, and then we normalize it, so the total mass sums to one, producing an empirical probability distribution for the number of points.

The results should closely match the probability mass function of a Poisson distribution with parameter \(\Lambda(W)\). This is simply

$$ P(N=n)= \frac{[\Lambda(W)]^n}{n!} e^{-\Lambda(W)}, $$

where \(N\) is the random number of points in the window \(W\). If the empirical distribution is close to the results given by the above expression, then we can confidently say that the number of points is a Poisson random variable with the right parameter or mean.

Locations of points

To check the positioning of points, we can empirically estimate the intensity function \(\lambda(x,y)\). A simple way to do this is to perform a two-dimensional histogram on the point locations. This is very similar to the one-dimensional histogram, which I suggested to do for testing the number of points. It just involves counting the number of points that fall into two-dimensional non-overlapping subsets called bins.

To estimate the intensity function, each bin count needs to be rescaled by the number of simulations and by the area of the bin, giving a density value for each bin. This empirical estimate of the intensity function should resemble the true intensity function \(\lambda(x,y)\). For a visual comparison, we can use a surface plot to illustrate the two sets of results.
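
To spell out the rescaling: if \(B\) is a bin with area \(|B|\) and we run \(n\) simulations, then a natural empirical estimate of the intensity over \(B\) is

$$\hat{\lambda}(B)=\frac{\text{total number of simulated points falling in } B}{n\,|B|}.$$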

This procedure will work for estimating the intensity function of any point process, not just a Poisson one.

Advanced tests

In spatial statistics there are more advanced statistical tests for testing how Poisson a point pattern is. But these tests are arguably too complicated for checking a simple point process that is rather easy to simulate. Furthermore, researchers usually apply these tests to a small number of point patterns. In this setting, it is not possible to accurately obtain empirical distributions without further assumptions. But with simulations, we can generate many simulations and obtain good empirical distributions. In short, I would not use such tests for just checking that I have properly coded a Poisson simulation.

Results

I produced the results with ten thousand simulations, which gave good estimates and took only a few seconds to complete on a standard desktop computer. Clearly increasing the number of simulations increases the accuracy of the statistics, but it also increases the computation time.

For the results, I used the intensity function

$$\lambda(x,y)=\lambda_1(x,y)+\lambda_2(x,y),$$

where

$$\lambda_1(x,y)=80e^{-((x+0.5)^2+(y+0.5)^2)/s^2},$$

$$\lambda_2(x,y)=100e^{-((x-0.5)^2+(y-0.5)^2)/s^2},$$

and \(s>0\) is a scale parameter. We can see that this function has two maxima or peaks at \((-0.5,-0.5)\) and \((0.5,0.5)\). (For the value \(s=0.5\) used in the code, the peak heights are approximately 80 and 100, as the cross terms are negligible.)

MATLAB

[Figure: simulation results and checks produced with MATLAB.]

Python

[Figure: simulation results and checks produced with Python.]

Further reading

I have not covered much new theoretical stuff in this post, so looking at the references in previous posts, such as this one, should help.

For two-dimensional histograms, I recommend going to the respective MATLAB and Python function websites. Here’s an example of a two-dimensional histogram implemented in Python.

Probably the most difficult part for me was performing the plotting in Python. I recommend these links:

https://plot.ly/python/3d-surface-plots/

https://chrisalbon.com/python/basics/set_the_color_of_a_matplotlib/

https://matplotlib.org/examples/color/colormaps_reference.html

Code

All code from my posts, as always, can be found on my GitHub repository. The code for this post is located here.

Estimating the statistical moments is standard. Performing the histograms is also routine, but when normalizing, you have to choose the option that returns the empirical estimate of the probability density function (pdf).

Fortunately, scientific programming languages usually have functions for performing two-dimensional histograms. What is a bit tricky is how to normalize or rescale the bin counts. The histogram functions can, for example, divide by the number of simulations, the area of each bin, or both. In the end, I chose the pdf option in both MATLAB and Python to give an empirical estimate of the probability density function, and then multiplied it by the average number of points, which was calculated in the previous check. (Although I could have done this in a single step in MATLAB, but not in Python, I chose to do it in a couple of steps in both languages so the code matches more closely.)
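
In symbols, writing \(\hat{f}(x,y)\) for the histogram normalized as an empirical probability density (the pdf option) and \(\bar{N}\) for the sample mean number of points from the earlier check, the intensity estimate computed in the code is

$$\hat{\lambda}(x,y)=\bar{N}\,\hat{f}(x,y).$$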

MATLAB

I used the surf function to plot the intensity function and its estimate; see below for details on the histograms.

close all;

%Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle

numbSim=10^4; %number of simulations

s=0.5; %scale parameter

%Point process parameters
fun_lambda=@(x,y)(80*exp(-((x+0.5).^2+(y+0.5).^2)/s^2)+100*exp(-((x-0.5).^2+(y-0.5).^2)/s^2));%intensity function

%%%START -- find maximum lambda -- START %%%
%For an intensity function lambda, given by function fun_lambda,
%finds the maximum of lambda in a rectangular region given by
%[xMin,xMax,yMin,yMax].
funNeg=@(x)(-fun_lambda(x(1),x(2))); %negative of lambda
xy0=[(xMin+xMax)/2,(yMin+yMax)/2];%initial value(ie centre)
%Set up optimization step
options=optimoptions('fmincon','Display','off');
%Find largest lambda value
[~,lambdaNegMin]=fmincon(funNeg,xy0,[],[],[],[],...
[xMin,yMin],[xMax,yMax],'',options);
lambdaMax=-lambdaNegMin;
%%%END -- find maximum lambda -- END%%%

%define thinning probability function
fun_p=@(x,y)(fun_lambda(x,y)/lambdaMax);

%for collecting statistics -- set numbSim=1 for one simulation
numbPointsRetained=zeros(numbSim,1); %vector to record number of points
for ii=1:numbSim
%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambdaMax);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points

%calculate spatially-dependent thinning probabilities
p=fun_p(xx,yy);

%Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=rand(numbPoints,1)<p; %points to be retained

%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);
%collect number of points simulated
numbPointsRetained(ii)=length(xxRetained);
end

%Plotting
plot(xxRetained,yyRetained,'bo'); %plot retained points
xlabel('x');ylabel('y');

%run empirical test on number of points generated
if numbSim>=10
%total mean measure (average number of points)
LambdaNumerical=integral2(fun_lambda,xMin,xMax,yMin,yMax)
%Test: as numbSim increases, numbPointsMean converges to LambdaNumerical
numbPointsMean=mean(numbPointsRetained)
%Test: as numbSim increases, numbPointsVar converges to LambdaNumerical
numbPointsVar=var(numbPointsRetained)

end

For the histogram section, I used the histcounts and histcounts2 functions respectively to estimate the distribution of the number of points and the intensity function. I used the pdf option.

Number of points

histcounts(numbPointsRetained,binEdges,'Normalization','pdf');

Locations of points

histcounts2(xxVectorAll,yyVectorAll,numbBins,'Normalization','pdf');

Python

I used the Matplotlib library to plot the intensity function and its estimate; see below for details on the histograms.

import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #for plotting
from matplotlib import cm #for heatmap plotting
from mpl_toolkits import mplot3d #for 3-D plots
from scipy.optimize import minimize #for optimizing
from scipy import integrate #for integrating
from scipy.stats import poisson #for the Poisson probability mass function

plt.close("all"); #close all plots

#Simulation window parameters
xMin=-1;xMax=1;
yMin=-1;yMax=1;
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta;

numbSim=10**4; #number of simulations
numbBins=30; #number of bins for histogram

#Point process parameters
s=0.5; #scale parameter

def fun_lambda(x,y):
    #intensity function
    lambdaValue=80*np.exp(-((x+0.5)**2+(y+0.5)**2)/s**2)+100*np.exp(-((x-0.5)**2+(y-0.5)**2)/s**2);
    return lambdaValue;

###START -- find maximum lambda -- START ###
#For an intensity function lambda, given by function fun_lambda,
#finds the maximum of lambda in a rectangular region given by
#[xMin,xMax,yMin,yMax].
def fun_Neg(x):
    return -fun_lambda(x[0],x[1]); #negative of lambda

xy0=[(xMin+xMax)/2,(yMin+yMax)/2];#initial value(ie centre)
#Find largest lambda value
resultsOpt=minimize(fun_Neg,xy0,bounds=((xMin, xMax), (yMin, yMax)));
lambdaNegMin=resultsOpt.fun; #retrieve minimum value found by minimize
lambdaMax=-lambdaNegMin;
###END -- find maximum lambda -- END ###

#define thinning probability function
def fun_p(x,y):
    return fun_lambda(x,y)/lambdaMax;

#for collecting statistics -- set numbSim=1 for one simulation
numbPointsRetained=np.zeros(numbSim); #vector to record number of points
xxAll=[]; yyAll=[];

### START -- Simulation section -- START ###
for ii in range(numbSim):
    #Simulate a Poisson point process
    numbPoints = np.random.poisson(lambdaMax*areaTotal);#Poisson number of points
    xx = xDelta*np.random.uniform(0,1,numbPoints)+xMin;#x coordinates of Poisson points
    yy = yDelta*np.random.uniform(0,1,numbPoints)+yMin;#y coordinates of Poisson points

    #calculate spatially-dependent thinning probabilities
    p=fun_p(xx,yy);

    #Generate Bernoulli variables (ie coin flips) for thinning
    booleRetained=np.random.uniform(0,1,numbPoints)<p; #points to be retained

    #x/y locations of retained points
    xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
    numbPointsRetained[ii]=xxRetained.size;
    xxAll.extend(xxRetained); yyAll.extend(yyRetained);
### END -- Simulation section -- END ###

#Plotting a simulation
fig1 = plt.figure();
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none');
plt.xlabel("x"); plt.ylabel("y");
plt.title('A single realization of a Poisson point process');
plt.show();

#run empirical test on number of points generated
###START -- Checking number of points -- START###
#total mean measure (average number of points)
LambdaNumerical=integrate.dblquad(fun_lambda,xMin,xMax,lambda x: yMin,lambda y: yMax)[0];
#Test: as numbSim increases, numbPointsMean converges to LambdaNumerical
numbPointsMean=np.mean(numbPointsRetained);
#Test: as numbSim increases, numbPointsVar converges to LambdaNumerical
numbPointsVar=np.var(numbPointsRetained);
binEdges=np.arange(numbPointsRetained.min(),(numbPointsRetained.max()+2))-0.5; #bins centred on integers
pdfEmp, binEdges=np.histogram(numbPointsRetained, bins=binEdges,density=True);

nValues=np.arange(numbPointsRetained.min(),numbPointsRetained.max()+1);
#analytic solution of probability density
pdfExact=(poisson.pmf(nValues,LambdaNumerical));

#Plotting
fig2 = plt.figure();
plt.scatter(nValues,pdfExact, color='b', marker='s',facecolor='none',label='Exact');
plt.scatter(nValues,pdfEmp, color='r', marker='+',label='Empirical');
plt.xlabel("n"); plt.ylabel("P(N=n)");
plt.title('Distribution of the number of points');
plt.legend();
plt.show();
###END -- Checking number of points -- END###

###START -- Checking locations -- START###
#2-D Histogram section
p_Estimate, xxEdges, yyEdges = np.histogram2d(xxAll, yyAll,bins=numbBins,density=True);
lambda_Estimate=p_Estimate*numbPointsMean;

xxValues=(xxEdges[1:]+xxEdges[0:xxEdges.size-1])/2;
yyValues=(yyEdges[1:]+yyEdges[0:yyEdges.size-1])/2;
X, Y = np.meshgrid(xxValues,yyValues) #create x/y matrices for plotting

#analytic solution of probability density
lambda_Exact=fun_lambda(X,Y);

#Plot empirical estimate
fig3 = plt.figure();
plt.rc('text', usetex=True);
plt.rc('font', family='serif');
ax=plt.subplot(211,projection='3d');
surf = ax.plot_surface(X, Y,lambda_Estimate,cmap=plt.cm.plasma);
plt.xlabel("x"); plt.ylabel("y");
plt.title(r'Estimate of $\lambda(x,y)$');
plt.locator_params(axis='x', nbins=5);
plt.locator_params(axis='y', nbins=5);
plt.locator_params(axis='z', nbins=3);
#Plot exact expression
ax=plt.subplot(212,projection='3d');
surf = ax.plot_surface(X, Y,lambda_Exact,cmap=plt.cm.plasma);
plt.xlabel("x"); plt.ylabel("y");
plt.title(r'True $\lambda(x,y)$');
plt.locator_params(axis='x', nbins=5);
plt.locator_params(axis='y', nbins=5);
plt.locator_params(axis='z', nbins=3);
###END -- Checking locations -- END###

For the histogram section, I used the histogram and histogram2d functions respectively to estimate the distribution of the number of points and the intensity function. I used the pdf option. (The SciPy website recommends not using the normed option.)

Number of points

np.histogram(numbPointsRetained, bins=binEdges,density=True);

Locations of points

np.histogram2d(xxAll, yyAll,bins=numbBins,density=True);