The Metropolis-Hastings algorithm in C with multi-variable densities

Here’s a C implementation of the Metropolis(-Rosenbluth-Rosenbluth-Teller-Teller)-Hastings algorithm that can handle joint multi-variable probability densities, thus simulating a finite number of random variables. The Metropolis-Hastings algorithm is the central piece in Markov chain Monte Carlo (MCMC) methods, which have become essential in Bayesian statistics, where they are used to tackle high-dimensional integration. I have written a couple of posts about these methods, starting with this one and ending with this one.

My previous C code only works with densities of a couple of variables. That code is covered in this post, which in turn is based on a previous post, in which I presented the algorithm implemented in the scientific programming languages Python, Julia, MATLAB and R. Those examples were mostly for illustration purposes, as there are already good pre-written libraries in those languages.

Code considerations

I’ll describe some considerations for implementing this algorithm in C. Some of this overlaps with what I wrote in the previous post. In writing this more general version of the code, I arguably cleaned up the old code, which is often the case when you generalize things in coding and mathematics.

Storing multi-dimensional values in C

In C, when handling sets of numbers, such as vectors and matrices, one has to use pointers and the malloc function more often than not, which can create headaches. For this algorithm, probably the most important data object is the one for storing the (current) positions of the multi-dimensional random walks across all the simulation runs. This conceptually results in a rectangular grid or table of numbers. In mathematics, you would just use a \(d \times n\) matrix, where \(d\) is the number of dimensions and \(n\) is the number of simulation runs.

In most programming languages, this table of numbers typically corresponds to a \(d \times n\) array. More specifically in C, you could do this with a multi-dimensional array or an array of pointers. But in this case, you don’t need to use the inherent structure of a matrix. Besides, any multi-dimensional array in C will be stored in memory as a one-dimensional array, so you can store the numbers in a simple one-dimensional array or vector with \(d \cdot n\) elements in total. (I’ve used the \(\cdot\) notation here to highlight the multiplication.) The code does this using a single pointer.

Then using this vector, you just need to index (or map) between the vector and the matrix appropriately, which the code does with the (integer) variable indexSimDim. There are simple one-to-one mappings between the elements in the vector and the elements in the \(d \times n\) matrix. For example, the element in row \(k\) (the dimension number) and column \(i\) (the simulation number) of the matrix corresponds to the element number \(m=i\cdot d+k\) in the vector; see the code below.

unsigned indexSimDim; // index for keeping track of two-dimensional data as a one-dimensional array // indexSimDim = i * numbDim + k, where i is simulation number and k is dimension number (minus one)

This is a standard trick. In fact, you’ll see this type of line of code as necessary boilerplate in CUDA-based (and similar) code in kernels (routines) due to the inherent grid hierarchy of graphical processing units (GPUs). But that discussion deserves a post on its own.
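
For concreteness, here is a minimal sketch of the mapping and its inverse. (The helper function names here are mine for illustration and do not appear in the repository code.)

// map simulation number i and dimension number k to a vector element
static unsigned toFlatIndex(unsigned i, unsigned k, unsigned numbDim)
{
    return i * numbDim + k;
}

// recover the simulation and dimension numbers from vector element m
static void fromFlatIndex(unsigned m, unsigned numbDim, unsigned *i, unsigned *k)
{
    *i = m / numbDim; // simulation number
    *k = m % numbDim; // dimension number
}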

Randomness in C

In previous posts, I’ve remarked that the standard uniform random number generator in C, called rand, is not good enough for research-level randomness. That said, it works fine for regular simulations. The Mersenne Twister is a widely recommended and used algorithm for producing better pseudo-random numbers.
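
To give a flavour of a simple alternative, here is a sketch of Marsaglia’s xorshift64 generator, which is tiny and passes more randomness tests than rand, though it is still not the Mersenne Twister. The seed below is an arbitrary nonzero value:

#include <stdint.h>

// state for Marsaglia's xorshift64 generator; must be seeded with a nonzero value
static uint64_t stateRand = 88172645463325252ULL;

// generate a uniform random variable on [0,1) using xorshift64
static double unirand_xorshift(void)
{
    stateRand ^= stateRand << 13;
    stateRand ^= stateRand >> 7;
    stateRand ^= stateRand << 17;
    // use the top 53 bits to form a double on [0,1)
    return (double)(stateRand >> 11) / 9007199254740992.0; // 2^53
}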

To simulate normal (or Gaussian) random variables, I wrote my own simple function using the Box-Muller transform, which I covered in a previous post, so the code would be self-contained. But in practice, you should always use pre-written and tested functions.

Code

The code can be found here, whereas other MCMC code can be found here and here. The joint probability density is defined in the function pdf_single.

/***********************************************************************
 * Runs a simple Metropolis-Hastings (ie MCMC) algorithm to simulate n
 * jointly distributed random variables with probability density p.
 * For example, in three dimensions:
 * p(x,y,z)=exp(-(x^4+x*y+y^2+y*z+z^4)/s^2)/consNorm, where s>0 and consNorm is a
 * normalization constant. The probability density function is defined in
 * the function pdf_single.
 *
 * NOTE: In practice, the value of the normalization constant is not needed, as it cancels out in the algorithm.
 *
 * NOTE: This code will *create* a local file (see variable strFilename) to store results. It will *overwrite* that file if it already exists.
 *
 * WARNING: This code uses the default C random number generator, which is known for failing various tests of randomness.
 * Strongly recommended to use another generator for purposes beyond simple illustration.
 *
 * Author: H. Paul Keeler, 2024.
 * Website: hpaulkeeler.com
 * Repository: github.com/hpaulkeeler/posts
 *
 ***********************************************************************/

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
#include <stdbool.h>
#include <string.h>

const long double pi = 3.14159265358979323846; // constant pi for generating polar coordinates

// helper function declarations; see below for definitions
static double *unirand(double *randValues, unsigned numbRand);                           // generate  uniform random variables on (0,1)
static double *normrand(double *randValues, unsigned numbRand, double mu, double sigma); // generate normal random variables
static double pdf_single(double *x_input, unsigned numbDim, double *parameters);           // define probability density to be simulated
static double mean_var(double *set_sample, unsigned numbSim, double *varX);              // calculate mean and variance

int main(int argc, char *argv[])
{

    if (argc > 1)
    {
        fprintf(stderr, "This program takes no arguments...\n");
        exit(1);
    }
    else
    {
        char strFilename[] = "MCMCData_ND.csv"; // filename for storing simulated random variates

        // intializes (pseudo)-random number generator
        time_t timeCPU; // use CPU time for seed
        srand((unsigned)time(&timeCPU));
        // srand(42); //to reproduce results

        bool booleWriteData = true; // write data to file
        bool booleStats = true;     // perform simple mean/std stats
        unsigned numbDimMax = 3; //upper bound on number of dimensions for which stats are calculated and printed out

        // simulation parameters
        unsigned numbSim = 1e4;   // number of random variables simulated
        unsigned numbSteps = 200; // number of steps for the Markov process
        double sigma = 2;         // standard deviation for normal random steps

        // probability density parameters
        double s = .5; // scale parameter for distribution to be simulated
        unsigned numbDim = 3;

        // Metropolis-Hastings variables
        // proposal for a new position in the random walk
        double *tRandProposal = (double *)malloc(numbDim * sizeof(double));
        double pdfProposal;      // density for proposed position
        double pdfCurrent;       // density of current position
        double ratioAccept;      // ratio of densities (ie acceptance probability)
        double uRand;            // uniform variable for Bernoulli trial (ie a coin flip)
        // random step (normally distributed)
        double *p_numbNormT = (double *)malloc(1 * sizeof(double));
        // positions of the random walk (ie the simulated random variables after numbSteps)
        double *p_tRand = (double *)malloc(numbDim * numbSim * sizeof(double));

        unsigned i, j, k;     // loop variables
        unsigned indexSimDim; // index for keeping track of two-dimensional data as a one-dimensional array
        // Typically indexSimDim = i * numbDim + k, where i is simulation number and k is dimension number (minus one)

        double *p_tRandCurrent = (double *)malloc(numbDim * sizeof(double));
        (void)unirand(p_tRand, numbDim * numbSim); // random initial values

        for (i = 0; i < numbSim; i++)
        {
            // loop through each random walk instance (or random variable to be simulated)
            for (k = 0; k < numbDim; k++)
            {
                // loop through dimensions
                indexSimDim = i * numbDim + k;
                // update state of random walk / Markov chain
                *(p_tRandCurrent + k) = *(p_tRand + indexSimDim);
            }

            pdfCurrent = pdf_single(p_tRandCurrent, numbDim, &s); // current probability density

            for (j = 0; j < numbSteps; j++)
            {
                // loop through each step of the random walk
                for (k = 0; k < numbDim; k++)
                {
                    // loop through dimensions
                    indexSimDim = i * numbDim + k;
                    (void)normrand(p_numbNormT, 1, 0, sigma);
                    // take a (normally distributed) random step in each dimension
                    *(tRandProposal+k) = *(p_tRand + indexSimDim) + *(p_numbNormT);
                }
                pdfProposal = pdf_single(tRandProposal, numbDim, &s); // proposed probability density

                // acceptance rejection step
                (void)unirand(&uRand, 1);
                ratioAccept = pdfProposal / pdfCurrent;
                if (uRand < ratioAccept)
                {
                    for (k = 0; k < numbDim; k++)
                    {
                        // loop through dimensions
                        indexSimDim = i * numbDim + k;
                        // update state of random walk / Markov chain
                        *(p_tRand + indexSimDim) = tRandProposal[k];
                    }
                    pdfCurrent = pdfProposal;
                }
            }
        }

        free(p_numbNormT);

        if (booleStats)
        {
            // initialize statistics variables (for testing results)
            double *p_AllRand = (double *)malloc(numbSim * sizeof(double));
            double meanTemp = 0;
            double varTemp = 0;
            double stdTemp = 0;
            unsigned numbDimStats = (numbDimMax < numbDim) ? numbDimMax : numbDim; //number of dimensions for which stats are calculated and printed out
            for (k = 0; k < numbDimStats; k++)
            {
                // loop through all the dimensions
                for (i = 0; i < numbSim; i++)
                {
                    // collect variables for dimension k+1
                    indexSimDim = i * numbDim + k;
                    *(p_AllRand + i) = *(p_tRand + indexSimDim);
                }
                meanTemp = mean_var(p_AllRand, numbSim, &varTemp);
                stdTemp = sqrt(varTemp);
                printf("The average of dimension %d random variables is %lf.\n", k + 1, meanTemp);
                printf("The standard deviation of dimension %d random  variables is %lf.\n", k + 1, stdTemp);
            }
            free(p_AllRand);
        }

        if (booleWriteData)
        {
            // print to file
            FILE *outputFile;
            outputFile = fopen(strFilename, "w");

            // create string of spacers (ie commas and newlines)
            char *strSpacer = (char *)malloc((numbDim + 1) * sizeof(char));
            for (k = 0; k < numbDim - 1; k++)
            {
                *(strSpacer + k) = ',';
            }
            strSpacer[numbDim - 1] = '\n';
            strSpacer[numbDim] = '\0';
            for (i = 0; i < numbSim; i++)
            {
                for (k = 0; k < numbDim; k++)
                {
                    indexSimDim = i * numbDim + k;
                    fprintf(outputFile, "%lf%c", *(p_tRand + indexSimDim), strSpacer[k]);
                }
            }

            fclose(outputFile);
            printf("Data printed to file.\n");
        }
        free(tRandProposal);
        free(p_tRandCurrent);
        free(p_tRand);

        return (0);
    }
}

static double pdf_single(double *x_input, unsigned numbDim, double *parameters)
{
    // returns the probability density of a single point inside a simulation window defined below
    
    double pdf_output = 0; //probability density at a single point
    
    // non-zero density square window parameters
    double xMin = -1;
    double xMax = 1;

    // retrieve variables (this example density is defined for three dimensions)
    double x = *(x_input + 0);
    double y = *(x_input + 1);
    double z = *(x_input + 2);
    // retrieve scale parameter
    double s = *(parameters + 0);

    int i;
    //check point is inside simulation window
    bool booleInsideWindow = true;
    for (i = 0; i < numbDim; i++){
        booleInsideWindow = booleInsideWindow &1;
    }

    // define probability density
    if (booleInsideWindow)
    {
        // evaluate probability density
        pdf_output = exp(-((pow(x, 4) + x * y + pow(y, 2) + y * z + pow(z, 4)) / (s * s)));
    }
    else
    {
        pdf_output = 0;
    }
    return pdf_output;
}

static double *normrand(double *randValues, unsigned numbRand, double mu, double sigma)
{
    // simulate pairs of iid normal variables using Box-Muller transform
    // https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform

    double U1, U2, thetaTemp, rhoTemp, Z1, Z2;
    int i = 0;
    while (i < numbRand)
    {
        // simulate variables in polar coordinates (theta, rho)
        (void)unirand(&U1, 1);
        thetaTemp = 2 * pi * U1; // create uniform theta values
        (void)unirand(&U2, 1);
        rhoTemp = sqrt(-2 * log(U2)); // create Rayleigh rho values

        // change to Cartesian coordinates (Z1, Z2)
        Z1 = rhoTemp * cos(thetaTemp);
        Z1 = sigma * Z1 + mu;
        randValues[i] = Z1; // assign first of random variable pair
        i++;
        if (i < numbRand)
        {
            // if more variables are needed, generate second value of random pair
            Z2 = rhoTemp * sin(thetaTemp);
            Z2 = sigma * Z2 + mu;
            randValues[i] = Z2; // assign second of random variable pair
            i++;
        }
        else
        {
            break;
        }
    }
    return randValues;
}

static double *unirand(double *randValues, unsigned numbRand)
{ // simulate numbRand uniform random variables on the unit interval
  // storing them in randValues which must be allocated by the caller
  // with enough space for numbRand doubles

    for (int i = 0; i < numbRand; i++)
    {
        randValues[i] = (double)rand() / RAND_MAX;
    }
    return randValues;
}

static double mean_var(double *set_sample, unsigned numbSim, double *varX)
{
    // mean and variance of set_sample
    int i;
    // initialize statistics variables (for testing results)
    double meanX = 0;
    double meanXSquared = 0;
    double tempX;
    for (i = 0; i < numbSim; i++)
    {
        tempX = *(set_sample + i);
        meanX += tempX / ((double)numbSim);
        meanXSquared += tempX * tempX / ((double)numbSim);
    }

    *varX = meanXSquared - meanX * meanX;
    return meanX;
}

The first MC in MCMC methods

Markov chains form a fundamentally important class of stochastic processes. It would be hard to overstress their importance in probability, statistics, and, more broadly, science and technology. They are indispensable in random simulations, particularly those based on Markov chain Monte Carlo methods. In this post, we’ll have a look at some Markov chain basics needed for such simulation methods.

This is the first part of a series of posts on Markov chain Monte Carlo methods. This post covers the basics of Markov chains, which is the more involved part. The second part will cover Monte Carlo methods. The third part will combine the ideas from the first two parts. Overall, the three posts will sketch the mechanics of Markov chain Monte Carlo (MCMC) methods.

Markov chains vs Markov processes

All Markov chains are Markov processes. Some people use the term Markov chain to refer to discrete-time Markov processes with general state spaces. Other people prefer the term Markov chain for Markov processes with countable state spaces, regardless of whether time is discrete or continuous. In his book Applied Probability and Queues, Asmussen writes:

In this book, we use the terminology that a Markov chain has discrete time and a Markov process has continuous time (the state space may be discrete or general). However, one should note that it is equally common to let “chain” refer to a discrete state space and “process” to a general one (time may be discrete or continuous).

Nevertheless, the first MC in MCMC suggests that the Markov chain Monte Carlo crowd prefers the former sense of Markov chain, given the use of discrete-time Markov processes in their simulations.

Markov the frog

Some writers introduce Markov chains with a mental image of a frog jumping around lily pads scattered over a pond. (Presumably the frog never misses a lily pad.) We assume the frog chooses the next lily pad through some random mechanism. Perhaps the distances between lily pads or their sizes influence the chances that the frog will jump between them.

We further assume that the frog is a bit particular, preferring to jump in certain directions more than others. More precisely, the probability of our frog jumping from a lily pad labelled \(x\) to another labelled \(y\) is \(P(x,y)\). But jumping in the opposite direction happens with probability \(P(y,x)\), which in general is not equal to \(P(x,y)\).

I typically use the term points, but the Markov literature usually says that the Markov chain visits states.

State space

We can interpret a Markov chain, a type of stochastic process, as a collection or sequence of random variables. (I briefly detailed stochastic processes in a previous post.) The values of the random variables are points in some mathematical space \(\mathbb{X}\). This space can be quite abstract, but in practice it’s usually the lattice \(\mathbb{Z}^n\), Euclidean space \(\mathbb{R}^n\), or a subset of one of these two spaces. For our frog example, all the lily pads in the pond form the state space.

We’ll only consider countable Markov chains where the number of points in the state space \(\mathbb{X}\) is countable. Although the results and theory generally hold for more general state spaces, the accompanying work requires more technical mathematics. For finite and countable state spaces, we can use standard probability and matrix knowledge. But when we use uncountable state spaces such as \(\mathbb{R}^n\), we enter the world of measure theory and functional analysis.

I will often write of a point \(x\) in a (state) space \(\mathbb{X}\), but you can also say an element \(x\) of a set \(\mathbb{X}\). Many authors refer to the points or elements as states of the Markov chain. In the frog example, each lily pad is a different state.

Markov property

A discrete-time countable Markov chain is a random process that jumps between points of some countable mathematical space \(\mathbb{X}\) such that, when at point \(x \in \mathbb{X}\), the next position is chosen according to a probability distribution \(P(x,·)\) depending only on \(x\).

More specifically, a sequence of random variables \((X_0, X_1, \dots)\) is a discrete-time Markov chain \(X\) with a countable state space \(\mathbb{X}\) and kernel \(P\) if for all \(x,y \in \mathbb{X}\) and all \(t \geq 1\) satisfying \(\mathbb{P}[X_{t−1}=x_{t-1},\dots,X_0=x_0]>0\), we have

$$ \begin{align}\mathbb{P}[X_{t+1} =y|&X_{t}=x,X_{t−1}=x_{t-1},\dots,X_0=x_0]\\&=\mathbb{P}[X_{t+1} =y|X_t =x]\\&=P(x,y)\,.\end{align}$$

This equation is often called the Markov property.

The Markov property says that the conditional probability of jumping from point \(x\) to \(y\) remains the same, regardless of which points or states \(x_0,x_1,\dots,x_{t-1}\) were previously visited. This is precisely why the kernel \(P\) contains all the information needed to describe the future random evolution of the Markov chain.

We have assumed the probabilities given by \(P\) are fixed, meaning we have described a homogeneous Markov chain.
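
To make the kernel concrete, here is a minimal C sketch of a single step of a homogeneous Markov chain on a small finite state space. The function name and the number of states are my own illustrative choices, and the sketch reuses the default generator rand purely for brevity:

#include <stdlib.h>

#define NUMB_STATES 3

// sample the next state, given current state x and kernel P, by
// inverting the cumulative distribution of the kernel row P(x, .)
static unsigned next_state(unsigned x, double P[NUMB_STATES][NUMB_STATES])
{
    double uRand = (double)rand() / RAND_MAX; // uniform variable on (0,1)
    double cumProb = 0;
    unsigned y;
    for (y = 0; y < NUMB_STATES; y++)
    {
        cumProb += P[x][y]; // running sum of row x
        if (uRand < cumProb)
            return y;
    }
    return NUMB_STATES - 1; // guard against floating-point round-off
}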

Markov kernel

The kernel \(P\) is called the Markov (transition) kernel or probability kernel. Assuming a countable state space \(\mathbb{X}\), we can reference any probability value of the kernel \(P\) with two variables \(x,y\in\mathbb{X}\). If we assume a finite state space \(\mathbb{X}\), then the kernel \(P\) becomes a regular matrix, as taught in linear algebra. An infinite but countable state space gives an infinite matrix \(P\). The rows of the kernel matrix \(P\) must add up to one, because each row is a probability measure.

A more general space, such as Euclidean space \(\mathbb{R}^n\), results in a more general kernel with respect to a suitable measure. In this setting, \(P(x,·)\) is no longer a probability mass function, but a general probability measure.

Initial distribution

At time \(t=0\) we describe the random initial configuration of a Markov process with a probability distribution \(\mu_0\). For a finite or countable Markov chain, this initial distribution \(\mu_0\) corresponds to a probability mass function encoded as a row vector.

Jumping from \(x\) to \(y\)

The probability distribution \(\mu_0\) gives the probability of starting in state (or at point) \(x\in\mathbb{X}\). After one time step, we can write down the probability distribution \(\mu_1\) that gives us the different probabilities of the Markov chain being at different states. At \(n=1\), basic matrix algebra and probability rules give us the matrix equation

$$\mu_1=\mu_0 P\,.$$

By induction, after \(n\) time steps we have the expression

$$\mu_n=\mu_0 P^n\,,$$

where the superscript \(n\) denotes matrix power. We can write the \(n\)-time-step kernel as \(P_{(n)}\), which for a finite Markov chain is given by the matrix equation \(P_{(n)}=P^n\).

Seeing how \(P_{(n)}\) behaves as \(n\) approaches infinity forms part of work that studies the convergence and ergodicity properties of Markov chains. I’ll make these concepts clearer below. But first I’ll give some conditions that are typically needed.
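
To make this more tangible, here is a minimal C sketch that evolves \(\mu_{n+1}=\mu_n P\) for a two-state chain. The kernel values, the initial distribution and the number of steps are made-up numbers purely for illustration:

#include <stdio.h>

int main(void)
{
    double P[2][2] = {{0.9, 0.1}, {0.2, 0.8}}; // a made-up two-state kernel
    double mu[2] = {1.0, 0.0};                 // start in the first state with probability one
    unsigned n, x, y;
    for (n = 0; n < 50; n++)
    {
        double muNext[2] = {0, 0};
        for (x = 0; x < 2; x++)
        {
            for (y = 0; y < 2; y++)
            {
                muNext[y] += mu[x] * P[x][y]; // mu_{n+1} = mu_n P
            }
        }
        mu[0] = muNext[0];
        mu[1] = muNext[1];
    }
    printf("mu_50 = (%lf, %lf)\n", mu[0], mu[1]);
    return 0;
}

Running this, the distribution settles near \((2/3, 1/3)\), which is the stationary distribution of this particular kernel, a concept covered below.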

Regularity conditions

A Markov chain with a countable state space needs some conditions to ensure convergence and ergodicity:

  1. A stationary distribution \(\pi\)
  2. Aperiodicity
  3. Irreducibility
  4. Positive recurrence

The nature of the state space and the kernel will dictate these conditions. These conditions are also not necessarily logically distinct. For example, on a finite state space, you get positive recurrence for free, because an irreducible Markov chain with a finite state space is always positive recurrent.

We now briefly detail these conditions, and in another post I’ll give examples of how the conditions can be met.

Stationary distribution \(\pi\)

It’s possible to encounter a probability distribution \(\pi\) where applying the kernel \(P\) returns the same distribution \(\pi\), meaning

$$ \pi=\pi P\,.$$

This (fixed-point) equation is called the balance equation.

The distribution \(\pi\) is called the stationary, invariant or steady-state distribution. A Markov chain does not need to have a stationary distribution. And if a Markov chain does have one, it may not be unique. Its existence and uniqueness will depend on the Markov kernel \(P\).
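
For a quick example that you can verify by hand, take the two-state kernel

$$P=\begin{pmatrix}1-a & a\\ b & 1-b\end{pmatrix}\,,\quad 0<a,b<1\,,$$

which has the unique stationary distribution

$$\pi=\left(\frac{b}{a+b},\frac{a}{a+b}\right)\,,$$

as multiplying out the balance equation \(\pi=\pi P\) confirms.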

Showing that a unique stationary distribution exists, and that it is possible to reach it with probability one, is the stuff of Markov convergence results. Markov chain Monte Carlo methods hinge upon these results.

Aperiodicity

It is possible for a Markov chain to get trapped in a loop, periodically visiting the same states. The period \(d_x\) of a state \(x\in \mathbb{X}\) is the greatest common divisor of all \(n\) values such that \(P_{(n)}(x,x)>0\). If the period of a point is \(d_x=1\), then we say it’s aperiodic. If every state of a Markov chain is aperiodic, we say it’s an aperiodic Markov chain. For example, a chain on two states that deterministically jumps back and forth between them has period two at both states, so it is not aperiodic.

Aperiodicity means there are no loops to trap the Markov chain. This property is typically needed for convergence results.

Irreducibility

A Markov chain with a countable state space \(\mathbb{X}\) is irreducible if the Markov chain can go from any point \(x\in\mathbb{X}\) to any other point \(y\in\mathbb{X}\) with positive probability in a finite number of time steps. In other words, for all \(x,y\in\mathbb{X}\) there exists a natural number \(s\) (which may depend on \(x\) and \(y\)) such that \(P_{(s)}(x,y)>0\).

Irreducibility ensures that a Markov chain will visit all the states in its state space. This property is also needed for convergence results.

Recurrence

When studying Markov processes, a quantity of interest is how much time it takes to return to a state or point. For a point \(x\in\mathbb{X}\), we define its first return time as

$$ T_x^+=\min\{ t\geq 1: X_t=x\} \,.$$

As the name suggests, this random variable is the number of time steps for the Markov process to return to state \(x\), taking whichever path, conditioned on it starting at \(x\).

We call a state \(x\) recurrent if the probability of its first return time being finite is one, meaning \(\mathbb{P}_x(T_x^+<\infty)=1\). Otherwise the state \(x\) is said to be transient.

Positive recurrence

We can classify different types of recurrence based on the expected value of the first return times. A state \(x\) is called positive recurrent if the expected value of its first return time is finite, meaning \(\mathbb{E}_x(T_x^+)<\infty\). Otherwise a recurrent state \(x\) is called null recurrent.

For a countable Markov chain, if all the states in the state space are (positive) recurrent, so \(\mathbb{E}_x(T_x^+)<\infty\) for all \(x\in\mathbb{X}\), then we say the Markov chain is (positive) recurrent.

Again, the concept of positive recurrence is needed for convergence results.

Ergodicity

We say a countable Markov chain is ergodic if it is irreducible, aperiodic and positive recurrent. (See, for example, Basics of Applied Stochastic Processes by Serfozo (page 26) or Probability Theory and Stochastic Processes by Bremaud (page 262).) Ergodicity allows one to find averages by employing a more general form of the law of large numbers, which Monte Carlo methods rely upon. We stress that definitions of ergodicity vary somewhat, but in general it means that convergence results and laws of large numbers exist.
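
More concretely, if a countable Markov chain \(X\) is ergodic with stationary distribution \(\pi\), then for any function \(f\) satisfying \(\sum_{x\in\mathbb{X}}|f(x)|\pi(x)<\infty\), the sample averages converge almost surely to the stationary average, meaning

$$\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{t=1}^{n}f(X_t)=\sum_{x\in\mathbb{X}}f(x)\pi(x)\,.$$

This is the form of the law of large numbers that Markov chain Monte Carlo methods exploit, as sample averages of the simulated chain estimate averages under \(\pi\).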

Further reading

Any stochastic process book will include a couple of chapters on Markov chains such as:

    • Brémaud – Probability Theory and Stochastic Processes;
    • Serfozo – Basics of Applied Stochastic Processes.

For more details, there are many books dedicated entirely to the subject of Markov chains. For example, introductory books include:

  • Brémaud – Markov Chains, Gibbs Fields, Monte Carlo Simulation and Queues;
  • Levin, Peres, and Wilmer – Markov Chains and Mixing Times;
  • Norris – Markov Chains.

Those books cover Markov chains with countable state spaces. If you want to read about discrete-time Markov chains with general state spaces, try the book:

  • Meyn, Tweedie – Markov Chains and Stochastic Stability.

All the above books have a section on Markov chain Monte Carlo methods, such as the Metropolis-Hastings algorithm or the Gibbs sampler.