Quantum-enhanced Markov chain Monte Carlo

The not-so-mathematical journal Nature recently published a paper proposing a new Markov chain Monte Carlo method:

  • 2023 – Layden, Mazzola, Mishmash, Motta, Wocjan, Kim, and Sheldon – Quantum-enhanced Markov chain Monte Carlo.

The paper appeared earlier as this preprint, and its publication in such a journal is a rare event indeed. This post notes that, as well as the fact that we can already simulate the paper's test model, the Ising (or Potts) model, perfectly. (For small instances of the model, we can do this directly; for large instances, we can use coupling from the past, proposed by Propp and Wilson. As for the name, Wilhelm Lenz asked his PhD student Ernst Ising to study the one-dimensional version of the model, while Renfrey Potts studied the generalization and presented it in his PhD thesis.) But the new method is a quantum algorithm, which is exciting and explains how it ended up in that journal.

The algorithm

The paper’s proposed algorithm adds a quantum mechanical edge or enhancement to the classic Metropolis-Hastings algorithm. (More accurately, it should be called the Metropolis-Rosenbluth-Rosenbluth-Teller-Teller-Hastings algorithm.) As I covered in a recent post, the original algorithm uses a Markov chain defined on some mathematical space. Running on a traditional or classical computer, at each time step the algorithm proposes a random jump and then accepts or rejects that proposal. Owing to the magic of Markov chains, in the long run the algorithm simulates a desired probability distribution.

The new quantum version of the algorithm uses a quantum computer to propose the jump, while still using a classical computer to accept or reject the proposal. (In the language of my Metropolis-Hastings post, the classical jumper process, a discrete-time Markov chain, is replaced with a quantum mechanical variant.) The quantum jump proposals are driven by a time-independent Hamiltonian, which is a central object in quantum physics and, indeed, in all of physics. This leads to a Boltzmann (or Gibbs) probability distribution for the jumping process.
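To make the classical skeleton concrete, here is a minimal sketch for a small one-dimensional Ising model, with function names and parameter values of my own choosing. The proposal below is a plain single-spin flip; that is precisely the step the quantum-enhanced algorithm would replace with a jump proposed by a quantum processor, while the accept-or-reject step stays classical.

    import numpy as np

    rng = np.random.default_rng(1)

    def ising_energy(spins, J=1.0, h=0.0):
        # Energy of a 1D Ising configuration with periodic boundary conditions.
        return -J * np.sum(spins * np.roll(spins, 1)) - h * np.sum(spins)

    def random_flip_proposal(spins):
        # Classical proposal: flip one uniformly chosen spin.
        # (The quantum-enhanced algorithm draws this proposal from quantum dynamics instead.)
        new = spins.copy()
        new[rng.integers(len(spins))] *= -1
        return new

    def metropolis_step(spins, beta, propose):
        # One step: propose a jump, then accept or reject it classically.
        candidate = propose(spins)
        delta = ising_energy(candidate) - ising_energy(spins)
        if rng.random() < np.exp(-beta * delta):  # always accepts when delta <= 0
            return candidate
        return spins

    # In the long run the chain samples the Boltzmann distribution exp(-beta * E).
    spins = rng.choice([-1, 1], size=10)
    for _ in range(5000):
        spins = metropolis_step(spins, beta=0.5, propose=random_flip_proposal)
    print(spins, ising_energy(spins))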

Then, running the quantum part on a quantum computer, the algorithm will hopefully outperform its classical counterpart. The paper nurtures this hope by giving empirical evidence of the algorithm’s convergence speed. The researchers performed the numerical experiments on a 27-qubit quantum processor at IBM using the platform Qiskit.

Quantum is so hot right now

In recent years researchers have been focusing on algorithms that exploit the strangeness and spookiness of quantum mechanics. You will see more and more quantum versions of algorithms used in statistics, machine learning, and related fields, as suggested by this survey paper, which also appeared in Nature.

Quantum lite

Sometimes quantum mechanics only loosely inspires algorithms and models. In that vein, some of my machine learning work uses determinantal point processes. This kernel-based random model draws direct inspiration from the wave function, a standard object in quantum mechanics. Under suitable simplifying conditions, the model describes the locations of particles known as fermions, such as electrons and protons. Still, it’s fascinating that a quantum physics model inspired an interesting random object that has found applications in spatial statistics and machine learning.

Connectivity in device-to-device networks in Poisson-Voronoi cities

Here’s a recently uploaded manuscript:

  • 2023 – Keeler, Błaszczyszyn, Cali – Connectivity and interference in device-to-device networks in Poisson-Voronoi cities.

https://arxiv.org/abs/2309.02137

This work presents numerical results complementing mathematical work that we have carried out. The work concerns (continuum) percolation results for a special network model based on Poisson-Voronoi tessellations.

The most relevant works are these two papers (the first being somewhat seminal):

  1. Dousse, Franceschetti, Macris, Meester, Thiran, Percolation in the signal to interference ratio graph, 2006.
  2. Le Gall, Błaszczyszyn, Cali, and En-Najjary, Continuum line-of-sight percolation on Poisson-Voronoi tessellations, 2021.

Our work effectively seeks to combine these two papers. We obtain the equivalent results from the first paper by coupling its connectivity model with the connectivity model and network model (based on a Cox point process) presented in the second paper.

If you want a more detailed version, here’s the abstract:

To study the overall connectivity in device-to-device networks in cities, we incorporate a signal-to-interference-plus-noise connectivity model into a Poisson-Voronoi tessellation model representing the streets of a city. Relays are located at crossroads (or street intersections), whereas (user) devices are scattered along streets. Between any two adjacent relays, we assume data can be transmitted either directly between the relays or through users, given they share a common street. Our simulation results reveal that the network connectivity is ensured when the density of users (on the streets) exceeds a certain critical value. But then the network connectivity disappears when the user density exceeds a second critical value. The intuition is that for longer streets, where direct relay-to-relay communication is not possible, users are needed to transmit data between relays, but with too many users the interference becomes too strong, eventually reducing the overall network connectivity. This observation on the user density evokes previous results based on another wireless network model, where transmitter-receivers were scattered across the plane. This effect disappears when interference is removed from the model, giving a variation of the classic Gilbert model and recalling the lesson that neglecting interference in such network models can give overly optimistic results. For physically reasonable model parameters, we show that crowded streets (with more than six users on a typical street) lead to a sudden drop in connectivity. We also give numerical results outlining a relationship between the user density and the strength of any interference reduction techniques.
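To give a rough flavour of the signal-to-interference-plus-noise ratio (SINR) at the heart of the abstract, here is a minimal sketch with an assumed power-law path loss; the function names, positions, and parameter values are illustrative and not taken from the paper.

    import numpy as np

    def path_loss(distance, exponent=3.0):
        # Simple power-law path-loss function (an illustrative choice).
        return distance ** (-exponent)

    def sinr(receiver, transmitter, interferers, power=1.0, noise=1e-6):
        # SINR at the receiver for the signal from the transmitter,
        # treating all other transmitting points as interferers.
        signal = power * path_loss(np.linalg.norm(receiver - transmitter))
        interference = sum(
            power * path_loss(np.linalg.norm(receiver - point))
            for point in interferers
        )
        return signal / (noise + interference)

    # Two relays at the ends of a street with a handful of users in between.
    rng = np.random.default_rng(2)
    relay_a, relay_b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    users = rng.uniform([0.0, -0.01], [1.0, 0.01], size=(5, 2))

    threshold = 0.5  # connection requires the SINR to exceed this value (illustrative)
    value = sinr(relay_b, relay_a, users)
    print(f"SINR = {value:.3f}, connected: {value > threshold}")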

In future posts I’ll detail the above work as well as our more mathematical work on this type of percolation model.

A Fields Medal goes to another percolation researcher

The Fields Medal is a prize awarded every four years to two to four outstanding researchers (forty years old or younger) working in mathematics. One of the medals this year was awarded to the French mathematician Hugo Duminil-Copin, who has solved problems and obtained new results in percolation theory, which lies at the intersection of probability and statistical physics. Here’s a good Quanta article on Duminil-Copin and some of his work.

(The other winners are June Huh, James Maynard, and Maryna Viazovska.)

The Fields Medal people have been kind to probability researchers in recent years. Previous winners working in probability include Wendelin Werner (2006), Stanislav Smirnov (2010), and Martin Hairer (2014), while other recent winners have also made contributions to probability.

All in all, that’s not too shabby for a discipline that for a long, long time wasn’t considered part of mathematics.  (That story deserves a post on its own.)

I work nowhere near Duminil-Copin, but I have used some percolation theory in my work. I will write a couple of posts on percolation theory. Eventually, I may even mention some recent work that my collaborators and I have been working on.

Frozen code

My simulation code has been frozen and buried in Norway. Well, some of my code that I keep on a GitHub repository has become part of a code preservation project. Consequently, beneath my profile it reads:

Arctic Code Vault Contributor

This is part of what is called the GitHub Archive Program. The people behind it aim to preserve large amounts of (open source) code for future generations, thousands and thousands of years from now. But how do they do that?

Well, basically, the good people at GitHub selected many, many, many lines of code and converted them into error-resistant formats, such as QR codes. They then printed it all out and buried it deep in an abandoned mine shaft in frozen Norway. (Frozen, stable Norway is also home to a famous seed bank.)

My code in this project includes most of the code that has appeared in these posts. Of course my contribution is just a drop in the vast code ocean of this project. In fact, at least two or three of my colleagues have also had their code put into deep freeze.

Still, it’s a nice thought that stuff I wrote, including code for these very posts, will potentially be around for a very long time.

Coverage probability in wireless networks with determinantal scheduling

My collaborators and I uploaded a manuscript:

  • Błaszczyszyn, Brochard, and Keeler, Coverage probability in wireless networks with determinantal scheduling.

https://arxiv.org/abs/2006.05038

Details

The paper builds on some previous work of ours that uses a (relatively) new model in machine learning:

  • Błaszczyszyn and Keeler, Determinantal thinning of point processes with network learning applications.

https://arxiv.org/abs/1810.08672

The new machine learning model is based on a special type of point process called a determinantal point process, which was originally called a fermion point process. These point processes are useful because they exhibit closure properties under certain operations, such as independent thinning.

Kulesza and Taskar introduced and developed the framework for using determinantal point processes for machine learning models.

Code

The MATLAB code for producing the results in the paper can be found here:

https://github.com/hpaulkeeler/detcov_matlab

I also rewrote the MATLAB code in Python:

https://github.com/hpaulkeeler/detcov_python

Some remarks regarding “On the Laplace Transform of the Aggregate Discounted Claims with Markovian arrivals”

Google Scholar has requested that I make available a free copy of this published comment, which my former PhD supervisor and I wrote a few years ago:

  • Keeler and Taylor, Some remarks regarding “On the Laplace Transform of the Aggregate Discounted Claims with Markovian arrivals”

OK, Google Scholar, here’s the manuscript that we submitted, which is basically the same as the published version.

The comment and the original paper cover an insurance model (for aggregate claims) that uses a Markov arrival process. The analysis of such stochastic processes relies on matrix theory. But there was a small error in the original paper, where matrix commutativity had been assumed, which clearly does not hold in general. Despite this error, the incorrect solution gave surprisingly accurate answers, so we investigated why that was.
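As a quick illustration of the underlying issue, with matrices of my own choosing rather than those in the paper, matrix multiplication generally does not commute:

    import numpy as np

    # Two arbitrary 2x2 matrices (purely illustrative).
    A = np.array([[1.0, 2.0],
                  [0.0, 1.0]])
    B = np.array([[1.0, 0.0],
                  [3.0, 1.0]])

    print(A @ B)                       # [[7. 2.] [3. 1.]]
    print(B @ A)                       # [[1. 2.] [3. 7.]]
    print(np.allclose(A @ B, B @ A))   # False: A and B do not commute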

Nothing groundbreaking here by us. We were just curious.

Determinantal thinning of point processes with network learning applications

My colleague and I uploaded a manuscript:

  • Błaszczyszyn and Keeler, Determinantal thinning of point processes with network learning applications.

https://arxiv.org/abs/1810.08672

Details

The paper uses a (relatively) new modelling framework in machine learning. This framework is based on a special type of point process called a determinantal point process, which is also called a fermion point process. (This particle model draws inspiration from the form of the wave function in quantum mechanics.) Kulesza and Taskar introduced and developed the framework for using determinantal point processes for machine learning models.
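Here is a minimal sketch of the determinantal thinning idea, under my own illustrative choices: a Gaussian similarity kernel and brute-force sampling over all subsets, which is only feasible for a handful of points. It is not the code from the repositories linked below.

    import itertools
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulate a small Poisson point process on the unit square.
    num_points = rng.poisson(6)
    points = rng.uniform(size=(num_points, 2))

    # Build an L-ensemble kernel from a Gaussian similarity between points,
    # so nearby points are unlikely to be retained together (repulsion).
    sigma = 0.2  # length scale (illustrative value)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    L = np.exp(-dists**2 / (2 * sigma**2))

    # Brute-force sampling for small instances: P(keep subset S) is
    # proportional to det(L_S), the minor of L indexed by S.
    subsets = [s for r in range(num_points + 1)
               for s in itertools.combinations(range(num_points), r)]
    weights = np.array([np.linalg.det(L[np.ix_(s, s)]) if s else 1.0
                        for s in subsets])
    weights = np.clip(weights, 0.0, None)  # guard against tiny negative round-off
    kept = subsets[rng.choice(len(subsets), p=weights / weights.sum())]

    print(f"{num_points} original points, {len(kept)} retained:", kept)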

Code

The MATLAB code for producing the results in the paper can be found here:

https://github.com/hpaulkeeler/DetPoisson_MATLAB

I also rewrote (or translated) the MATLAB code into Python:

https://github.com/hpaulkeeler/DetPoisson_Python

An Improbable Start

This web log or blog, to use the parlance of our times, is a place for me to discuss and explain problems or research ideas that I am working on or just find interesting. The emphasis will be on words over equations, with the aim of giving intuitive explanations of mathematical concepts encountered in applied probability.

Much of my work involves random simulations, which are also called stochastic or Monte Carlo simulations, so I will often post ideas illustrated with simulations. Most of my experience is with MATLAB, so that will be my default programming language, but I also have experience with the statistics-focused language R, which arguably has the best spatial statistics package going around, namely spatstat. I also use Python coupled with appropriate libraries such as NumPy, especially for machine learning work.

I am considering learning other languages to do random simulation work. A possible candidate here is the relatively new Julia language, although nothing seems to compete against MATLAB in terms of user friendliness.

Feel free to contact me for questions or to point out mistakes. I do appreciate it. I may even reply.

Block arrivals in the Bitcoin blockchain

My collaborators and I recently uploaded a preprint:

  • Bowden, Keeler, Krzesinski, Taylor, Block arrivals in the Bitcoin blockchain, 2018.

The hint is in the title. In this work we study the arrivals of blocks in the Bitcoin blockchain by using both empirical (statistical) methods and mathematical (stochastic) models. To do this, we set up Bitcoin nodes around the world and collected blockchain mining data.

One of the main observations is that, according to standard tests, Bitcoin blocks do not arrive according to a Poisson stochastic process. The Poisson assumption, which seems very reasonable, appeared in the original white paper by the pseudonymous Satoshi Nakamoto:

  • Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, 2008.
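Under a homogeneous Poisson process, the times between arrivals are independent exponential variables, so one standard sanity check is to compare the observed inter-arrival times with an exponential distribution. Here is a minimal sketch of that idea on synthetic timestamps; it uses neither our data nor the exact tests carried out in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Synthetic block timestamps in minutes; real timestamps would come from the blockchain.
    mean_interval = 10.0
    block_times = np.cumsum(rng.exponential(mean_interval, size=2000))

    # Under a homogeneous Poisson process the inter-arrival times are exponential.
    intervals = np.diff(block_times)
    result = stats.kstest(intervals, "expon", args=(0, intervals.mean()))
    print(f"KS statistic = {result.statistic:.4f}, p-value = {result.pvalue:.3f}")
    # A small p-value would be evidence against the (homogeneous) Poisson assumption.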

Going back to our newly uploaded work, here’s the abstract giving some more details:

Bitcoin is an electronic payment system where payment transactions are verified and stored in a data structure called the blockchain. Bitcoin miners work individually to solve a computationally intensive problem, and with each solution a Bitcoin block is generated, resulting in a new arrival to the blockchain. The difficulty of the computational problem is updated every 2,016 blocks in order to control the rate at which blocks are generated. In the original Bitcoin paper, it was suggested that the blockchain arrivals occur according to a homogeneous Poisson process. Based on blockchain block arrival data and stochastic analysis of the block arrival process, we demonstrate that this is not the case. We present a refined mathematical model for block arrivals, focusing on both the block arrivals during a period of constant difficulty and how the difficulty level evolves over time.

This new work is a spiritual sequel to a paper that some of us published a couple of years ago:

  • Göbel, Keeler, Krzesinski, Taylor, Bitcoin Blockchain Dynamics: the Selfish-Mine Strategy in the Presence of Propagation Delay, 2016.

A preprint of this paper can be found here.