Determinantal learning for selecting subsets in wireless networks

We can’t all talk at the same time and still be heard. That’s true for people, and it’s true for wireless networks.

Networks need to schedule when their nodes can transmit and receive data. In other words, one subset of network nodes is chosen to transmit, while another subset is chosen to receive. (In telecommunications language, this is part of what is called the medium access control or MAC layer.)

The classic scheduler, (spatial) Aloha, involves each node flipping a biased coin that comes up heads with probability \(p\). If heads comes up, the node transmits. If not, it must stand ready to receive data. Simple.
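
To make the coin flipping concrete, here is a minimal Python sketch of an Aloha-style transmit/receive split; the node positions, the number of nodes, and the value of \(p\) are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Scatter some nodes uniformly on a unit square (a toy network).
num_nodes = 20
nodes = rng.uniform(0, 1, size=(num_nodes, 2))

# (Spatial) Aloha: each node independently flips a biased coin.
p = 0.3  # probability of transmitting
coin_flips = rng.random(num_nodes) < p

transmitters = nodes[coin_flips]
receivers = nodes[~coin_flips]
print(f"{len(transmitters)} transmitters, {len(receivers)} receivers")
```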

But can we do better?

Determinantal scheduling

A few years ago, a couple of collaborators and I became interested in this problem. To develop an intelligent (or, at least, not stupid) scheduler, we adapted a machine (or statistical) learning framework and published the results in a conference paper:

  • 2020 – Błaszczyszyn, Brochard, and Keeler, Coverage probability in wireless networks with determinantal scheduling.

The hint is in the title. Using a determinantal scheduler, we derived mathematical expressions for the coverage probability. I discussed that paper in an earlier post. I produced the numerical results with code located here and here.

From fermions to generative model

We used a learning framework based on determinantal point processes, which were originally used to model subatomic particles called fermions, such as electrons. A key property is that determinantal points, like fermions, repel each other, making this a random model for diversity, anti-clustering, or negative correlations.

Defined with a kernel matrix and determinants, these random models have convenient mathematical properties. For example, you can both simulate them (exactly) and train (or fit) them with maximum likelihood methods. Using this fermion model, Kulesza and Taskar proposed and pioneered a machine learning framework.
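
As a rough illustration of the exact simulation, here is a short Python sketch of the standard spectral sampling method for a discrete determinantal point process, which goes back to Hough, Krishnapur, Peres, and Virág; the kernel below is an arbitrary toy example, and the code is only a sketch:

```python
import numpy as np

def sample_dpp(K, rng):
    # Exactly sample a determinantal point process with (symmetric)
    # marginal kernel K via the spectral method: keep each eigenvector
    # independently with probability its eigenvalue, then sample items
    # one at a time, projecting out each chosen coordinate.
    eigvals, eigvecs = np.linalg.eigh(K)
    keep = rng.random(len(eigvals)) < eigvals
    V = eigvecs[:, keep]
    sample = []
    while V.shape[1] > 0:
        # Pick an item with probability proportional to the squared
        # row norms of V (they sum to the number of remaining columns).
        probs = np.sum(V**2, axis=1)
        i = rng.choice(len(probs), p=probs / probs.sum())
        sample.append(i)
        # Eliminate the i-th coordinate and re-orthonormalize.
        j = np.argmax(np.abs(V[i, :]))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(sample)

rng = np.random.default_rng(seed=1)
n = 10
A = rng.normal(size=(n, n))
L = A @ A.T                           # a positive semidefinite L-ensemble kernel
K = L @ np.linalg.inv(L + np.eye(n))  # the corresponding marginal kernel
print(sample_dpp(K, rng))
```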

When properly trained (or fitted), determinantal models can act as generative artificial intelligence (or Gen AI) models. In some sense, these point processes are special types of Gibbsian point processes, which are defined with a general energy function called a Hamiltonian. In the field of AI and computer science more broadly, there are many energy-based models, some of which are used for generative models such as Boltzmann machines.

Recent work

A few days ago, I was curious about what was happening in this research area and, by chance, noticed this recently uploaded preprint:

  • 2025 – Tu, Saha, and Dhillon – Determinantal Learning for Subset Selection in Wireless Networks.

I feel more work remains to be done in this area.

Further reading

Wireless networks

Our work was inspired by an earlier paper of ours:

  • 2019 – Błaszczyszyn and Keeler, Determinantal thinning of point processes with network learning applications.

In that paper, which I mentioned in an earlier post, we discussed the potential of using determinantal point processes for scheduling network transmission (or selecting subsets). We pointed out that such a model could be trained on optimized network configurations.

The new work by Tu, Saha, and Dhillon builds on previous work, such as this ambitiously titled paper:

  • 2019 – Saha and Dhillon – Machine Learning meets Stochastic Geometry: Determinantal Subset Selection for Wireless Networks.

This work by Saha and Dhillon was independent of our work on scheduling, but it came out around the same time.

Machine learning

I should stress that none of the above work would have been possible without the pioneering efforts of Kulesza and Taskar in a series of papers, much of which appears in their book:

  • 2012 – Kulesza and Taskar – Determinantal point processes for machine learning, Now Publishers.

You can find a version of the book here.

Quantum-enhanced Markov chain Monte Carlo

The not-so-mathematical journal Nature recently published a paper proposing a new Markov chain Monte Carlo method:

  • 2023 – Layden, Mazzola, Mishmash, Motta, Wocjan, Kim, and Sheldon – Quantum-enhanced Markov chain Monte Carlo.

Appearing earlier as this preprint, the paper’s publication in such a journal is a rare event indeed. This post notes this, as well as the fact that we can already simulate the paper’s test model, the Ising or Potts model, perfectly. (For small instances of the model, we can do this directly; for large instances, we can use coupling from the past, proposed by Propp and Wilson. As for the names: Wilhelm Lenz asked his PhD student Ernst Ising to study the one-dimensional version of the model, while Renfrey Potts studied the generalization and presented it in his PhD thesis.) But this is a quantum algorithm, which is exciting and explains how it can end up in that journal.

The algorithm

The paper’s proposed algorithm adds a quantum mechanical edge or enhancement to the classic Metropolis-Hastings algorithm. (More accurately, it should be called the Metropolis-Rosenbluth-Rosenbluth-Teller-Teller-Hastings algorithm.) As I covered in a recent post, the original algorithm uses a Markov chain defined on some mathematical space. Run on a traditional or classical computer, at each time step the algorithm proposes a random jump and then accepts the proposed jump or not. Owing to the magic of Markov chains, in the long run the algorithm simulates a desired probability distribution.
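
For concreteness, here is a minimal (classical) Metropolis sampler for a small two-dimensional Ising model, the paper’s test model; the lattice size, inverse temperature, and number of steps are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A small 2D Ising model on an n-by-n lattice with periodic boundaries.
n = 16
beta = 0.4  # inverse temperature (illustrative choice)
spins = rng.choice([-1, 1], size=(n, n))

def energy_change(s, i, j):
    # Change in the Ising energy from flipping the spin at (i, j).
    neighbours = (s[(i - 1) % n, j] + s[(i + 1) % n, j]
                  + s[i, (j - 1) % n] + s[i, (j + 1) % n])
    return 2 * s[i, j] * neighbours

for _ in range(100_000):
    # Propose a jump: flip one uniformly chosen spin.
    i, j = rng.integers(0, n, size=2)
    dE = energy_change(spins, i, j)
    # Accept with the Metropolis probability min(1, exp(-beta * dE)).
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1

print("mean magnetization:", spins.mean())
```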

The new quantum version of the algorithm uses a quantum computer to propose the jump, while still using a classical computer to accept the proposal or not. (In the language of my Metropolis-Hastings post, the classical jumper process, a discrete-time Markov chain, is replaced with a quantum mechanical variant.) The quantum jump proposals are driven by a time-independent Hamiltonian, which is a central object in quantum and, in fact, all physics. This leads to a Boltzmann (or Gibbs) probability distribution for the jumping process.

Then, running the quantum part on a quantum computer, the algorithm will hopefully outperform its classical counterpart. The paper nurtures this hope by giving empirical evidence of the algorithm’s convergence speed. The researchers performed the numerical experiments on a 27-qubit quantum processor at IBM using the platform Qiskit.

Quantum is so hot right now

In recent years, researchers have been focusing on such algorithms, which exploit the strangeness and spookiness of quantum mechanics. You will see more and more quantum versions of algorithms that appear in statistics, machine learning, and related fields, as suggested by this survey paper, which also appeared in Nature.

Quantum lite

Sometimes quantum mechanics only loosely inspires algorithms and models. Some of my machine learning work sits in this setting, as it uses determinantal point processes. This kernel-based random model draws direct inspiration from the wave function, a standard object in quantum mechanics. Under suitable simplifying conditions, the model describes the locations of particles known as fermions, such as electrons and protons. Still, it’s fascinating that a quantum physics model inspired an interesting random object that has found applications in spatial statistics and machine learning.

Book manuscript: Random Measures, Point Processes, and Stochastic Geometry

Interested in the theory of point processes? There is the freely available online book manuscript:

  • Baccelli, Błaszczyszyn, and Karray – Random Measures, Point Processes, and Stochastic Geometry.

The authors are established names in the field, having written many papers on using point processes to model wireless networks.

(Disclaimer: I have worked and published with them. I have even co-authored a book with one of them.)

Of course, there are already books on point processes, including the two-volume classic by Daryl Daley and David Vere-Jones. Although that work still serves as a good reference, researchers have produced many new results since its publication. This explains the publication of two more recent books on point processes and their generalization, random measures:

  • Last and Penrose – Lectures on the Poisson process, Cambridge University Press.
  • Kallenberg – Random Measures, Theory and Applications, Springer.

There’s a free online version of the book by Last and Penrose.

(Disclaimer: Günter Last graciously mentioned my name in the acknowledgements for my very minor contribution to the historical section.)

The manuscript by Baccelli, Błaszczyszyn, and Karray covers newer topics and results, with mathematically rigorous proofs. For example, there are interesting results on determinantal point processes, also called fermion point processes, which started off as a model for subatomic particles in seminal work by Odile Macchi and have since found applications in wireless network models and machine learning. (I mentioned these applications in this and this post.)

Despite applications being a strong motivation, the book is not for the mathematically faint-hearted. In the tradition of French probability, there is plenty of rigour, which means mathematical abstraction, analysis, and measure theory.

Update: Mohamed Karray uploaded some lecture slides based on this material. Karray is a researcher at Orange Labs in Paris who has research interests in telecommunications (naturally), as well as machine (or statistical) learning.

Further reading

Classic books

Anyone who wants to learn anything about the Poisson point process must consult this gem of mathematical writing:

  • Kingman – Poisson Processes, Oxford University Press.

The following two volumes were long the standard reference on point processes:

  • Daley and Vere-Jones – An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods, Springer.
  • Daley and Vere-Jones – An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure, Springer.

Unfortunately, some of the methods are dated and, I am told, there are some flaws in some of the mathematical proofs. Still, the work is packed full of results and references, serving as a good starting point.

Recent books

As I mentioned above, these two recent books cover the modern theory of point processes:

  • Last and Penrose – Lectures on the Poisson process, Cambridge University Press.
  • Kallenberg – Random Measures, Theory and Applications, Springer.

Determinantal point processes

The applications of determinantal point processes stem from their tractable mathematical properties, some of which were proved by Shirai and Takahashi in two papers:

  • 2003 – Shirai and Takahashi – Random point fields associated with certain Fredholm determinants I: fermion, Poisson and boson point processes, Journal of Functional Analysis.
  • 2003 – Shirai and Takahashi – Random point fields associated with certain Fredholm determinants II: fermion shifts and their ergodic and Gibbs properties, Annals of Probability.

Another important paper on these point processes is

  • 2006 – Hough, Krishnapur, Peres, and Virág – Determinantal processes and independence, Probability Surveys.

 

Coverage probability in wireless networks with determinantal scheduling

My collaborators and I uploaded a manuscript:

  • Błaszczyszyn, Brochard, and Keeler, Coverage probability in wireless networks with determinantal scheduling.

https://arxiv.org/abs/2006.05038

Details

The paper builds on some previous work of ours that uses a (relatively) new model in machine learning:

  • Błaszczyszyn and Keeler, Determinantal thinning of point processes with network learning applications.

https://arxiv.org/abs/1810.08672

The new machine learning model is based on a special type of point process called a determinantal point process, which was originally called a fermion point process. These are useful point processes, as they exhibit closure properties under certain operations, such as independent thinning.
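
To state the thinning closure property explicitly in, say, the discrete setting: if \(X\) is a determinantal point process with marginal kernel \(K\), and each of its points is kept independently with probability \(p\), then the thinned process \(X_p\) is again determinantal:

\[ X \sim \mathrm{DPP}(K) \quad \Longrightarrow \quad X_p \sim \mathrm{DPP}(pK) \,. \]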

Determinantal point processes are random objects and, when properly trained or fitted, can serve as generative artificial intelligence (or Gen AI) models. In fact, you can interpret these point processes as special Gibbsian point processes, which have at their core a Hamiltonian function. A Hamiltonian is a generalized type of energy function, which is also the basis for other generative (or stochastic) models such as Boltzmann machines.
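
To make the Gibbsian interpretation concrete for a discrete \(L\)-ensemble with kernel matrix \(L\): the probability that the process selects the subset \(S\) is

\[ \mathbb{P}(X = S) = \frac{\det(L_S)}{\det(L + I)} \propto e^{-H(S)}, \qquad H(S) = -\log \det(L_S), \]

where \(L_S\) denotes the submatrix of \(L\) indexed by \(S\), so the Hamiltonian here is a negative log-determinant.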

Kulesza and Taskar introduced and developed the framework for using determinantal point processes for machine learning models.

Code

The MATLAB code for producing the results in the paper can be found here:

https://github.com/hpaulkeeler/detcov_matlab

I also rewrote the MATLAB code in Python:

https://github.com/hpaulkeeler/detcov_python

Further reading

Our work started with this related paper:

  • Błaszczyszyn and Keeler, Determinantal thinning of point processes with network learning applications.

I wrote about this work briefly in an earlier post.

Parallel (or orthogonal, perhaps) to our work is this paper:

  • 2019 – Saha and Dhillon – Machine Learning meets Stochastic Geometry: Determinantal Subset Selection for Wireless Networks.

https://arxiv.org/abs/1905.00504

Update: A manuscript extending the above line of research by Saha and Dhillon was uploaded:

  • 2025 – Tu, Saha and Dhillon – Determinantal Learning for Subset Selection in Wireless Networks.

https://arxiv.org/abs/2503.03151

Determinantal thinning of point processes with network learning applications

My colleague and I uploaded a manuscript:

  • Błaszczyszyn and Keeler, Determinantal thinning of point processes with network learning applications.

https://arxiv.org/abs/1810.08672

Details

The paper uses a (relatively) new modelling framework in machine learning. This framework is based on a special type of point process called a determinantal point process, which is also called a fermion point process. (This particle model draws inspiration from the form of the wave function in quantum mechanics.) Kulesza and Taskar introduced and developed the framework for using determinantal point processes for machine learning models.

The paper covers a type of generative machine learning (or Gen AI) model, owing to its use of randomness to create network outcomes. In fact, you can interpret these point processes as special Gibbsian point processes, which have at their core a Hamiltonian function. A Hamiltonian is a generalized type of energy function, which is also the basis for other generative (or stochastic) models such as Boltzmann machines.

The training method for determinantal point processes is based on the maximum likelihood approach, which gives a convex optimization problem, allowing you to tackle it with your favourite gradient method, such as the BFGS or Adam method.
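
As a rough sketch of what this training looks like, here is some Python using a log-linear quality parametrization with a fixed similarity matrix, a parametrization for which Kulesza and Taskar showed the log-likelihood is concave; the features, similarity matrix, and training subsets below are all hypothetical stand-ins for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=1)
n, d = 12, 3
features = rng.normal(size=(n, d))      # per-item feature vectors (toy data)

# A fixed similarity matrix: a Gaussian kernel of some random locations.
points = rng.uniform(0, 1, size=(n, 2))
dists = np.linalg.norm(points[:, None] - points[None, :], axis=2)
S = np.exp(-(dists / 0.3) ** 2)

# Hypothetical training data: observed subsets of the ground set.
train_subsets = [rng.choice(n, size=4, replace=False) for _ in range(30)]

def neg_log_likelihood(theta):
    # L-ensemble kernel L = diag(q) S diag(q), with log-linear qualities q.
    q = np.exp(features @ theta)
    L = q[:, None] * S * q[None, :]
    ll = sum(np.linalg.slogdet(L[np.ix_(s, s)])[1] for s in train_subsets)
    ll -= len(train_subsets) * np.linalg.slogdet(L + np.eye(n))[1]
    return -ll

result = minimize(neg_log_likelihood, x0=np.zeros(d), method="BFGS")
print("fitted parameters:", result.x)
```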

It turns out that point processes are amenable to studying the signal-to-interference-plus-noise ratio (SINR) — see this post for details. More specifically, we found that these point processes give tractable SINR-based expressions when you use (what I call) the standard SINR model, which I covered in a previous post.
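
As a quick reminder, for a receiver located at, say, the origin, the SINR of a signal from a transmitter at distance \(r\) takes the form

\[ \mathrm{SINR} = \frac{P h \, r^{-\alpha}}{N + \sum_{i} P h_i \, r_i^{-\alpha}} \,, \]

where \(P\) is the transmit power, \(h\) and \(h_i\) are fading random variables, \(\alpha\) is the path-loss exponent, \(N\) is the noise power, and the sum runs over the distances \(r_i\) of the interfering transmitters.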

Code

The MATLAB code for producing the results in the paper can be found here:

https://github.com/hpaulkeeler/DetPoisson_MATLAB

I also translated the MATLAB code into Python:

https://github.com/hpaulkeeler/DetPoisson_Python

Further reading

My collaborators and I later extended this line of work in a separate paper, where we looked at the problem of designing a smart(er) medium access control (MAC) scheduling algorithm:

  • Błaszczyszyn, Brochard, and Keeler, Coverage probability in wireless networks with determinantal scheduling.

https://arxiv.org/abs/2006.05038

I wrote about this work briefly in a later post.

Parallel (or orthogonal, perhaps) to our work is this paper:

  • 2019 – Saha and Dhillon – Machine Learning meets Stochastic Geometry: Determinantal Subset Selection for Wireless Networks.

https://arxiv.org/abs/1905.00504

Update: A manuscript extending the above line of research by Saha and Dhillon was recently uploaded:

  • 2025 – Tu, Saha and Dhillon – Determinantal Learning for Subset Selection in Wireless Networks.

https://arxiv.org/abs/2503.03151