Interested in the theory of point processes? There is the freely available online book manuscript:
Baccelli, Błaszczyszyn, and Karray – Random Measures, Point Processes, and Stochastic Geometry.
The authors are established names in the field, having written many papers on using point processes to model wireless networks.
(Disclaimer: I have worked and published with them. I have even co-authored a book with one of them.)
Of course there are already books on point processes, including the two-volume classic by Daryl Daley and David Vere-Jones. Although that work still serves as a good reference, researchers have produced many new results since its publication. This explains the appearance of two more recent books on point processes and their generalization, random measures:
Last and Penrose – Lectures on the Poisson process, Cambridge University Press.
Kallenberg – Random Measures, Theory and Applications, Springer.
There’s a free online version of the book by Last and Penrose.
(Disclaimer: Günter Last graciously mentioned my name in the acknowledgements for my very minor contribution to the historical section.)
The manuscript by Baccelli, Błaszczyszyn, and Karray covers newer topics and results with mathematically rigorous proofs. For example, there are interesting results on determinantal point processes, also called fermion point processes, which started off as a model for subatomic particles in the seminal work of Odile Macchi but have since found applications in wireless network models and machine learning. (I mentioned these later applications in this and this post.)
Despite applications being a strong motivation, the book is not for the faint of heart in terms of mathematics. In the tradition of French probability, there is plenty of rigour, which means mathematical abstraction, analysis, and measure theory.
Update: Mohamed Karray has uploaded some lecture slides based on these lectures. Karray is a researcher at Orange Labs in Paris with research interests in telecommunications (naturally) as well as machine (or statistical) learning.
Further reading
Classic books
Anyone who wants to learn anything about the Poisson point process must consult this gem of mathematical writing:
Kingman – Poisson Processes, Oxford University Press.
For a long time, the following two volumes were the standard reference on point processes:
Daley and Vere-Jones – An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods, Springer.
Daley and Vere-Jones – An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure, Springer.
Unfortunately, some of the methods are dated and, I am told, there are some flaws in some of the mathematical proofs. Still, the volumes are packed full of results and references, serving as a good starting point.
Recent books
As I mentioned above, these two recent books cover the modern theory of point processes:
Last and Penrose – Lectures on the Poisson process, Cambridge University Press.
Kallenberg – Random Measures, Theory and Applications, Springer.
Determinantal point processes
The applications of determinantal point processes stem from their tractable mathematical properties, some of which were proved by Shirai and Takahashi in the two papers:
2003 – Shirai and Takahashi – Random point fields associated with certain Fredholm determinants I: fermion, Poisson and boson point processes, Journal of Functional Analysis.
2003 – Shirai and Takahashi – Random point fields associated with certain Fredholm determinants II: fermion shifts and their ergodic and Gibbs properties, Annals of Probability.
Another important paper on these point processes is
The Fields Medal is a prize in mathematics awarded every four years to two to four outstanding researchers aged forty or younger. One of the medals this year was awarded to French mathematician Hugo Duminil-Copin, who has solved problems and obtained new results in percolation theory, which lies at the intersection of probability and statistical physics. Here’s a good Quanta article on Duminil-Copin and some of his work.
The Fields Medal committee has been kind to probability researchers in recent years. Previous winners working in probability include Wendelin Werner (2006), Stanislav Smirnov (2010), and Martin Hairer (2014), while other recent winners have also made contributions to probability.
All in all, that’s not too shabby for a discipline that for a long, long time wasn’t considered part of mathematics. (That story deserves a post on its own.)
I work nowhere near Duminil-Copin, but I have used some percolation theory in my work. I will write a couple of posts on percolation theory. Eventually, I may even mention some recent work that my collaborators and I have been working on.
The PyTorch website reveals that I can now do accelerated model training with PyTorch on my Mac (which has one of the new Silicon M series chips):
In collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac. Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.
As the website says, this new PyTorch acceleration capability is due to Metal, which is Apple’s graphics processing (or GPU) technology.
This means I no longer need to use remote machines when training models. I can do it all on my trusty home computer. Let’s check.
The easiest way to check is to first install PyTorch, if you haven’t already. I would use Anaconda:
conda install pytorch torchvision -c pytorch
With PyTorch installed, run Python and then run commands along the following lines, which query PyTorch’s MPS (Metal) backend:
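import torch
print(torch.backends.mps.is_available())   # True if the MPS (Metal) device can be used
print(torch.backends.mps.is_built())       # True if this PyTorch build includes MPS support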
If these commands work without a hitch, you have PyTorch installed. The last two commands should return True, meaning that PyTorch can use your graphics card for (accelerated) calculations.
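To actually use the graphics card, you then create tensors (and models) on the MPS device. A minimal sketch, assuming the checks above returned True:

import torch
device = torch.device("mps")            # Apple silicon GPU via Metal
x = torch.rand(3, 3, device=device)     # tensor created directly on the GPU
print(x.device)                         # should print something like mps:0

The same pattern works for models: call model.to(device) and move your training batches to the same device.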
I have come across posts on this blog at least three or four times:
https://xianblog.wordpress.com/
It happens, I later discovered, to be maintained by Christian P. Robert, a senior research figure in Markov chain Monte Carlo methods and Bayesian statistics.
It’s a good resource for learning the basics of neural networks using the PyTorch library in Python. The focus is on writing and running code. You can even play around with GPUs (graphical processing units) by running the code on Google’s Colab, though that’s usually not needed.
It’s mostly run by a researcher who is a former colleague of mine and who, while I was at Inria, was indirectly the reason I started using PyTorch for my machine learning work.
When researching topics for my work (and for posts), I sometimes stumble upon the same paper more than once for different reasons. One such paper is this one:
2022 – Perez-Nieves and Goodman – Sparse Spiking Gradient Descent.
I love myself a short title. But you wouldn’t guess from the title that the paper is about artificial neural networks, and not just any type of neural network at that. Here’s the abstract:
There is an increasing interest in emulating Spiking Neural Networks (SNNs) on neuromorphic computing devices due to their low energy consumption. Recent advances have allowed training SNNs to a point where they start to compete with traditional Artificial Neural Networks (ANNs) in terms of accuracy, while at the same time being energy efficient when run on neuromorphic hardware. However, the process of training SNNs is still based on dense tensor operations originally developed for ANNs which do not leverage the spatiotemporally sparse nature of SNNs. We present here the first sparse SNN backpropagation algorithm which achieves the same or better accuracy as current state of the art methods while being significantly faster and more memory efficient. We show the effectiveness of our method on real datasets of varying complexity (Fashion-MNIST, Neuromorphic-MNIST and Spiking Heidelberg Digits) achieving a speedup in the backward pass of up to 150x, and 85% more memory efficient, without losing accuracy.
Artificial neural networks get all the glory. They are now everywhere. You can’t open up a newspaper or your laptop without seeing a reference to or being pestered by some agent of artificial intelligence (AI), which usually implies an artificial neural network is working in the background. Despite this, they are far from ideal.
In some sense, mainstream artificial neural networks are rather brain-lite, as they draw only loose inspiration from how brains actually function. These statistical models are mostly linear and continuous, which makes them well-behaved, mathematically (or algorithmically) speaking.
But in terms of the energy and time required for training and use, the carbon-based grey matter is winning. Kids don’t need to read every book, newspaper, and sonnet ever written to master a language. And while understanding these words, our brains aren’t boiling volumes of water with excess heat output.
To make such statistical models more brain-heavy, researchers have proposed neural networks that run on spikes, drawing inspiration from the spiky electrical jolts that run through brain neurons. The proposal is something called a spiking neural network, which is also an artificial neural network, but one that runs on spikes with the aim of being more efficient at learning and doing tasks.
Spiking neural networks are not smooth
The problem is that these spiky models are hard to train (or fit), as the well-behaved property of smoothness vanishes when there’s no continuity: the gradient of a spike is undefined at the jump and zero everywhere else. You cannot simply run the forward pass and then the backward pass to find the gradients of your neural network model, as you do with regular neural networks when doing backpropagation.
Surrogate gradients
Despite this obstacle, there have been some proposals for training these statistical models. The proposals often come down to using a continuous function to approximate the actual function used in the spiking neural network. The approximating function’s gradients are found using standard methods. We can then use these surrogate gradients to infer in which direction we should move to better train (or fit) the model.
I know. It sounds like cheating, using one function to guess something about another function. But there has been some success with this approach.
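To make the idea concrete, here is a minimal sketch of a surrogate gradient in PyTorch, assuming a Heaviside step for the spike and a fast-sigmoid surrogate derivative, a common choice in this literature (the names here are illustrative, not taken from any particular paper):

import torch

class SurrogateSpike(torch.autograd.Function):
    # Forward pass: a hard threshold (Heaviside step) on the membrane potential.
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    # Backward pass: use the derivative of a fast sigmoid, 1/(1 + |v|)^2,
    # in place of the true derivative of the step (a Dirac delta).
    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output / (1.0 + v.abs()) ** 2

spike = SurrogateSpike.apply  # behaves like a differentiable function in autograd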
Sparse Spiking Gradients
The training method proposed by Perez-Nieves and Goodman is a type of surrogate method. Using the leaky integrate-and-fire (LIF) model for neuron firing, they develop an approximation for the gradients of their spiking neural network. A key feature of their approach is that, much like our brains, their model is sparse in the sense that only a small fraction of the neurons is firing at any moment.
Provided the spiking neural network attains a certain degree of sparsity, their sparse spiking gradient descent gives faster, less memory-hungry results.
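For readers unfamiliar with the LIF model, here is a minimal sketch of one discrete-time update step of a LIF neuron layer, assuming a decay constant beta and a unit firing threshold (the parameter values are illustrative, not those of the paper):

import torch

def lif_step(v, input_current, beta=0.9, threshold=1.0):
    # Leak the membrane potential and integrate the input current.
    v = beta * v + input_current
    # Fire a spike wherever the potential crosses the threshold.
    spikes = (v >= threshold).float()
    # Reset (by subtraction) the neurons that fired.
    v = v - spikes * threshold
    return v, spikes

At any given step, most entries of spikes are zero, which is the sparsity that Perez-Nieves and Goodman exploit in the backward pass.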
Perez-Nieves and Goodman support their claims with numerical results, which they obtained by running their method on graphical processing units (GPUs). These ferociously fast video game chips have become the standard hardware for the big number-crunching tasks routinely required when working with models in machine learning and artificial intelligence.
One of the most important stochastic processes is the Poisson stochastic process, often called simply the Poisson process. In a previous post I gave the definition of a stochastic process (also called a random process) alongside some examples of this important random object, including counting processes. The Poisson (stochastic) process is a counting process. This continuous-time stochastic process is a highly studied and used object. It plays a key role in different probability fields, particularly those focused on stochastic processes, such as stochastic calculus (with jumps) and the theories of Markov processes, queueing, point processes (on the real line), and Lévy processes.
The points in time when a Poisson stochastic process increases form a Poisson point process on the real line. In this setting the stochastic process and the point process can be considered two interpretations of the same random object. The Poisson point process is often just called the Poisson process, but a Poisson point process can be defined on more general spaces. In some literature, such as the theory of Lévy processes, a Poisson point process is called a Poisson random measure, differentiating it from the Poisson stochastic process. Due to the connection with the Poisson distribution, the two mathematical objects are named after Siméon Poisson, though he never studied these random objects.
The other important stochastic process is the Wiener process or Brownian (motion) process, which I cover in another post. The Wiener process is arguably the most important stochastic process. I have written that post and the current one with the same structure and style, reflecting and emphasizing the similarities between these two fundamental stochastic processes.
In this post I will give a definition of the homogeneous Poisson process. I will also describe some of its key properties and importance. In future posts I will cover the history and generalizations of this stochastic process.
Definition
In the stochastic processes literature there are different definitions of the Poisson process, depending on the setting, such as the level of mathematical rigour. I give a mathematical definition which captures the main characteristics of this stochastic process.
Definition: Homogeneous Poisson (stochastic) process
An integer-valued stochastic process \(\{N_t:t\geq 0 \}\) defined on a probability space \((\Omega,\mathcal{A},\mathbb{P})\) is a homogeneous Poisson (stochastic) process if it has the following properties:
The initial value of the stochastic process \(\{N_t:t\geq 0 \}\) is zero with probability one, meaning \(P(N_0=0)=1\).
The increment \(N_t-N_s\) is independent of the past, that is, of \(N_u\), where \(0\leq u\leq s\leq t\).
The increment \(N_t-N_s\), where \(0\leq s\leq t\), is a Poisson random variable with mean \(\lambda (t-s)\), where the constant \(\lambda>0\) is the rate of the process.
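Explicitly, the last property says that for \(0\leq s\leq t\),
\[
\mathbb{P}(N_t-N_s=k)=\frac{[\lambda(t-s)]^k}{k!}\,e^{-\lambda(t-s)}, \qquad k=0,1,2,\dots.
\]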
In some literature, the initial value of the stochastic process may not be given. Alternatively, it is simply stated as \(N_0=0\) instead of the more precise (probabilistic) statement given above.
Also, some definitions of this stochastic process include an extra property or two. For example, from the above definition, we can infer that increments of the homogeneous Poisson process are stationary due to the properties of the Poisson distribution. But a definition may include something like the following property, which explicitly states that the increments of this stochastic process are stationary.
For \(0\leq s\leq t\), the increment \(N_t-N_s\) is equal in distribution to \(N_{t-s}\).
The definitions may also describe the continuity of the realizations of the stochastic process, known as sample paths, which we will cover in the next section.
It’s interesting to compare these defining properties with the corresponding ones of the standard Wiener stochastic process. Both stochastic processes build upon infinitely divisible probability distributions. Using this property, Lévy processes generalize these two stochastic processes.
Properties
The definition of the Poisson (stochastic) process means that it has stationary and independent increments. These are arguably the most important properties, as they lead to the great tractability of this stochastic process. The increments are Poisson random variables, implying they can take only non-negative integer values.
The Poisson (stochastic) process exhibits closure properties, meaning that if you apply certain operations, you get another Poisson (stochastic) process. For example, if we sum two independent Poisson processes \(X= \{X_t:t\geq 0 \}\) and \(Y= \{Y_t:t\geq 0 \}\), then the resulting stochastic process \(Z=X+Y = \{X_t+Y_t:t\geq 0 \}\) is also a Poisson (stochastic) process, whose rate is the sum of the rates of \(X\) and \(Y\). Such properties are useful for proving mathematical results.
A single realization of a (homogeneous) Poisson stochastic process, where the blue marks show where the process jumps to the next value. In any finite time interval, there are a finite number of jumps.
Properties such as independence and stationarity of the increments are so-called distributional properties. But the sample paths of this stochastic process are also interesting. A sample path of a Poisson stochastic process is almost surely non-decreasing, being constant except for jumps of size one. (The term almost surely comes from measure theory, but it means with probability one.) There are only finitely many jumps in each finite time interval.
The homogeneous Poisson (stochastic) process has the Markov property, making it an example of a Markov process. The homogeneous Poisson process \(N=\{ N_t\}_{t\geq 0}\) is not a martingale. But interestingly, the stochastic process \(\{ N_t - \lambda t\}_{t\geq 0}\) is a martingale. (Such relations have been used to study these stochastic processes with tools from martingale theory.)
Stochastic or point process?
The Poisson (stochastic) process is a discrete-valued stochastic process in continuous time. The relation between these types of stochastic processes and point processes is a subtle one. For example, David Cox and Valerie Isham write on page 3 of their monograph:
The borderline between point processes and a number of other kinds of stochastic process is not sharply defined. In particular, any stochastic process in continuous time in which the sample paths are step functions, and therefore any process with a discrete state space, is associated with a point process, where a point is a time of transition or, more generally, a time of entry into a pre-assigned state or set of states. Whether it is useful to look at a particular process in this way depends on the purpose of the analysis.
For the Poisson case, this association is presented in the diagram below. We can see the Poisson point process (in red) associated with the Poisson (stochastic) process (in blue) by simply looking at the time points where jumps occur.
A single realization of a (homogeneous) Poisson stochastic process (in blue). The jumps of the process form a (homogeneous) Poisson point process (in red) on the real line representing time.
Importance
Playing a prominent role in the theory of probability, the Poisson (stochastic) process is a highly important and studied stochastic process. It has connections to other stochastic processes and is central in queueing theory and random measures.
The Poisson process is a building block for more complex continuous-time Markov processes with discrete state spaces, which are used as mathematical models. It is also essential in the study of jump processes and subordinators.
The Poisson (stochastic) process is a member of some important families of stochastic processes, including Markov processes, Lévy processes, and birth-death processes. This stochastic process also has many applications. For example, it plays a central role in quantitative finance. It is also used in the physical sciences as well as some branches of social sciences, as a mathematical model for various random phenomena.
Generalizations and modifications
For the Poisson (stochastic) process, the index set and state space are respectively the non-negative real numbers and the counting numbers, that is, \(T=[0,\infty)\) and \(S=\{0, 1, 2, \dots\}\), so it has a continuous index set but a discrete state space. Consequently, changing the state space, index set, or both offers ways of generalizing and modifying the Poisson (stochastic) process.
Simulation
The defining properties of the Poisson stochastic process, namely independence and stationarity of increments, result in it being easy to simulate. The Poisson stochastic process can be simulated provided random variables can be simulated or sampled according to a Poisson distribution, which I have covered in this and this post.
Simulating a Poisson stochastic process is similar to simulating a Poisson point process. (Basically, it is the same method in a one-dimensional setting.) I will leave the details of sampling this stochastic process for another post, but a rough sketch follows.
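As a minimal sketch, assuming a rate \(\lambda\) and a time window \([0,t_{\max}]\): sample the total number of jumps as a Poisson variable, then place the jump times uniformly on the window, which is exactly the one-dimensional point-process method:

import numpy as np

rng = np.random.default_rng()

def simulate_poisson_process(rate, t_max):
    # Total number of jumps in [0, t_max] is Poisson with mean rate*t_max.
    n = rng.poisson(rate * t_max)
    # Given the total, the jump times are independent and uniform on [0, t_max].
    return np.sort(rng.uniform(0.0, t_max, n))

jump_times = simulate_poisson_process(rate=2.0, t_max=10.0)
N_5 = np.searchsorted(jump_times, 5.0, side="right")  # value of N_t at t = 5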
Further reading
A very quick history of the Wiener process and the Poisson (point and stochastic) process is covered in this talk by me.
In terms of books, the Poisson process has not received as much attention as the Wiener process, which is typically just called the Brownian (motion) process. That said, any book covering queueing theory will cover the Poisson (stochastic) process.
More advanced readers can read about the Poisson (stochastic) process, the Wiener (or Brownian (motion)) process, and other Lévy processes:
One of the most important stochastic processes is the Wiener process or Brownian (motion) process. In a previous post I gave the definition of a stochastic process (also called a random process) with some examples of this important random object, including random walks. The Wiener process can be considered a continuous version of the simple random walk. This continuous-time stochastic process is a highly studied and used object. It plays a key role in different probability fields, particularly those focused on stochastic processes, such as stochastic calculus and the theories of Markov processes, martingales, Gaussian processes, and Lévy processes.
The Wiener process is named after Norbert Wiener, but it is also called the Brownian motion process or often just Brownian motion due to its historical connection as a model for Brownian movement in liquids, a physical phenomenon observed by Robert Brown. But the physical process is not truly a Wiener process, which can be treated as an idealized model. I will use the terms Wiener process or Brownian (motion) process to differentiate the stochastic process from the physical phenomenon known as Brownian movement or Brownian process.
The Wiener process is arguably the most important stochastic process. The other important stochastic process is the Poisson (stochastic) process, which I cover in another post. I have written that post and the current one with the same structure and style, reflecting and emphasizing the similarities between these two fundamental stochastic processes.
In this post I will give a definition of the standard Wiener process. I will also describe some of its key properties and importance. In future posts I will cover the history and generalizations of this stochastic process.
Definition
In the stochastic processes literature there are different definitions of the Wiener process, depending on the setting, such as the level of mathematical rigour. I give a mathematical definition which captures the main characteristics of this stochastic process.
Definition: Standard Wiener or Brownian (motion) process
A real-valued stochastic process \(\{W_t:t\geq 0 \}\) defined on a probability space \((\Omega,\mathcal{A},\mathbb{P})\) is a standard Wiener (or Brownian motion) process if it has the following properties:
The initial value of the stochastic process \(\{W_t:t\geq 0 \}\) is zero with probability one, meaning \(P(W_0=0)=1\).
The increment \(W_t-W_s\) is independent of the past, that is, of \(W_u\), where \(0\leq u\leq s\leq t\).
The increment \(W_t-W_s\), where \(0\leq s\leq t\), is a normal random variable with mean \(0\) and variance \(t-s\).
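Explicitly, the last property says that for \(0\leq s\leq t\) the increment has the Gaussian probability density
\[
\mathbb{P}(W_t-W_s\in dx)=\frac{1}{\sqrt{2\pi(t-s)}}\,e^{-x^2/[2(t-s)]}\,dx.
\]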
In some literature, the initial value of the stochastic process may not be given. Alternatively, it is simply stated as \(W_0=0\) instead of the more precise (probabilistic) statement given above.
Also, some definitions of this stochastic process include an extra property or two. For example, from the above definition, we can infer that increments of the standard Wiener process are stationary due to the properties of the normal distribution. But a definition may include something like the following property, which explicitly states that the increments of this stochastic process are stationary.
For \(0\leq s\leq t\), the increment \(W_t-W_s\) is equal in distribution to \(W_{t-s}\).
The definitions may also describe the continuity of the realizations of the stochastic process, known as sample paths, which we will cover in the next section.
It’s interesting to compare these defining properties with the corresponding ones of the homogeneous Poisson stochastic process. Both stochastic processes build upon infinitely divisible probability distributions. Using this property, Lévy processes generalize these two stochastic processes.
Properties
The definition of the Wiener process means that it has stationary and independent increments. These are arguably the most important properties, as they lead to the great tractability of this stochastic process. The increments are normal random variables, implying they can take both positive and negative (real) values.
The Wiener process exhibits closure properties, meaning that if you apply certain operations, you get another Wiener process. For example, if \(W= \{W_t:t\geq 0 \}\) is a Wiener process, then for a scaling constant \(c>0\), the resulting stochastic process \(\{W_{ct}/\sqrt{c}:t \geq 0 \}\) is also a Wiener process. Such properties are useful for proving mathematical results.
Two realizations of a Wiener (or Brownian motion) process. The sample paths are continuous everywhere but differentiable nowhere.
Properties such as independence and stationarity of the increments are so-called distributional properties. But the sample paths of this stochastic process are also interesting. A sample path of a Wiener process is almost surely continuous. (The term almost surely comes from measure theory, but it means with probability one.) Despite the continuity of the sample paths, they are almost surely nowhere differentiable. (Historically, it was a challenge to find any such function, but a classic example is the Weierstrass function.)
The standard Wiener process has the Markov property, making it an example of a Markov process. The standard Wiener process \(W=\{ W_t\}_{t\geq 0}\) is a martingale. Interestingly, the stochastic process \(\{ W_t^2-t\}_{t\geq 0}\) is also a martingale. The Wiener process is a fundamental object in martingale theory.
There are many other properties of the Brownian motion process; see the Further reading section for, well, further reading.
Importance
Playing a main role in the theory of probability, the Wiener process is considered the most important and studied stochastic process. It has connections to other stochastic processes and is central in stochastic calculus and the theory of martingales. Its discovery led to the development of a family of Markov processes known as diffusion processes.
The Wiener process also arises as the mathematical limit of other stochastic processes such as random walks, which is the subject of Donsker’s theorem or invariance principle, also known as the functional central limit theorem.
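For example, if \(X_1, X_2, \dots\) are independent and identically distributed random variables with zero mean and unit variance, then the rescaled random walks
\[
W^{(n)}_t=\frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor nt\rfloor}X_i
\]
converge in distribution (as random functions) to a standard Wiener process as \(n\rightarrow\infty\).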
The Wiener process is a member of some important families of stochastic processes, including Markov processes, Lévy processes, and Gaussian processes. This stochastic process also has many applications. For example, it plays a central role in quantitative finance. It is also used in the physical sciences as well as some branches of social sciences, as a mathematical model for various random phenomena.
Generalizations and modifications
For the Brownian motion process, the index set and state space are respectively the non-negative real numbers and the real numbers, that is, \(T=[0,\infty)\) and \(S=(-\infty,\infty)\), so it has both a continuous index set and a continuous state space. Consequently, changing the state space, index set, or both offers ways of generalizing or modifying the Wiener (stochastic) process.
A single realization of a two-dimensional Wiener (or Brownian motion) process. Each vector component is an independent standard Wiener process.
Simulation
The defining properties of the Wiener process, namely independence and stationarity of increments, result in it being easy to simulate. The Wiener process can be simulated provided random variables can be simulated or sampled according to a normal distribution. The main challenge is that the Wiener process is a continuous-time stochastic process, whereas computer simulations run in a discrete universe.
I will leave the details of sampling this stochastic process for another post, but a rough sketch follows.
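As a minimal sketch, assuming a time step \(\Delta t\) on a window \([0,t_{\max}]\): approximate the process on a grid by cumulatively summing independent normal increments with variance \(\Delta t\):

import numpy as np

rng = np.random.default_rng()

def simulate_wiener_process(t_max, dt):
    n = int(t_max / dt)
    # Independent normal increments with mean 0 and variance dt.
    increments = rng.normal(0.0, np.sqrt(dt), n)
    # The path starts at W_0 = 0 and is the cumulative sum of the increments.
    w = np.concatenate(([0.0], np.cumsum(increments)))
    t = np.linspace(0.0, n * dt, n + 1)
    return t, w

t, w = simulate_wiener_process(t_max=1.0, dt=0.001)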
Further reading
A very quick history of the Wiener process and the Poisson (point) process is covered in this talk by me.
There are books almost entirely dedicated to the subject of the Wiener or Brownian (motion) process, including:
More advanced readers can read about the Wiener process, its discrete-valued cousin the Poisson (stochastic) process, as well as other Lévy processes: