Research

Profiles

Google Scholar profile
arXiv profile
ORCID profile

Code repositories

GitHub profile
MathWorks (MATLAB) profile

Themes

I describe here a couple of the themes that my work has covered.

Statistical learning with determinantal point processes

Determinantal point processes are point processes with repulsion between the points. In other words, the points exhibit negative correlations, so finding a point in one location reduces the chances of finding another nearby. The name stems from the determinant appearing in the definition of these point processes. More specifically, a determinant is used to define the factorial moment density (or correlation function) of a determinantal point process, which describes the probability of finding points in certain locations. Motivated by the study of particles, Macchi originally introduced these point processes to model fermions such as electrons, which is why they were first known as fermion processes.
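
To sketch the defining property (in standard notation, not taken from any particular reference): a point process is determinantal with kernel K if its n-th correlation function (factorial moment density) satisfies

\[
\rho^{(n)}(x_1,\dots,x_n) \;=\; \det\big[K(x_i,x_j)\big]_{1\le i,j\le n}\,.
\]

For example, for a Hermitian kernel the pair correlation is \(\rho^{(2)}(x,y) = K(x,x)K(y,y) - |K(x,y)|^{2} \le \rho^{(1)}(x)\,\rho^{(1)}(y)\), which expresses the negative correlation (repulsion) mentioned above.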

Some years after their introduction, mathematicians started deriving interesting results for determinantal point processes by using tools from probability theory and functional analysis. For example, these point processes are closed under Palm distributions as well as under random (independent) thinning. They can also arise as the eigenvalues of certain random matrices. These results piqued the interest of the wider mathematical community, and the point processes have since found applications in other fields such as spatial statistics and statistical or machine learning.
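
As a quick illustration of the random-matrix connection (a minimal simulation sketch of my own, with an arbitrary matrix size and seed): the eigenvalues of a matrix from the Gaussian unitary ensemble form a determinantal point process, and their repulsion shows up as a much larger smallest spacing than that of independent points on the same interval.

import numpy as np

rng = np.random.default_rng(seed=1)
n = 500

# Sample a GUE matrix: Hermitian, with complex Gaussian entries.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
eigenvalues = np.linalg.eigvalsh(H)   # returned in ascending order

# For comparison: the same number of independent uniform points on the same interval.
independent = np.sort(rng.uniform(eigenvalues.min(), eigenvalues.max(), size=n))

print("smallest gap, GUE eigenvalues   :", np.diff(eigenvalues).min())
print("smallest gap, independent points:", np.diff(independent).min())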

In the context of machine learning, what’s interesting about these point processes is that they are defined with positive semidefinite kernels. Another random object defined by such kernels is the Gaussian process, which researchers have successfully applied to problems in machine learning. The machine learning researchers Kulesza and Taskar saw the opportunities that such kernel-based point processes offered when defined on discrete mathematical spaces, and they pioneered the use of determinantal point processes in machine learning. The kernels allow one to specify both the quality and the diversity (or negative correlation) of points in some abstract discrete space.
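
As a concrete sketch (my own illustrative code, not taken from any particular library), here is the standard spectral algorithm for sampling a discrete determinantal point process defined through an L-ensemble kernel, in the style popularized by Kulesza and Taskar; the Gaussian similarity kernel in the example at the end is an arbitrary choice.

import numpy as np

def sample_dpp(L, rng):
    """Draw one sample (a set of indices) from the DPP with L-ensemble kernel L."""
    eigenvalues, eigenvectors = np.linalg.eigh(L)

    # Phase 1: keep each eigenvector independently with probability lambda / (1 + lambda).
    keep = rng.random(len(eigenvalues)) < eigenvalues / (1.0 + eigenvalues)
    V = eigenvectors[:, keep]          # orthonormal columns spanning the selected subspace

    # Phase 2: pick items one by one; the sample size equals the number of columns of V.
    sample = []
    while V.shape[1] > 0:
        # Choose item i with probability proportional to the squared row norms of V.
        probabilities = np.sum(V**2, axis=1)
        probabilities /= probabilities.sum()
        i = rng.choice(len(probabilities), p=probabilities)
        sample.append(i)

        # Restrict the span of V to the subspace orthogonal to the i-th coordinate axis,
        # then re-orthonormalize; this removes one dimension.
        j = np.argmax(np.abs(V[i, :]))            # a column with a nonzero entry in row i
        Vj = V[:, j]
        V = np.delete(V, j, axis=1)
        V = V - np.outer(Vj, V[i, :] / Vj[i])     # zero out row i in the remaining columns
        V, _ = np.linalg.qr(V) if V.shape[1] > 0 else (V, None)
    return sorted(sample)

# Example: a Gaussian similarity kernel on ten points of a line; nearby points
# (near-identical columns of L) are unlikely to be selected together.
rng = np.random.default_rng(seed=1)
x = np.linspace(0, 1, 10)
L = np.exp(-(x[:, None] - x[None, :])**2 / 0.05)
print(sample_dpp(L, rng))

Running this on the example kernel tends to return well-spread subsets of the ten points, which is the diversity effect described above.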

My colleagues and I examined how these point processes can be applied to wireless networks such as (cellular) phone networks. We derived some basic results and used the point processes to model wireless networks and to design a network scheduling algorithm.

Wireless signals appear Poisson

Under the standard statistical propagation model, researchers have mathematically shown that wireless networks can appear (in terms of received signal powers) to any single observer as an inhomogeneous Poisson point process on the positive half-line, provided there are sufficiently random propagation effects, such as multi-path fading or shadowing. In other words, the signal powers form a point process on the positive real line which is statistically close to a Poisson point process. This has been proven for a quite general propagation model with independent propagation effects and, more recently, with correlated shadowing. Through the Poisson mapping theorem, these results imply that transmitter layouts that do not appear Poisson, such as lattices or other configurations, can still be modelled as realizations of Poisson processes, because a Poisson network with matching intensity measure would produce the same inhomogeneous Poisson point process on the positive real line.
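
To make this concrete under one common set of assumptions (a power law path loss \(\ell(r) = (\kappa r)^{\beta}\) with \(\beta > 2\), iid propagation effects \(S_i\) with \(\mathbb{E}[S^{2/\beta}] < \infty\), and transmitter density \(\lambda\)): the propagation losses \(L_i = \ell(|X_i|)/S_i\) observed at the origin of a Poisson network, whose reciprocals are (up to the transmit power) the received signal powers, form, by the mapping theorem, a Poisson process on the positive half-line with intensity measure

\[
\Lambda\big((0,t]\big) \;=\; \frac{\lambda \pi \, \mathbb{E}\!\left[S^{2/\beta}\right]}{\kappa^{2}}\, t^{2/\beta} ,
\]

which depends on the transmitter configuration only through its density \(\lambda\). The results above say, roughly, that sufficiently random propagation effects push non-Poisson configurations of the same density towards this same limiting process.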

The source of randomness that makes a non-Poisson network appear (or behave stochastically) more Poisson is the random propagation effects: fading, shadowing, randomly varying antenna gains, and so on, or some combination of these. These Poisson results were originally derived for independent log-normal shadowing, but they were later greatly extended to a propagation model with a general path loss function and (sufficiently large and independent) propagation effects, such as Rayleigh or Nakagami fading; the Poisson behaviour is also more pronounced for the stronger signals. In the case of log-normal shadowing and a power law path loss function, the Poisson behaviour still holds when there is correlation between the random propagation effects. More specifically, signals can still appear Poisson when the propagation effects are modelled with a correlated Gaussian field.
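
Below is a minimal simulation sketch (my own code, with illustrative values for the path loss constant, path loss exponent and shadowing strength, not tied to any specific paper). It places transmitters on a unit-density square lattice, applies strong iid log-normal shadowing, and compares the distribution of the smallest propagation loss (the strongest signal) at the origin with the prediction for a Poisson network of the same density.

import numpy as np

rng = np.random.default_rng(seed=1)
beta, kappa, sigma = 4.0, 1.0, 3.0   # path loss exponent, path loss constant, shadowing std (nats)
half_width = 30                      # lattice spans [-30, 30] x [-30, 30] at unit density

# Square lattice of unit density; the origin itself is excluded.
grid = np.arange(-half_width, half_width + 1)
xx, yy = np.meshgrid(grid, grid)
distances = np.sqrt(xx**2 + yy**2).ravel()
distances = distances[distances > 0]

num_trials = 2000
smallest_loss = np.empty(num_trials)
for trial in range(num_trials):
    # Log-normal shadowing normalised so that E[S] = 1.
    S = np.exp(sigma * rng.normal(size=distances.size) - sigma**2 / 2)
    losses = (kappa * distances)**beta / S    # propagation losses seen at the origin
    smallest_loss[trial] = losses.min()

# Poisson-network prediction: P(smallest loss > t) = exp(-a * t^(2/beta)),
# with a = lambda * pi * E[S^(2/beta)] / kappa^2 and lattice density lambda = 1.
p = 2 / beta
a = np.pi * np.exp(0.5 * sigma**2 * p * (p - 1)) / kappa**2

# Evaluate at the quartiles of the Poisson-model distribution of the smallest loss.
q = np.array([0.75, 0.5, 0.25])
t = (-np.log(q) / a)**(beta / 2)
print("Poisson prediction P(smallest loss > t):", list(q))
print("empirical frequency                    :", [(smallest_loss > ti).mean() for ti in t])

With the shadowing standard deviation set around 13 dB (sigma = 3 in nats), the empirical frequencies typically land close to the Poisson predictions; reducing sigma makes the lattice structure visible again.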

Coverage probability

For mobile (or cellular) phone networks and other wireless networks, a very popular stochastic geometry model consists of base stations (that is, transmitters) positioned according to a Poisson process, a simple power law as the path loss function, and iid random variables as the random propagation effects such as fading and shadowing. Borrowing an expression from physics, this Poisson model could be called the “standard model” due to its popular use. Researchers have used it to derive closed-form expressions, sometimes with surprisingly simple forms, for the probability distribution of the downlink signal-to-interference ratio (SIR), which gives the probability of a user being covered in the network. These coverage probability expressions have been derived under various model assumptions, such as whether a user connects to the closest base station or to the base station with the strongest signal.
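
One well-known example of such a closed-form result, derived by Andrews, Baccelli and Ganti under the assumptions of Rayleigh fading, negligible noise, a singular power law path loss with exponent \(\beta > 2\), and a user served by its nearest base station, is the downlink coverage probability

\[
\mathbb{P}(\mathrm{SIR} > \theta) \;=\; \frac{1}{1 + \rho(\theta,\beta)},
\qquad
\rho(\theta,\beta) \;=\; \theta^{2/\beta} \int_{\theta^{-2/\beta}}^{\infty} \frac{\mathrm{d}u}{1 + u^{\beta/2}} ,
\]

which, notably, does not depend on the density of base stations; for \(\beta = 4\) it reduces to \(1/\big(1 + \sqrt{\theta}\,(\pi/2 - \arctan(1/\sqrt{\theta}))\big)\).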

Under the standard model, an expression was derived for the probability that a user can connect to at least k base stations in a single-tier Poisson network. Using point process techniques, these k-coverage results were extended to multi-tier Poisson network models, which are often used to model heterogeneous networks. The results can also be used to calculate coverage probabilities under certain signal management schemes, such as successive interference cancellation, thus generalizing previous results based on point process theory.

The main idea behind the k-coverage results is to use the Schuette-Nesbitt formula and to recognize that the symmetric sums are a simple function of the factorial moment measures of the SINR process.
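
In symbols (my notation): writing \(N\) for the number of base stations whose SINR at the user exceeds the threshold \(\theta\), and \(S_n\) for the \(n\)-th symmetric sum, the formula gives

\[
\mathbb{P}(N \ge k) \;=\; \sum_{n \ge k} (-1)^{n-k} \binom{n-1}{k-1} S_n ,
\qquad
S_n \;=\; \mathbb{E}\!\binom{N}{n} \;=\; \frac{1}{n!}\, M^{(n)}\big((\theta,\infty)^n\big),
\]

where \(M^{(n)}\) denotes the \(n\)-th factorial moment measure of the point process formed by the SINR values. The sum is in fact finite, because only a bounded number of signals can simultaneously exceed a positive SINR threshold.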

The STIR process is a Poisson-Dirichlet process

Some of the aforementioned results in the engineering literature, namely the probability expressions for SIR-based coverage, had essentially been discovered years earlier in the field of probability, as pointed out in this short article. Under the Poisson network model with a (singular) power law path loss function, the SINR process of a single user is a simple transform of the STIR (signal-to-total-interference ratio) process, which is an example of a Poisson-Dirichlet process.

The two-parameter Poisson-Dirichlet process, now often called the Pitman-Yor process, gives the STIR process when one of its two parameters is set to zero. In other words, the STIR process is a special case. Note that there are two different point processes both called the Poisson-Dirichlet process. The much better-known one, which is not the STIR process, was introduced by Kingman and is covered in Chapter 9 of his great monograph Poisson Processes. These two point processes are the two one-parameter special cases of the Pitman-Yor process.
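
To spell out the correspondence (in my notation, under the usual singular power law path loss with exponent \(\beta > 2\)): if \(P_1 \ge P_2 \ge \dots\) are the ordered received signal powers in the Poisson network model, then

\[
\mathrm{STIR}_i \;=\; \frac{P_i}{\sum_{j} P_j},
\qquad
\mathrm{SIR}_i \;=\; \frac{P_i}{\sum_{j \ne i} P_j} \;=\; \frac{\mathrm{STIR}_i}{1 - \mathrm{STIR}_i},
\]

and the sequence \((\mathrm{STIR}_i)\) is distributed as the two-parameter Poisson-Dirichlet process \(\mathrm{PD}(\alpha, \theta)\) with \(\alpha = 2/\beta\) and \(\theta = 0\); Kingman's process corresponds to the other one-parameter family, \(\mathrm{PD}(0, \theta)\).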

The above special case of the Poisson-Dirichlet point process (that is, the STIR process) is also the point process used in Ruelle’s energy model, which is studied in relation to the Sherrington-Kirkpatrick spin glass model; see Chapter 2 of Panchenko (Springer Verlag, 2013) or Chapter 4 of Contucci and Giardinà (Cambridge University Press, 2013).

The Pitman-Yor process and related processes, such as the Dirichlet process, are used as (prior) models in Bayesian statistics. They are appealing as such models because they can be represented as stick-breaking models.
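
As a small sketch of what that means (my own illustrative code; the parameter values in the example are arbitrary), the stick-breaking construction generates the random weights of a Pitman-Yor process by repeatedly breaking off Beta-distributed fractions of a unit-length stick:

import numpy as np

def pitman_yor_weights(alpha, theta, num_weights, rng):
    """Return the first num_weights stick-breaking weights of a PY(alpha, theta) process."""
    weights = np.empty(num_weights)
    remaining_stick = 1.0
    for k in range(num_weights):
        # Break off a Beta(1 - alpha, theta + (k + 1) * alpha) fraction of what remains.
        fraction = rng.beta(1 - alpha, theta + (k + 1) * alpha)
        weights[k] = remaining_stick * fraction
        remaining_stick *= 1 - fraction
    # Note: stick-breaking produces the weights in size-biased rather than ranked order;
    # sorting them in decreasing order gives the ranked (Poisson-Dirichlet) arrangement.
    # Setting alpha = 0 gives the Dirichlet process; theta = 0 gives the PD(alpha, 0) case above.
    return weights

rng = np.random.default_rng(seed=1)
print(pitman_yor_weights(alpha=0.5, theta=1.0, num_weights=10, rng=rng))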

Academic background

I originally studied physics and electronic engineering. I completed my PhD in applied probability at the University of Melbourne under the supervision of Peter G. Taylor.