Sparse Spiking Gradient Descent

A preprint caught my eye:

  • 2022 – Perez-Nieves and Goodman – Sparse Spiking Gradient Descent.

I do love a short title. But you wouldn't guess from the title that the paper is about artificial neural networks, and not just any type of neural network at that. Here's the abstract:

There is an increasing interest in emulating Spiking Neural Networks (SNNs) on neuromorphic computing devices due to their low energy consumption. Recent advances have allowed training SNNs to a point where they start to compete with traditional Artificial Neural Networks (ANNs) in terms of accuracy, while at the same time being energy efficient when run on neuromorphic hardware. However, the process of training SNNs is still based on dense tensor operations originally developed for ANNs which do not leverage the spatiotemporally sparse nature of SNNs. We present here the first sparse SNN backpropagation algorithm which achieves the same or better accuracy as current state of the art methods while being significantly faster and more memory efficient. We show the effectiveness of our method on real datasets of varying complexity (Fashion-MNIST, Neuromorphic-MNIST and Spiking Heidelberg Digits) achieving a speedup in the backward pass of up to 150x, and 85% more memory efficient, without losing accuracy.

OK. I’ll bite. So what’s a spiking neural network? And why should we care?

Improving on artificial neural networks

Artificial neural networks get all the glory. They are now everywhere. You can’t open up a newspaper or your laptop without seeing a reference to or being pestered by some agent of artificial intelligence (AI), which usually implies an artificial neural network is working in the background. Despite this, they are far from ideal.

In some sense, mainstream artificial neural networks are rather brain-lite, as they only loosely draw inspiration from how brains actually function. These statistical models are mostly linear and continuous, which makes them well behaved, mathematically (or algorithmically) speaking.

But in terms of the energy and time required for training and use, the carbon-based grey matter is winning. Kids don't need to read every book, newspaper and sonnet ever written to master a language. And as you understand these words, your brain isn't boiling volumes of water with its excess heat output.

To make such statistical models more brain-heavy, researchers have proposed neural networks that run on spikes, drawing inspiration from the spiky electrical jolts that course through brain neurons. The result is something called a spiking neural network, which is still an artificial neural network, but one that runs on spikes with the aim of being more efficient at learning and doing tasks.

Spiking neural networks are not smooth

The problem is that these spiky models are hard to train (or fit): the well-behaved smoothness vanishes because spikes are discontinuous. You cannot simply run the forward pass and then the backward pass to find the gradients of your neural network model, as you do with regular neural networks when doing backpropagation.

Surrogate gradients

Despite this obstacle, there have been some proposals for training these statistical models. They often come down to choosing a continuous function that approximates the actual discontinuous function used in the spiking neural network. The gradients of this stand-in function can be found using standard methods. We can then use these surrogate gradients to infer in which direction we should move to better train (or fit) the model.

I know. It sounds like cheating, using one function to guess something about another function. But there has been some success with this approach.
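
To make the trick concrete, here is a minimal sketch of the general surrogate-gradient idea in PyTorch. This is my own toy illustration, not the method from the paper: the forward pass uses a hard spike, while the backward pass substitutes the derivative of a sigmoid.

    import torch

    class SurrogateSpike(torch.autograd.Function):
        """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

        @staticmethod
        def forward(ctx, membrane_potential):
            ctx.save_for_backward(membrane_potential)
            # Emit a spike (1.0) wherever the membrane potential crosses zero.
            return (membrane_potential > 0.0).float()

        @staticmethod
        def backward(ctx, grad_output):
            (membrane_potential,) = ctx.saved_tensors
            # Pretend the spike was a sigmoid: use the sigmoid's derivative as a
            # stand-in for the step function's derivative (zero almost everywhere).
            sig = torch.sigmoid(membrane_potential)
            return grad_output * sig * (1.0 - sig)

    spike_fn = SurrogateSpike.apply

    # Gradients now flow through the otherwise non-differentiable spike.
    v = torch.randn(5, requires_grad=True)
    spike_fn(v).sum().backward()
    print(v.grad)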

Sparse Spiking Gradients

The training method proposed by Perez-Nieves and Goodman is a type of surrogate method. Using the leaky integrate-and-fire (LIF) model of neuron firing, they develop an approximation for the gradients of their spiking neural network. A key feature of their approach is that, much like our brains, their model is sparse in the sense that only a small fraction of neurons are ever firing at any given time.

Provided the spiking neural network attains a certain degree of sparsity, their sparse spiking gradient descent gives faster, less memory-hungry results.
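
To get a feel for why sparsity helps, here is a toy NumPy sketch of the general idea (my own illustration, not the authors' algorithm): if the surrogate derivative is nonzero only for the few neurons whose membrane potential is near threshold, the weight-gradient computation only needs to touch those neurons' rows.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 1000, 1000
    x = rng.standard_normal(n_in)           # inputs arriving at the layer
    v = rng.standard_normal(n_out)          # membrane potentials of the layer's neurons
    grad_out = rng.standard_normal(n_out)   # gradient coming back from the layer above

    # A crude box-shaped surrogate derivative: nonzero only near the threshold (0).
    surrogate = (np.abs(v) < 0.1).astype(float)

    # Dense backward pass: computes a full outer product, even though most rows
    # end up multiplied by zero.
    dW_dense = np.outer(grad_out * surrogate, x)

    # Sparse backward pass: only touch the rows of the few "active" neurons.
    active = np.flatnonzero(surrogate)
    dW_sparse = np.zeros((n_out, n_in))
    dW_sparse[active] = np.outer(grad_out[active], x)

    assert np.allclose(dW_dense, dW_sparse)
    print(f"active neurons: {active.size} of {n_out}")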

Perez-Nieves and Goodman support their claims with numerical results, obtained by running their method on graphics processing units (GPUs). These ferociously fast video game chips have become the standard hardware for the big number-crunching tasks routinely required when working with models in machine learning and artificial intelligence.

Spiking neural networks

To say that artificial neural networks are having their day in the sun is an understatement. Commonly called just neural networks, these statistical models are everywhere: you can't glance at a news source without seeing some reference to artificial intelligence (AI), a term that has become practically synonymous with them.

(I hasten to add that neural networks and artificial intelligence are distinct things. Originally, the latter was a target, in the same way that 5G is a target for telecommunications: a goal you can reach by whichever means work. Someone may say they are using artificial intelligence, but the models in use may be other statistical or machine learning models, of which there are plenty. At any rate, it now seems artificial neural networks are ahead of the pack.)

From performing language and image tasks to generating and composing artistic (or stochastic?) works, neural networks are impressive in what they can do.

But can these statistical models be better? And do they really work like the brain?

Brain-lite neural networks

Artificial neural networks, dreamt up in the 1940s and 1950s, were inspired by how brains work. But they only drew inspiration from the firing of neurons in brains. These statistical models behave quite differently from the grey matter that forms our brains, as we can see from how differently they perform in terms of learning speed and energy usage.

Artificial neural networks require a lot of material to learn anything. The amount of data needed to train a state-of-the-art artificial neural network in, say, basic natural language skills is equivalent to a human spending thousands of years being exposed to language. Clearly we are the faster learners.

These statistical models also consume vast amounts of energy for training and running. Our brains just fire along, doing incredibly impressive and diverse things day after day, while using energy at a rate too low to boil a cup of water.

So the question arises: Why don’t we make artificial neural networks more like our brains? Well, historically, the short answer is that these statistical models are nice.

What’s nice about artificial neural networks?

Typically, artificial neural networks are built to have two convenient properties: linearity and continuity.

Linearity

Mathematically, linearity is great because it means you can pull things apart, do stuff, and then put them back together without losing or gaining anything. In short, things add up. Matrices are linear operators, and neural networks are essentially just a series of very large matrices coupled together.
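
In code, that "things add up" property of a weight matrix looks like this (a quick NumPy check):

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.standard_normal((3, 4))                        # a layer's weight matrix
    x, y = rng.standard_normal(4), rng.standard_normal(4)  # two inputs
    a, b = 2.0, -0.5                                       # arbitrary scalars

    # Linearity: transforming a combination equals combining the transforms.
    assert np.allclose(W @ (a * x + b * y), a * (W @ x) + b * (W @ y))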

Continuity

Continuity is also a wonderfully tractable property. It means that if you adjust something slightly, the resulting change will be limited: no jumps, no gaps. Things run smoothly. That means you can find gradients (or derivatives) of these models by performing, say, backpropagation. And that in turn allows the use of optimization methods, which often use gradients to take educated guesses about which way to step to find the maximum or minimum points of continuous functions.
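
As a toy illustration of what gradients buy you, here is gradient descent finding the minimum of the smooth function f(w) = (w - 3)^2:

    w, lr = 0.0, 0.1            # starting point and learning rate
    for _ in range(100):
        grad = 2.0 * (w - 3.0)  # derivative of (w - 3)^2
        w -= lr * grad          # step downhill, guided by the gradient
    print(w)                    # approximately 3, the minimiser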

How neurons fire

In the brain, neurons fire not continuously but in so-called spikes. In other words, the electrical currents coursing through brain neurons are not a steady flow, but consist of a series of brief current spikes. So any continuity assumption in a model of the brain does not reflect reality.

The electrical workings of neurons have motivated researchers to propose mathematical models describing the flow of currents in the brain. One historically significant model is the Hodgkin-Huxley model, whose proposal led its developers to share the Nobel Prize in Physiology or Medicine. This model gives us a not-so-nice nonlinear set of differential equations, which you may encounter in a course on solving such equations numerically.

The Hodgkin-Huxley model is a complex model, perhaps needlessly complex in certain cases. Consequently, recalling the law of parsimony in statistics (which artificial neural networks routinely laugh at), researchers often use other, simpler models, such as the leaky integrate-and-fire (LIF) model.
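
For a flavour of how simple the LIF model is, here is a minimal discrete-time simulation of a single LIF neuron (a common textbook form; the parameter values are arbitrary, chosen only for illustration):

    import numpy as np

    tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0  # time constant, threshold, reset, step
    steps = 100
    current = 0.06 * np.ones(steps)                   # constant input current
    v, spike_times = 0.0, []

    for t in range(steps):
        # Leak towards rest (0) and integrate the incoming current.
        v += (dt / tau) * (-v) + current[t]
        if v >= v_thresh:                             # threshold crossed: fire and reset
            spike_times.append(t)
            v = v_reset

    print(spike_times)   # the neuron fires periodically under constant input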

Brain-heavy neural networks

A relatively small number of researchers are working away at creating neural networks that more closely resemble brains. Many of these models hinge on using spikes, giving rise to statistical models called spiking neural networks.

These neural networks are, strictly speaking, still artificial, being closely related to recurrent neural networks, a popular type of artificial neural network. But the aim is that these statistical models more closely resemble how brains do their magic.

Where are they?

The big problem with spiking neural networks is training (or fitting) them. The inherent lack of continuity in these networks means you can't use your favourite gradient-based methods. Unlike with regular neural networks, you can't do the forward pass, collecting key values along the way, and then do the backward pass to obtain the gradients.

In lieu of this, researchers have proposed other methods, such as approximating the discontinuous functions in spiking neural networks with continuous ones. You can then find the gradients of these stand-in functions, giving you surrogate gradients, and use those to train the model.

It sounds not quite legit: using one function to infer something about another function. But several papers suggest there is promise in this approach.
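
For the curious, here are a few of the smooth stand-ins that appear in the literature as surrogate derivatives for the spike nonlinearity. These implementations are my own sketches of the general shapes; the exact choices and scaling constants vary from paper to paper.

    import numpy as np

    def sigmoid_derivative(v):
        # Derivative of the logistic sigmoid, centred on the firing threshold.
        s = 1.0 / (1.0 + np.exp(-v))
        return s * (1.0 - s)

    def fast_sigmoid_derivative(v, beta=1.0):
        # Derivative of the "fast sigmoid" v / (1 + |v|), as used in SuperSpike-style methods.
        return 1.0 / (1.0 + beta * np.abs(v)) ** 2

    def boxcar(v, half_width=0.5):
        # Lets gradients through only when the potential is near the threshold.
        return (np.abs(v) < half_width).astype(float)

    v = np.linspace(-3.0, 3.0, 7)
    print(sigmoid_derivative(v), fast_sigmoid_derivative(v), boxcar(v), sep="\n")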

Still, spiking neural networks are a long way from the fame that their continuous cousins enjoy.

Software and hardware

So far I have only covered the model or theory aspects of these brain-inspired statistical models. You can liken that to just talking about the software. That's my leaning, but the idea of creating more brainy neural networks has led to hardware too.

Some companies, such as Intel, are now designing and manufacturing computer processors or chips that seek to mimic how the brain works. But a challenge remains in reconciling software and hardware: proposed software methods may not be suitable for the hardware being designed and made.

Further reading

There are some survey papers on this topic, including:

Update: Here’s a more recent survey paper: