In my internet wanderings, I stumbled upon this blog:
The writer, Tai-Danae Bradley, wrote her PhD thesis applying category theory (a field of mathematics that strives for abstraction) to problems in machine learning.
I came across this blog:
http://gregorygundersen.com/blog
The writer now focuses mostly on financial models and techniques, but earlier posts cover topics in probability and statistics.
This probability blog came up in my news feed:
It seems to focus on stochastic processes such as Brownian motion and friends.
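To give a flavour of the subject matter, here is a minimal sketch (my own, not code from that blog) of how one might simulate a standard Brownian motion path by cumulatively summing independent Gaussian increments:

    import numpy as np

    # Standard Brownian motion on [0, 1] with n steps:
    # each increment is N(0, dt), and the path is their running sum.
    n = 1000
    dt = 1.0 / n
    increments = np.random.normal(0.0, np.sqrt(dt), size=n)
    path = np.concatenate(([0.0], np.cumsum(increments)))
    print(path[-1])  # value at time 1, distributed N(0, 1)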
I have come across posts on this blog at least three or four times:
https://xianblog.wordpress.com/
It is, I later discovered, maintained by Christian P. Robert, a leading researcher in Markov chain Monte Carlo methods and Bayesian statistics.
Want to learn some deep learning? I recommend the Dataflowr website:
https://dataflowr.github.io/website/
It’s a good resource for learning the basics of neural networks with the PyTorch library in Python. The focus is on writing and running code. You can even play around with GPUs (graphics processing units) by running the notebooks on Google Colab, though that’s usually not needed.
It’s mostly run by a researcher who is a former colleague of mine and who, while I was at Inria, was indirectly the reason I started using PyTorch for my machine learning work.
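To give an idea of the kind of code the course has you write, here is a minimal sketch (my own, not an example from Dataflowr) that fits a small fully connected network to random toy data with PyTorch:

    import torch
    import torch.nn as nn

    # Toy data: 100 points in 2D with random binary labels.
    x = torch.randn(100, 2)
    y = torch.randint(0, 2, (100,))

    # A small fully connected network and a basic optimizer.
    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    # A standard training loop: forward pass, loss, backward pass, update.
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    print(loss.item())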
When researching topics for my work (and for posts), I sometimes stumble upon the same blog more than once for different reasons. One of these is this one:
http://extremelearning.com.au/
It’s run by a Tasmanian physicist turned data scientist. Topics include quasi-random sequences, the Fisher-Yates shuffling algorithm, and sampling points uniformly on a triangle.
Update: there is also a post on the multi-armed bandit problem, a prototypical problem in reinforcement learning.
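As an illustration of one of the topics listed above, here is a minimal sketch of the Fisher-Yates shuffle (my own, not code from the blog), which produces a uniformly random permutation of a list in place:

    import random

    def fisher_yates_shuffle(items):
        """Shuffle a list in place, each permutation being equally likely."""
        for i in range(len(items) - 1, 0, -1):
            # Pick an index j uniformly from 0..i (inclusive) and swap.
            j = random.randint(0, i)
            items[i], items[j] = items[j], items[i]

    deck = list(range(10))
    fisher_yates_shuffle(deck)
    print(deck)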