In January 2020 I joined the Department of Computing at Imperial College London as a lecturer (assistant professor). My research aims to develop systems that learn to accomplish tasks through interaction with their environment, using as little experience as possible. I let the underlying principles of inference and learning guide my work, both by developing practical methods from first principles and by uncovering the principles underlying existing methods.

My research is motivated by **reinforcement learning** methods which use explicit **predictive models** of the world to plan behaviour. This approach improves **data efficiency**, as knowledge about the world generalises strongly to new situations. Learning good models of the world, with a reliable estimate of their own **uncertainty**, is crucial to the success of these methods. In addition, these methods need to be **automatic**, in the sense that they should not rely on human design or intervention as they learn.

Currently, the main component of my research is building better predictive models. In reinforcement learning and decision-making applications, we require **a)** uncertainty estimates, for avoiding risks or taking calculated ones, and **b)** automatic adaptation as more experience is gained. **Bayesian inference** provides an elegant framework for representing uncertainty and for automating many aspects of the modelling process. I am particularly interested in bringing the benefits of Bayesian inference to deep learning models, using **Gaussian processes** as a building block.

My work has been presented at the leading machine learning conferences (NeurIPS and ICML), including an oral presentation and a best paper award. I am currently most enthusiastic about our paper on learning which invariance should be used as an inductive bias for a particular dataset.

I will have spaces for a small number (~2) of PhD students per year over the next few years. I am looking for people with a strong academic background (particularly strong mathematical skills) who are keen to work on topics aligned with my interests (see below).

A strong mathematical background is usually demonstrated by a first-class (or equivalent) degree in information or electrical engineering, physics, maths, or computer science. A background in, e.g., linear algebra, probability, statistics, and optimisation is particularly important. You can demonstrate alignment with my research interests with a short research statement that outlines **1)** what problem you are interested in, **2)** why this problem is interesting or important, and **3)** what techniques you think will be useful or necessary for reaching your goals. Topics I am interested in include:

- Bayesian inference and approximations to it (variational inference, EP, MCMC, …).
- Gaussian process models (deep GPs, GPSSM, GPLVM, …) or their theoretical properties.
- Bayesian deep learning (inference over weights, using GPs as building blocks, …).
- Neural networks / other models with invariance properties (e.g. rotation, scale, or more arbitrary) and learning invariances.
- Analysis of deep neural networks (infinite limits and GP relations).
- Model-based reinforcement learning.
- Differentially private machine learning.
- Connections between Bayesian inference and generalisation error bounds.

Before starting at Imperial, I worked with James Hensman for two years as a machine learning researcher at Secondmind, a research-led startup aiming to solve a wide variety of decision making problems. I did my PhD in the Machine Learning Group at the University of Cambridge, working with Carl Rasmussen, and completing my thesis in 2017. I was funded by the EPSRC and awarded a Qualcomm Innovation Fellowship for my final year. During my PhD, I occasionally worked as a machine learning consultant, and I also spent a few months as a visiting researcher at Google in Mountain View, CA. I moved to the UK from the Netherlands for my undergraduate degree in Engineering at Jesus College, University of Cambridge.

Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks. *Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI)*, 2021.

The Promises and Pitfalls of Deep Kernel Learning. *Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI)*, 2021.

Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients. *Proceedings of the 38th International Conference on Machine Learning (ICML)*, 2021.