Mark van der Wilk

Lecturer in Machine Learning

Imperial College London

Research

I am a lecturer (assistant professor) in the Department of Computing at Imperial College London. Together with my research group, I work on machine learning, which aims to automatically find solutions to problems by gaining experience (data) from interacting with the environment. I aim to improve three properties of ML methods:

  • Automatic machine learning: reducing the brittleness of current methods and their reliance on human design and oversight.
  • Data efficiency: Making better predictions with less data.
  • Uncertainty quantification and decision making: Uncertainty tells us when risks are worth taking.

Improvements in these areas would benefit the full spectrum of applications: from tasks with small amounts of data, where separating signal from noise is important, to high-dimensional large-data settings where neural networks are applied to natural data like images. To tackle these problems, I apply strong principles from statistics to machine learning models. Doing so requires developing accurate approximations to the statistical methods, as the exact computations are intractable. In terms of methods, I mainly work on Gaussian processes and neural networks (and the strong links between them).
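As a minimal illustration of the kind of uncertainty quantification a Gaussian process provides (a textbook sketch, not code from my research; all names and parameter values are illustrative), the snippet below computes an exact GP regression posterior with a squared-exponential kernel using only NumPy. The Cholesky solve costs O(n³) in the number of training points, which is exactly why accurate approximations are needed for large datasets.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sq_dist = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.1):
    """Exact GP regression posterior mean and variance at the test inputs."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    # Solve via Cholesky for numerical stability; O(n^3) in the training set size.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)
    return mean, var

# Posterior uncertainty shrinks near observed data and grows away from it.
x = np.array([-2.0, 0.0, 1.5])
y = np.sin(x)
x_star = np.array([0.0, 5.0])  # one input near the data, one far away
mean, var = gp_posterior(x, y, x_star)
```

The predictive variance is what makes GPs useful for decision making: it tells the model user where predictions can be trusted and where more data is needed.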

My work has been presented at the leading machine learning conferences (NeurIPS and ICML), and includes a best paper award. Personally, I’m currently enthusiastic about our paper on learning what invariance should be used as an inductive bias for a dataset of interest.

See my Research Overview page for more details on my research interests.

Academic or Industrial Collaborations

I am also interested in applied problems, and am keen to collaborate. While my research overview gives a more complete picture of topics, I wanted to give a special mention to problems where 1) signal needs to be distinguished from noise, 2) knowledge needs to be encoded into the model, or 3) data is scarce, or needs to be acquired intelligently. Tools like (deep) Gaussian processes can make a difference here, and recent developments have provided new capabilities for dealing with higher-dimensional inputs or large datasets. Ongoing collaborations include tailored Bayesian optimisation models for biomolecular design or optimisation of chemical processes.
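As a toy illustration of how Bayesian optimisation acquires data intelligently (not one of the tailored models mentioned above; all numbers are illustrative), the expected-improvement acquisition function scores candidates by balancing predicted quality against uncertainty, using only the model's predictive mean and standard deviation:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mean, std, best_y):
    """Expected improvement over best_y when minimising, for a Gaussian
    predictive distribution with the given mean and standard deviation."""
    z = (best_y - mean) / std
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best_y - mean) * normal_cdf(z) + std * pdf

# A confident but mediocre candidate vs an uncertain one: the uncertain
# candidate scores higher, because it might turn out much better than the
# best observation so far (exploration beats pure exploitation here).
ei_exploit = expected_improvement(mean=0.5, std=0.01, best_y=0.4)
ei_explore = expected_improvement(mean=0.6, std=1.0, best_y=0.4)
```

This is why uncertainty quantification and intelligent data acquisition go hand in hand: without calibrated predictive uncertainty there is nothing for the acquisition function to trade off.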

If you have a problem that fits these descriptions, please do get in touch. Collaborations can range from publishing case studies or datasets that can serve as community benchmarks, to consulting, to methods research.

Working with me

I will have places for a small number (around two) of PhD students per year over the next few years. I am looking for people with a strong academic background (particularly strong mathematical skills) who are keen to work on topics aligned with my interests. I have written up some tips and guidelines for applying, which I recommend you read before getting in touch or submitting your application.

About

Before starting at Imperial, I worked with Dr. James Hensman for two years as a machine learning researcher at Secondmind, a research-led startup aiming to solve a wide variety of decision making problems. I did my PhD in the Machine Learning Group at the University of Cambridge, working with Prof. Carl Rasmussen, and completing my thesis in 2017. I was funded by the EPSRC and awarded a Qualcomm Innovation Fellowship for my final year. During my PhD, I occasionally worked as a machine learning consultant, and I also spent a few months as a visiting researcher at Google in Mountain View, CA. I moved to the UK from the Netherlands for my undergraduate degree in Engineering.

Recent Publications

(2022). Improved Inverse-Free Variational Bounds for Sparse Gaussian Processes. Fourth Symposium on Advances in Approximate Bayesian Inference.

(2022). Matrix Inversion free variational inference in Conditional Student's T Processes. Fourth Symposium on Advances in Approximate Bayesian Inference.

(2022). Bayesian Neural Network Priors Revisited. The Tenth International Conference on Learning Representations (ICLR).

(2022). Last Layer Marginal Likelihood for Invariance Learning. Proceedings of the Twenty Fifth International Conference on Artificial Intelligence and Statistics (AISTATS).

(2021). Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks. Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI).
