
A Bayesian Perspective on Training Speed and Model Selection

Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty

Variational Gaussian Process Models without Matrix Inverses

In this work, we provide a variational lower bound that can be computed without expensive matrix operations like inversion. Our bound can be used as a drop-in replacement for the existing variational method of Hensman et al. (2013, 2015), and can …
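The method of Hensman et al. referenced above is implemented in standard GP libraries; a minimal sketch of that baseline, assuming GPflow's SVGP model with placeholder data X, Y and inducing inputs Z (all illustrative, not from the paper):

import numpy as np
import gpflow

# Placeholder regression data and inducing inputs (illustrative only).
N, M = 5000, 100
X = np.random.rand(N, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(N, 1)
Z = X[np.random.choice(N, M, replace=False)].copy()

# Sparse variational GP of Hensman et al. (2013, 2015); the bound in the
# abstract above is described as a drop-in replacement for this objective.
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_data=N,
)

# Evaluating this bound involves a Cholesky factorisation of the M x M
# matrix K_MM, i.e. the kind of matrix operation the proposed bound avoids.
elbo = model.elbo((X, Y))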

Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models

We identify a new variational inference scheme for dynamical systems whose transition function is modelled by a Gaussian process. Inference in this setting has either employed computationally intensive MCMC methods, or relied on factorisations of the …
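For context, the dynamical systems referred to here are commonly formulated as Gaussian process state-space models; a generic sketch (not necessarily the exact model of the paper) is $x_{t+1} = f(x_t) + \epsilon_t$, $y_t = g(x_t) + \nu_t$, with Gaussian noise terms $\epsilon_t$, $\nu_t$ and transition function $f \sim \mathcal{GP}\left(0, k(\cdot,\cdot)\right)$, where the latent states $x_t$ are observed only through $y_t$.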

Rates of Convergence for Sparse Variational Gaussian Process Regression

Excellent variational approximations to Gaussian process posteriors have been developed which avoid the $\mathcal{O}\left(N^3\right)$ scaling with dataset size $N$. They reduce the computational cost to $\mathcal{O}\left(NM^2\right)$, with $M \ll N$ the number of …
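For reference, the $\mathcal{O}\left(NM^2\right)$ cost quoted above arises in the standard collapsed variational bound for sparse GP regression (Titsias, 2009); as a sketch, with $K_{NM}$, $K_{MM}$ the kernel matrices between training and inducing inputs and $\sigma^2$ the noise variance,

$\mathcal{L} = \log \mathcal{N}\left(y \mid 0,\, Q_{NN} + \sigma^2 I\right) - \frac{1}{2\sigma^2} \operatorname{tr}\left(K_{NN} - Q_{NN}\right), \qquad Q_{NN} = K_{NM} K_{MM}^{-1} K_{MN},$

where the dominant cost is forming $K_{NM} K_{MM}^{-1/2}$, which is $\mathcal{O}\left(NM^2\right)$ for $M$ inducing inputs.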

Bayesian Layers: A Module for Neural Network Uncertainty

Scalable Bayesian dynamic covariance modeling with variational Wishart and inverse Wishart processes

Learning Invariances using the Marginal Likelihood

Concrete problems for autonomous vehicle safety: advantages of Bayesian deep learning

Convolutional Gaussian Processes