Invariances in Gaussian Processes and How to Learn Them

Abstract

When learning mappings from data, knowledge about which modifications of the input leave the output unchanged can strongly improve generalisation. Exploiting these invariances is commonplace in many machine learning models, in the form of convolutional structure or data augmentation. Choosing which invariances to use, however, is still done with humans in the loop, through trial and error and cross-validation. In this talk, we will discuss how Gaussian processes can be constrained to exhibit invariances, and how this is useful in various applications. We will also show how invariances can be learned with backpropagation, using tools from Bayesian model selection.
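
Below is a minimal sketch of the first idea, constraining a GP to be invariant: averaging a base kernel over a finite set of input transformations yields the covariance of an exactly invariant function. The choice of 90-degree rotations as the transformation group, and the names rbf and invariant_kernel, are illustrative assumptions, not details taken from the talk.

    # A minimal sketch: a GP prior made invariant to a finite group G of
    # input transformations. If f ~ GP(0, k), then
    #   f_inv(x) = (1/|G|) * sum over g in G of f(g x)
    # is exactly invariant under G, and its covariance is
    #   k_inv(x, y) = (1/|G|^2) * sum over g, h in G of k(g x, h y).
    # The group of 90-degree rotations used here is an illustrative choice.
    import numpy as np

    def rbf(x, y, lengthscale=1.0):
        """Squared-exponential base kernel on R^d."""
        d = x - y
        return np.exp(-0.5 * np.dot(d, d) / lengthscale**2)

    # Cyclic group of 90-degree rotations of the plane.
    ROTATIONS = [np.array([[np.cos(t), -np.sin(t)],
                           [np.sin(t),  np.cos(t)]])
                 for t in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

    def invariant_kernel(x, y, lengthscale=1.0):
        """Average the base kernel over all pairs of transformed inputs."""
        return np.mean([rbf(g @ x, h @ y, lengthscale)
                        for g in ROTATIONS for h in ROTATIONS])

    # Sanity check: rotating an input by 90 degrees leaves the kernel,
    # and hence the GP prior, unchanged.
    x, y = np.array([1.0, 0.3]), np.array([-0.4, 2.0])
    g = ROTATIONS[1]
    assert np.isclose(invariant_kernel(x, y), invariant_kernel(g @ x, y))

In the same spirit, learning which invariances hold can be framed as parameterising the set of transformations averaged over (for example, the range of rotations) and optimising those parameters alongside the kernel hyperparameters by backpropagating through the GP marginal likelihood, the Bayesian model selection objective mentioned above.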

Date
Sep 12, 2019 1:00 PM
Location
Gaussian Process Summer School, University of Sheffield, UK