The Bayesian Deep Learning community is widely known for its efforts to bring uncertainty estimates to deep neural networks. However, Bayesian methods have another key advantage: the ability to adjust inductive biases through model selection. Interestingly, model selection and uncertainty estimation are dual problems in the Bayesian framework. In this talk, I will discuss the current state of model selection in Bayesian deep learning, together with some of my recent work towards this goal. I will present theoretically grounded successes in Deep Gaussian Processes and in connecting ensembling to Bayesian inference, as well as recent empirical work on Neural Architecture Search. To finish, I will speculate on other possible benefits that the Bayesian framework can provide, in particular relating to asynchronous computation, and how this could benefit from new hardware.