In this talk I will mainly discuss “Variational Prediction” [1], a way of directly approximating the predictive distribution rather than the posterior over parameters. This then leads to the question of whether we can do transductive learning with Bayesian inference. Transductive learning assumes that the inputs for which the model will be asked to make predictions at test time are known in advance. It seems that transductive learning should be easier than typical inductive learning, where we want to generalise to any prediction the model might be asked to make. I will discuss criteria under which we can say that transductive learning has taken place, and ask whether (approximate) Bayesian inference can do transductive learning. I will then apply variational prediction to Gaussian processes and show that no transductive learning takes place. At this point, I would like to hear opinions on whether transductive learning is possible with (approximate) Bayesian inference.
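
To make the contrast concrete, here is a rough sketch of the idea (schematic only, not the precise objective of [1]). Standard approximate Bayesian inference fits a variational distribution q(\theta) \approx p(\theta \mid \mathcal{D}) over parameters and predicts by marginalising it at test time,

    p(y_* \mid x_*, \mathcal{D}) = \int p(y_* \mid x_*, \theta)\, p(\theta \mid \mathcal{D})\, d\theta \;\approx\; \int p(y_* \mid x_*, \theta)\, q(\theta)\, d\theta,

whereas variational prediction instead posits a tractable distribution q_\phi(y_* \mid x_*) over predictions and fits it to the predictive distribution directly, for example by maximising a variational lower bound on the log predictive (rather than the usual bound on the log marginal likelihood), so that no posterior over parameters needs to be represented or marginalised at test time.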