Gaussian Process Encoders: VAEs with Reliable Latent-Space Uncertainty

Abstract

Variational autoencoders (VAEs) are a versatile class of deep latent variable models that learn expressive latent representations of high-dimensional data. However, the latent variance is not a reliable estimate of how uncertain the model is about a given input point. We address this issue by introducing a sparse Gaussian process encoder, which yields more reliable uncertainty estimates in the latent space. We investigate the implications of replacing the neural network encoder with a Gaussian process in light of recent research. We then demonstrate that the Gaussian process encoder produces reliable uncertainty estimates while maintaining good likelihood estimates on a range of anomaly detection problems. Finally, we investigate sensitivity to noise in the training data and show how an appropriate choice of Gaussian process kernel can lead to automatic relevance determination.
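The abstract mentions two ingredients that can be illustrated concretely: a Gaussian process posterior variance that grows away from the training data (the "reliable uncertainty" property), and an ARD kernel whose per-dimension lengthscales switch irrelevant inputs off. Below is a minimal NumPy sketch of both; the function names, hyperparameters, and data here are illustrative assumptions, not the paper's actual model (which uses a sparse GP encoder inside a VAE).

```python
import numpy as np

def ard_rbf_kernel(X1, X2, lengthscales, variance=1.0):
    # ARD RBF kernel: one lengthscale per input dimension. A very large
    # lengthscale effectively switches that dimension off, which is the
    # mechanism behind automatic relevance determination.
    A, B = X1 / lengthscales, X2 / lengthscales
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq)

def gp_posterior_variance(X_train, X_query, lengthscales, noise=1e-2):
    # Exact GP posterior variance at the query points. It is small near
    # the training inputs and grows for out-of-distribution queries --
    # the behaviour one wants from latent-space uncertainty, in contrast
    # to the latent variance of a standard neural-network encoder.
    K = ard_rbf_kernel(X_train, X_train, lengthscales)
    K += noise * np.eye(len(X_train))
    k_star = ard_rbf_kernel(X_query, X_train, lengthscales)
    k_ss = ard_rbf_kernel(X_query, X_query, lengthscales)
    return np.diag(k_ss - k_star @ np.linalg.solve(K, k_star.T))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 2))  # toy training inputs near the origin
ls = np.array([1.0, 1.0])
near = gp_posterior_variance(X_train, np.array([[0.0, 0.0]]), ls)
far = gp_posterior_variance(X_train, np.array([[10.0, 10.0]]), ls)
# `far` is close to the prior variance; `near` is close to the noise floor.
```

The same kernel demonstrates relevance determination: with a huge lengthscale on one dimension, two points that differ only in that dimension still have kernel value near 1, so that dimension no longer influences the model.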
