
In Statistics and Computing

Stochastic processes provide a mathematically elegant way to model complex data. In theory, they provide flexible priors over function classes that can encode a wide range of interesting assumptions. In practice, however, efficient inference by optimisation or marginalisation is difficult, a problem further exacerbated by big data and high-dimensional input spaces. We propose a novel variational autoencoder (VAE) called the prior encoding variational autoencoder (πVAE). πVAE is a new continuous stochastic process. We use πVAE to learn low-dimensional embeddings of function classes by combining a trainable feature mapping with a generative model using a VAE. We show that our framework can accurately learn expressive function classes such as Gaussian processes, as well as properties of functions such as their integrals. For popular tasks such as spatial interpolation, πVAE achieves state-of-the-art performance in terms of both accuracy and computational efficiency. Perhaps most usefully, we demonstrate an elegant and scalable means of performing fully Bayesian inference for stochastic processes within probabilistic programming languages such as Stan.
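To make the feature-map-plus-VAE idea concrete, below is a minimal PyTorch-style sketch under stated assumptions: the class and attribute names (PiVAESketch, phi, enc, dec), all layer sizes, and the single-stage training loop are hypothetical illustrations, not the authors' implementation, and the paper's actual training procedure and Stan integration are not reproduced here.

import torch
import torch.nn as nn

class PiVAESketch(nn.Module):
    """Hypothetical sketch of a piVAE-like model; not the authors' code."""
    def __init__(self, n_points, n_features=32, latent_dim=8):
        super().__init__()
        # Trainable feature mapping Phi: input location -> feature vector
        self.phi = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                                 nn.Linear(64, n_features))
        # Encoder: a function draw evaluated at n_points locations -> latent
        self.enc = nn.Sequential(nn.Linear(n_points, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        # Decoder: latent z -> linear weights beta over the feature map
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_features))

    def forward(self, x, y):
        # x: (n_points, 1) locations; y: (batch, n_points) function draws
        h = self.enc(y)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation
        beta = self.dec(z)                 # (batch, n_features)
        y_hat = beta @ self.phi(x).T       # f(x) = Phi(x) . beta
        return y_hat, mu, logvar

def elbo_loss(y, y_hat, mu, logvar):
    # Gaussian reconstruction term plus KL to a standard normal prior
    recon = ((y - y_hat) ** 2).sum()
    kl = -0.5 * (1.0 + logvar - mu ** 2 - logvar.exp()).sum()
    return recon + kl

# Toy usage: stand-in draws from a random function class (not a real GP prior)
x = torch.linspace(0.0, 1.0, 50).unsqueeze(-1)
y = torch.sin(x.T * (6.0 * torch.rand(128, 1)))
model = PiVAESketch(n_points=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    y_hat, mu, logvar = model(x, y)
    loss = elbo_loss(y, y_hat, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()

After a model of this shape is trained, the feature map and decoder would be frozen and the low-dimensional latent z inferred for new observations by MCMC, which is what makes the fully Bayesian inference within probabilistic programming languages described in the abstract tractable.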

Swapnil Mishra, Seth Flaxman, Tresnia Berah, Harrison Zhu, Mikko Pakkanen, Samir Bhatt

2022

Bayesian inference, MCMC, Spatio-temporal, VAE