
ArXiv Preprint

Variational Auto-encoders (VAEs) are deep generative latent variable models widely used for a range of downstream tasks. While it has been demonstrated that VAE training can suffer from a number of pathologies, the existing literature lacks characterizations of exactly when these pathologies occur and how they impact downstream task performance. In this paper we concretely characterize conditions under which VAE training exhibits pathologies and connect these failure modes to undesirable effects on specific downstream tasks: learning compressed and disentangled representations, adversarial robustness, and semi-supervised learning.
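For readers unfamiliar with the objective being trained here, the sketch below shows the standard VAE evidence lower bound (ELBO) with a Gaussian encoder, the reparameterization trick, and the closed-form KL term against a standard normal prior. This is a minimal illustrative sketch, not the authors' implementation: the linear encoder/decoder weights, toy dimensions, and unit-variance Gaussian likelihood are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration, not from the paper)
x_dim, z_dim, n = 5, 2, 100

# Untrained random linear "encoder" and "decoder" weights
W_mu = rng.normal(size=(x_dim, z_dim)) * 0.1
W_logvar = rng.normal(size=(x_dim, z_dim)) * 0.1
W_dec = rng.normal(size=(z_dim, x_dim)) * 0.1

x = rng.normal(size=(n, x_dim))

# Encoder: q(z|x) = N(mu, diag(exp(logvar)))
mu = x @ W_mu
logvar = x @ W_logvar

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder mean; Gaussian likelihood p(x|z) = N(x_hat, I)
x_hat = z @ W_dec

# ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)), with prior p(z) = N(0, I)
recon = -0.5 * np.sum((x - x_hat) ** 2, axis=1)  # log-likelihood up to a constant
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)  # closed-form KL
elbo = np.mean(recon - kl)
```

Training maximizes this ELBO by gradient ascent on the encoder and decoder parameters; the pathologies the paper characterizes arise from the tension between the reconstruction term and the KL term in this objective.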

Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

2020-07-14