
In IEEE Transactions on Signal Processing: a publication of the IEEE Signal Processing Society

Probabilistic generative models are attractive for scientific modeling because their inferred parameters can be used to generate hypotheses and design experiments. This requires that the learned model provide an accurate representation of the input data and yield a latent space that effectively predicts outcomes relevant to the scientific question. Supervised Variational Autoencoders (SVAEs) have previously been used for this purpose, as a carefully designed decoder can serve as an interpretable generative model of the data, while the supervised objective ensures a predictive latent representation. Unfortunately, the supervised objective forces the encoder to learn a biased approximation to the generative posterior distribution, which renders the generative parameters unreliable when used in scientific models. This issue has remained undetected because the reconstruction losses commonly used to evaluate model performance do not reveal bias in the encoder. We address this previously unreported issue by developing a second-order supervision framework (SOS-VAE) that updates the decoder parameters, rather than the encoder, to induce a predictive latent representation. This ensures that the encoder maintains a reliable posterior approximation and that the decoder parameters can be effectively interpreted. We extend this technique to let the user trade off bias in the generative parameters for improved predictive performance, providing an intermediate option between SVAEs and our new SOS-VAE. We also use this methodology to address missing-data issues that often arise when combining recordings from multiple scientific experiments. We demonstrate the effectiveness of these developments on synthetic data and electrophysiological recordings, with an emphasis on how our learned representations can be used to design scientific experiments.
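The core idea, that supervision should adjust the decoder (generative) parameters while the encoder remains tied to the generative posterior, can be illustrated with a toy model. The sketch below is not the paper's algorithm; it is a minimal one-dimensional linear-Gaussian analogue in which the posterior mean is available in closed form, so the supervised gradient can be routed through the generative parameter `d` analytically. All names (`d`, `w`, `mu`) and the toy setup are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: z ~ N(0, 1), x = d*z + eps, eps ~ N(0, 1).
# The exact posterior mean is mu(x) = d*x / (d**2 + 1); the "encoder" is
# not a free network here but is tied to the generative parameter d, so
# it always remains a valid posterior of the current generative model.
# A linear predictor acts on the latent: y_hat = w * mu(x).
true_d, true_w = 2.0, 1.5
n = 512
z = rng.normal(size=n)
x = true_d * z + rng.normal(size=n)
y = true_w * z + 0.1 * rng.normal(size=n)

d, w = 0.5, 0.0  # initial decoder (generative) and predictor parameters

def posterior_mean(d, x):
    return d * x / (d**2 + 1)

mse_init = np.mean((w * posterior_mean(d, x) - y) ** 2)

lr = 0.01
for _ in range(2000):
    mu = posterior_mean(d, x)
    err = w * mu - y
    # Supervised gradient flows to the DECODER through the posterior
    # (d mu / d d), rather than distorting a separate encoder:
    dmu_dd = x * (1 - d**2) / (d**2 + 1) ** 2
    grad_d = np.mean(2 * err * w * dmu_dd)
    grad_w = np.mean(2 * err * mu)
    d -= lr * grad_d
    w -= lr * grad_w

mse_final = np.mean((w * posterior_mean(d, x) - y) ** 2)
```

In this toy, prediction improves by moving `d` (and `w`), while the mapping from `x` to the latent is always the exact posterior of the current generative model, which is the property the SOS-VAE objective is designed to preserve; the actual method achieves this for amortized encoders via second-order gradients.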

Liyun Tu, Austin Talbot, Neil M. Gallagher, David E. Carlson

2022

interpretable models, probabilistic generative models, scientific analysis, second-order gradient, supervised learning, variational autoencoders