
arXiv preprint

Deep learning models have achieved breakthrough successes in domains where data is plentiful. However, such models are prone to overfitting when trained on high-dimensional, low-sample-size datasets. Furthermore, the black-box nature of such models has limited their application in domains where model trust is critical. As a result, deep learning has struggled to make inroads in domains such as precision medicine, where small sample sizes are the norm and model trust is paramount. Often, even in low-data settings, we have some prior information about each input feature that may relate to that feature's relevance to the prediction task. In this work we propose the learned attribution prior framework to take advantage of such information and alleviate these issues. For a given prediction task, our framework jointly learns a relationship between prior information about a feature and that feature's importance to the task, while also biasing the prediction model to focus on the features with high predicted importance. We find that training models with our framework improves accuracy in low-data settings. Furthermore, we find that the resulting learned meta-feature-to-feature relationships open up new avenues for model interpretation.
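The joint-learning idea in the abstract can be sketched in miniature. The code below is an illustrative toy, not the authors' implementation: the "prediction model" is linear, the "attribution" of feature i is simply |w_i| (a stand-in for the gradient-based attributions used with deep models), and the "prior model" is a linear map from each feature's meta-features to a predicted importance. All names, shapes, and the choice of penalty are assumptions made for the sketch; the only point carried over from the text is that both models are trained jointly, with a penalty tying each feature's attribution to the importance predicted from its meta-features.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10, 3               # samples, features, meta-features per feature

# Synthetic data in which meta-features really do determine feature relevance.
M = rng.normal(size=(d, k))        # meta-feature matrix: one row per input feature
v_true = rng.normal(size=k)
importance = np.abs(M @ v_true)    # ground-truth per-feature importance
w_true = importance * rng.choice([-1.0, 1.0], size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)                    # prediction-model parameters
v = np.zeros(k)                    # prior-model parameters (meta-feature weights)
lam, lr = 0.1, 0.01                # prior-penalty strength, learning rate

for _ in range(2000):
    resid = X @ w - y
    phi = M @ v                    # prior model's predicted importance per feature
    gap = np.abs(w) - phi          # mismatch between attribution and learned prior
    # Joint objective (illustrative): MSE(y, Xw) + (lam/2) * ||gap||^2.
    grad_w = X.T @ resid / n + lam * gap * np.sign(w)
    grad_v = -lam * M.T @ gap
    w -= lr * grad_w
    v -= lr * grad_v

mse = np.mean((X @ w - y) ** 2)
```

After training, `v` plays the role of the learned attribution prior: inspecting it indicates which meta-features the model associates with important features, which is the kind of meta-feature-to-feature relationship the abstract points to as a new avenue for interpretation.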

Ethan Weinberger, Joseph Janizek, Su-In Lee

2019-12-20