
bioRxiv Preprint

Much work has been done to apply machine learning and deep learning to genomics tasks, but these applications usually require extensive domain knowledge, and the resulting models provide very limited interpretability. Here we present the Nucleic Transformer, a conceptually simple but effective and interpretable model architecture that excels in a variety of DNA/RNA tasks. The Nucleic Transformer processes nucleic acid sequences with self-attention and convolutions, two deep learning techniques that have proved dominant in computer vision and natural language processing. We demonstrate that the Nucleic Transformer can be trained in both supervised and unsupervised fashion, without much domain knowledge, to achieve high performance with limited amounts of data on E. coli promoter classification, viral genome identification, and prediction of the degradation properties of COVID-19 mRNA vaccine candidates. Additionally, we showcase extraction of promoter motifs from learned attention weights and show how direct visualization of self-attention maps assists informed decision making with deep learning models.
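The core idea described above, processing one-hot-encoded nucleotide sequences with convolutions (to aggregate local k-mer context) followed by self-attention (whose attention map can be inspected), can be sketched in plain NumPy. This is an illustrative sketch, not the authors' implementation; the example sequence, filter width, and dimensions are hypothetical.

```python
import numpy as np

def one_hot(seq, alphabet="ACGT"):
    # Encode a DNA string as an (L, 4) one-hot matrix.
    idx = {c: i for i, c in enumerate(alphabet)}
    x = np.zeros((len(seq), len(alphabet)))
    for i, c in enumerate(seq):
        x[i, idx[c]] = 1.0
    return x

def conv1d(x, w):
    # x: (L, d_in); w: (k, d_in, d_out). A valid 1D convolution that turns
    # each window of k nucleotides into a k-mer representation: (L-k+1, d_out).
    L, _ = x.shape
    k, _, d_out = w.shape
    return np.stack([np.tensordot(x[i:i + k], w, axes=([0, 1], [0, 1]))
                     for i in range(L - k + 1)])

def self_attention(x):
    # Single-head scaled dot-product self-attention with identity projections.
    # Returns the attended output and the attention map, which is the object
    # one would visualize to interpret what the model focuses on.
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a = a / a.sum(axis=1, keepdims=True)
    return a @ x, a

rng = np.random.default_rng(0)
seq = "TTGACAATTAATCATCGAACTAG"        # hypothetical promoter-like fragment (L = 23)
x = one_hot(seq)
w = rng.normal(size=(6, 4, 8)) * 0.1   # hypothetical 6-mer filters, 8 channels
h = conv1d(x, w)                       # (18, 8) k-mer representations
out, attn = self_attention(h)          # attn is an (18, 18) row-stochastic map
```

A trained model would of course use learned convolution filters, multi-head attention with learned projections, and a classification head; the sketch only shows the data flow that makes the attention map available for inspection.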

He, S.; Gao, B.; Sabnis, R.; Sun, Q.

2021-01-29