In Nature Communications; h5-index 260.

Deep learning algorithms trained on instances that violate the assumption of being independent and identically distributed (i.i.d.) are known to experience destructive interference, a phenomenon characterized by a degradation in performance. Such violations, however, are ubiquitous in clinical settings, where data are streamed temporally from different clinical sites and from a multitude of physiological sensors. To mitigate this interference, we propose a continual learning strategy, entitled CLOPS, that employs a replay buffer. To guide the storage of instances in the buffer, we propose end-to-end trainable parameters, termed task-instance parameters, that quantify the difficulty with which data points are classified by a deep-learning system. We validate the interpretation of these parameters against clinical domain knowledge. To replay instances from the buffer, we exploit uncertainty-based acquisition functions. In three of the four continual learning scenarios, reflecting transitions across diseases, time, data modalities, and healthcare institutions, we show that CLOPS outperforms the state-of-the-art methods GEM and MIR. We also conduct extensive ablation studies to demonstrate the necessity of the various components of our proposed strategy. Our framework has the potential to pave the way for diagnostic systems that remain robust over time.
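The two mechanisms the abstract describes, difficulty-guided storage into a replay buffer and uncertainty-based acquisition for replay, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class and function names are hypothetical, and the per-example difficulty scores stand in for the learned task-instance parameters, which in CLOPS are trained end-to-end with the network.

```python
import numpy as np

def predictive_entropy(probs):
    # Entropy of the model's softmax outputs; higher means more uncertain.
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

class ReplayBuffer:
    """Toy replay buffer in the spirit of CLOPS (names are illustrative).

    Storage: retain the instances whose (given) difficulty scores are
    highest, mimicking selection via task-instance parameters.
    Replay: acquire the buffered instances with the highest predictive
    entropy, i.e. an uncertainty-based acquisition function.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # list of (instance, difficulty) pairs

    def store(self, xs, difficulties):
        self.items.extend(zip(xs, difficulties))
        # Keep only the most difficult instances, up to capacity.
        self.items.sort(key=lambda pair: pair[1], reverse=True)
        self.items = self.items[: self.capacity]

    def acquire(self, probs_fn, k):
        # Replay the k instances the current model is most uncertain about.
        xs = [x for x, _ in self.items]
        entropy = predictive_entropy(probs_fn(np.array(xs)))
        order = np.argsort(-entropy)[:k]
        return [xs[i] for i in order]

# Usage: store five instances, keep the three hardest, replay the most
# uncertain one under a stand-in predictive model.
buf = ReplayBuffer(capacity=3)
buf.store(xs=[0, 1, 2, 3, 4], difficulties=[0.1, 0.9, 0.5, 0.7, 0.2])

def probs_fn(xs):
    # Stand-in model: maximally uncertain on instance 3, confident elsewhere.
    return np.array([[0.5, 0.5] if x == 3 else [0.99, 0.01] for x in xs])

replayed = buf.acquire(probs_fn, k=1)  # → [3]
```

In the actual method the difficulty scores are not fixed inputs but trainable parameters optimized jointly with the classifier; the buffer logic above only illustrates how such scores and an acquisition function would interact.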

Dani Kiyasseh, Tingting Zhu, David Clifton