ArXiv Preprint
When a deep learning model is sequentially trained on different datasets, it
forgets the knowledge acquired from previous data, a phenomenon known as
catastrophic forgetting. This deteriorates the model's performance across
datasets, which is a critical issue in privacy-preserving deep learning
(PPDL) applications based on transfer learning (TL). To overcome this, we
propose review learning (RL), a generative-replay-based continual learning
technique that does not require a separate generator. Data samples are
generated from the memory stored within the synaptic weights of the deep
learning model, and these samples are used to review the knowledge acquired
from previous datasets. The performance of RL was validated through PPDL experiments.
Simulations and real-world medical multi-institutional experiments were
conducted using three types of binary-classification electronic health record
data. In the real-world experiments, the global area under the receiver
operating characteristic curve (AUROC) was 0.710 for RL and 0.655 for TL. Thus,
RL retained previously learned knowledge more effectively than TL.
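The core mechanism, recovering replay samples from the classifier's own weights rather than from a separate generator, can be illustrated with a minimal sketch. The PyTorch code below uses a simple model-inversion heuristic; the helper names (synthesize_review_batch, train_with_review), the inversion procedure, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of generator-free replay ("review") for a binary
# classifier on tabular EHR-like features, assuming a PyTorch model.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


def synthesize_review_batch(model, n_samples, n_features, target_label,
                            steps=200, lr=0.1):
    """Optimize random inputs until the frozen model classifies them as
    `target_label`, recovering samples from the memory stored in its weights."""
    model.eval()
    x = torch.randn(n_samples, n_features, requires_grad=True)
    target = torch.full((n_samples,), float(target_label))
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x).squeeze(-1)
        F.binary_cross_entropy_with_logits(logits, target).backward()
        opt.step()
    return x.detach()


def train_with_review(model, new_loader, n_features, epochs=5, lr=1e-3):
    """Train on a new dataset while reviewing synthesized samples that are
    soft-labeled by a frozen copy of the previously trained model."""
    old_model = copy.deepcopy(model)  # snapshot of previously acquired knowledge
    criterion = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x_new, y_new in new_loader:
            half = max(1, len(x_new) // 2)
            # Synthesize review samples for both classes of the binary task.
            x_rev = torch.cat([
                synthesize_review_batch(old_model, half, n_features, 0),
                synthesize_review_batch(old_model, half, n_features, 1),
            ])
            with torch.no_grad():
                y_rev = torch.sigmoid(old_model(x_rev)).squeeze(-1)  # soft labels
            x = torch.cat([x_new, x_rev])
            y = torch.cat([y_new.float(), y_rev])
            opt.zero_grad()
            criterion(model(x).squeeze(-1), y).backward()
            opt.step()
    return model
```

In this sketch, mixing the synthesized review batch with each new-data batch plays the role of generative replay while requiring only the classifier itself, which matches the abstract's description at a conceptual level.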
Jaesung Yoo, Sunghyuk Choi, Ye Seul Yang, Suhyeon Kim, Jieun Choi, Dongkyeong Lim, Yaeji Lim, Hyung Joon Joo, Dae Jung Kim, Rae Woong Park, Hyeong-Jin Yoon, Kwangsoo Kim
2022-10-17