In Scientific Reports; h5-index 158.0
Federated learning (FL) is an emerging Artificial Intelligence (AI) technique aimed at data privacy preservation: the training data for a deep learning model remain decentralized across their owners. This approach to data security and privacy is relevant to many critical domains with highly sensitive data, including medical image analysis. Developing a strong, scalable, and precise deep learning model has been shown to depend on a variety of high-quality data from different centers. However, data holders may not be willing to share their data because of privacy restrictions. In this paper, we address this challenge with a federated learning paradigm. Specifically, we present a case study on the whole slide image classification problem. At each local client center, a multiple-instance learning classifier is developed to perform whole slide image classification. We introduce a privacy-preserving federated learning framework based on a hypernetwork to update the global model. The hypernetwork is deployed at the global center and produces the weights of each local network conditioned on its input; in this way, the hypernetwork can simultaneously learn a family of local client networks. Instead of communicating raw data from the local clients, only model parameters injected with noise are transferred between the local clients and the global model. Using a large-scale collection of whole slide images with only slide-level labels, we evaluated our method on two different whole slide image classification problems. The results demonstrate that our proposed hypernetwork-based federated learning model can effectively leverage multi-center data to develop a more accurate whole slide image classifier, with significant improvements over the isolated local centers and the commonly used federated averaging baseline. Code will be available.
Lin Yanfei, Wang Haiyi, Li Weichen, Shen Jun
2023-Jan-31
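The abstract describes a server-side hypernetwork that generates the weights of each client's local classifier, with clients returning only noise-injected parameter updates rather than raw slides. The paper's actual architecture, training schedule, and noise mechanism are not given here, so the following is only a minimal sketch of that general scheme under stated assumptions: the `HyperNet` class, its layer sizes, the tiny stand-in local model (in place of a real multiple-instance learning classifier), the learning rates, and the Gaussian noise scale are all hypothetical choices, and `torch.func.functional_call` assumes PyTorch ≥ 2.0.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Server-side hypernetwork: maps a learnable client embedding
    to the flat parameter vector of a small local classifier."""
    def __init__(self, n_clients, embed_dim, n_params):
        super().__init__()
        self.embeddings = nn.Embedding(n_clients, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, client_id):
        return self.mlp(self.embeddings(client_id)).squeeze(0)

def unflatten(flat, template):
    """Slice a flat parameter vector into a dict shaped like `template`."""
    out, i = {}, 0
    for k, v in template.items():
        out[k] = flat[i:i + v.numel()].view_as(v)
        i += v.numel()
    return out

# Stand-in local model (a real system would use a MIL classifier over patch features).
local_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
template = {k: v.detach() for k, v in local_model.state_dict().items()}
n_params = sum(v.numel() for v in template.values())

hnet = HyperNet(n_clients=4, embed_dim=8, n_params=n_params)
opt = torch.optim.Adam(hnet.parameters(), lr=1e-3)

for rnd in range(3):                                 # communication rounds
    client = torch.randint(0, 4, (1,))               # pick one client center
    theta = hnet(client)                             # server generates that client's weights

    # --- client side: load the generated weights, run a few local SGD steps ---
    x, y = torch.randn(16, 64), torch.randint(0, 2, (16,))      # synthetic bag features / slide labels
    flat_local = theta.detach().clone().requires_grad_(True)
    for _ in range(5):
        params = unflatten(flat_local, template)
        logits = torch.func.functional_call(local_model, params, (x,))
        loss = nn.functional.cross_entropy(logits, y)
        g, = torch.autograd.grad(loss, flat_local)
        flat_local = (flat_local - 0.05 * g).detach().requires_grad_(True)

    # Only the parameter update, injected with Gaussian noise, leaves the client.
    delta = (flat_local.detach() - theta.detach()) + 0.01 * torch.randn(n_params)

    # --- server side: push the received update back through the hypernetwork ---
    opt.zero_grad()
    theta.backward(-delta)        # gradient w.r.t. hypernetwork params via d(theta)/d(phi)
    opt.step()
```

In this sketch the server never sees client data; it only backpropagates each client's noisy weight delta through the hypernetwork, so one shared hypernetwork jointly fits a family of per-client models.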