ArXiv Preprint
Histopathological tissue classification is a fundamental task in
computational pathology. Deep learning-based models have achieved superior
performance, but centralized training requires pooling patient data and thus
risks privacy leakage. Federated learning (FL) can safeguard privacy by
keeping training samples local, but existing FL-based frameworks require a
large number of well-annotated training samples and numerous rounds of
communication, which hinders their practicality in real-world clinical
scenarios. In this paper, we propose a universal and lightweight federated
learning framework, named Federated Deep-Broad Learning (FedDBL), to achieve
superior classification performance with limited training samples and only
one-round communication. By combining a pre-trained deep feature extractor, a
fast and lightweight broad learning inference system, and a classical
federated aggregation approach, FedDBL dramatically reduces data
dependency and improve communication efficiency. Five-fold cross-validation
demonstrates that FedDBL greatly outperforms competing methods with only one
round of communication and limited training samples, and even achieves
performance comparable to frameworks trained with multiple communication rounds.
Furthermore, due to the lightweight design and one-round communication, FedDBL
reduces the communication burden from 4.6GB to only 276.5KB per client using
the ResNet-50 backbone over 50 rounds of training. Since neither data nor deep
models are shared across clients, privacy is well protected and model security
is guaranteed, with no risk of model inversion attacks. Code is
available at https://github.com/tianpeng-deng/FedDBL.
Tianpeng Deng, Yanqi Huang, Zhenwei Shi, Jiatai Lin, Qi Dou, Ke Zhao, Fang-Fang Liu, Yu-Mian Jia, Jin Wang, Bingchao Zhao, Changhong Liang, Zaiyi Liu, Xiao-jing Guo, Guoqiang Han, Xin Chen, Chu Han
2023-02-24
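
The one-round pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the random broad-learning maps, the node counts, the ridge regularizer, and the FedAvg-style weighted average are all assumptions standing in for FedDBL's actual components, and random vectors stand in for frozen ResNet-50 features.

```python
import numpy as np

rng = np.random.default_rng(0)

def broad_expand(X, Wf, We):
    """Map deep features to broad feature nodes plus enhancement nodes."""
    Z = np.tanh(X @ Wf)   # feature nodes
    H = np.tanh(Z @ We)   # enhancement nodes
    return np.hstack([Z, H])

def local_train(X, Y, Wf, We, lam=1e-2):
    """Closed-form ridge regression for output weights: one shot, no epochs."""
    A = broad_expand(X, Wf, We)
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

def fed_avg(client_weights, client_sizes):
    """FedAvg-style weighted average -- a single communication round."""
    total = float(sum(client_sizes))
    return sum(W * (n / total) for W, n in zip(client_weights, client_sizes))

# Demo: 3 clients with 2048-d features (stand-in for a frozen ResNet-50).
d, nf, ne, c = 2048, 64, 32, 3                # dims, node counts, classes
Wf = rng.standard_normal((d, nf)) * 0.05      # shared random maps,
We = rng.standard_normal((nf, ne)) * 0.05     # broadcast once to all clients

weights, sizes = [], []
for _ in range(3):
    n = 40
    X = rng.standard_normal((n, d))           # hypothetical deep features
    Y = np.eye(c)[rng.integers(0, c, n)]      # one-hot labels
    weights.append(local_train(X, Y, Wf, We))
    sizes.append(n)

# Only these small output-weight matrices travel to the server, once.
W_global = fed_avg(weights, sizes)
print(W_global.shape)                         # (nf + ne, c)
```

Because each client uploads only the broad-learning output weights (here a 96x3 matrix) instead of a full deep model, the per-client communication cost stays in the kilobyte range, consistent with the abstract's 276.5KB figure for a single round.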