In Journal of the American Medical Informatics Association (JAMIA)
OBJECTIVE : Distributed learning avoids the problems associated with central data collection by training models locally at each site. This can be achieved either by federated learning (FL), which aggregates multiple models trained in parallel, or by training a single model that visits the sites sequentially, the traveling model (TM). While both approaches have been applied to medical imaging tasks, their performance in scenarios with limited local data remains unknown. In this study, we specifically analyze the performance of FL and the TM when only very small sample sizes are available per site.
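The two schemes contrasted in the abstract can be sketched on a toy problem. The code below is a minimal illustration, not the authors' implementation: a 1-D linear model (y = w·x) is trained across three simulated "sites", once with FedAvg-style parallel averaging (FL) and once with a single model visiting the sites in sequence (TM). All names, the learning rate, and the toy data are illustrative assumptions.

```python
# Toy contrast of federated learning (FL) vs. the traveling model (TM)
# for a 1-D linear model y = w * x. Illustrative sketch only; the sites,
# data, and hyperparameters are invented for demonstration.

def grad_step(w, data, lr=0.01):
    """One gradient-descent step on mean squared error for y = w * x."""
    g = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * g

# Three "sites", each holding only a few samples (noiseless y = 3x).
sites = [[(1.0, 3.0), (2.0, 6.0)],
         [(0.5, 1.5)],
         [(3.0, 9.0), (4.0, 12.0)]]

# FL (FedAvg-style): each round, every site updates a local copy of the
# shared weights in parallel; a central server averages the results.
w_fl = 0.0
for _ in range(200):
    local = [grad_step(w_fl, d) for d in sites]
    w_fl = sum(local) / len(local)

# TM: a single model travels from site to site, updating in place at
# each stop before moving on to the next one.
w_tm = 0.0
for _ in range(200):
    for d in sites:
        w_tm = grad_step(w_tm, d)

print(round(w_fl, 2), round(w_tm, 2))  # both should approach 3.0
```

On this noiseless toy data both schemes recover the true weight; the study's point is that with heterogeneous real data and tiny per-site samples, the sequential updates of the TM behave more like central learning than averaged parallel updates do.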
MATERIALS AND METHODS : A total of 2025 T1-weighted magnetic resonance imaging (MRI) scans were used to investigate the effect of sample size on FL and the TM for brain age prediction. We evaluated models across 18 scenarios, varying the number of samples per site (1, 2, 5, 10, and 20) and the number of training rounds (20, 40, and 200).
RESULTS : Our results demonstrate that the TM outperforms FL for every sample size examined. In the extreme case in which each site provided only one sample, FL achieved a mean absolute error (MAE) of 18.9 ± 0.13 years, while the TM achieved an MAE of 6.21 ± 0.50 years, comparable to central learning (MAE = 5.99 years).
DISCUSSION : Although FL is the more commonly used approach, our study demonstrates that the TM is the better implementation when only small sample sizes are available per site.
CONCLUSION : The TM not only offers new opportunities to apply machine learning models in rare disease and pediatric research but also allows even small hospitals to contribute small datasets.
Raissa Souza, Pauline Mouches, Matthias Wilms, Anup Tuladhar, Sönke Langner, Nils D. Forkert
2022-Oct-26
Keywords: brain age prediction, distributed learning, machine learning