In IEEE Journal of Biomedical and Health Informatics
Deep learning approaches for medical image analysis are limited by small dataset sizes due to factors such as patient privacy and the difficulty of obtaining expert labels for each image. In medical imaging system development pipelines, the development of the system and its classification algorithms often overlaps with data collection, producing small, disjoint datasets collected at multiple sites under differing protocols. In this setting, merging data from different collection centers increases the amount of training data; however, directly combining the datasets is likely to fail due to domain shifts between imaging centers. In contrast to previous approaches that focus on a single dataset, we add a domain adaptation module to a neural network model and train on multiple datasets. Our approach encourages domain invariance between two multispectral autofluorescence lifetime imaging (maFLIM) datasets of in vivo oral lesions collected with an imaging system currently in development. The two datasets differ in the sub-populations imaged and in the calibration procedures used during data collection. We mitigate these differences using a gradient reversal layer and a domain classifier. Our final model trained on both datasets substantially improves performance, including a significant increase in specificity, and achieves a significant increase in average performance over the best baseline model trained on both domains (p = 0.0341). Our approach lays the foundation for faster development of computer-aided diagnostic systems and presents a feasible way to build a single classifier that robustly diagnoses images from multiple data centers in the presence of domain shifts.
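The gradient reversal mechanism mentioned above (popularized by Ganin and Lempitsky's domain-adversarial training) acts as the identity in the forward pass and negates, and optionally scales, gradients in the backward pass, so that the feature extractor is pushed to produce domain-invariant features while the domain classifier tries to tell the domains apart. The abstract does not give implementation details, so the following is a minimal NumPy sketch of the layer's forward/backward behavior; the class name, `lam` coefficient, and the toy arrays are illustrative assumptions, not the authors' code.

```python
import numpy as np

class GradientReversal:
    """Sketch of a gradient reversal layer (GRL).

    Forward pass: identity (features flow unchanged to the domain classifier).
    Backward pass: gradients are multiplied by -lam, so minimizing the domain
    loss downstream maximizes domain confusion upstream in the feature extractor.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off coefficient for the adversarial signal (assumed name)

    def forward(self, x):
        # Identity: the domain classifier sees the features as-is.
        return x

    def backward(self, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature extractor.
        return -self.lam * grad_output

# Toy demonstration with made-up values:
grl = GradientReversal(lam=0.5)
features = np.array([1.0, -2.0, 3.0])
out = grl.forward(features)                         # identical to features
grad_from_domain = np.array([0.2, 0.4, -0.1])       # gradient of the domain loss
grad_to_features = grl.backward(grad_from_domain)   # reversed: -0.5 * grad
```

In a full model, `grad_to_features` would be added to the gradient from the lesion-classification loss, so a single feature extractor is trained to be both discriminative for diagnosis and uninformative about which imaging center produced the data.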
Caughlin Kayla, Duran-Sierra Elvis, Cheng Shuna, Cuenca Rodrigo, Ahmed Beena, Ji Jim, Martinez Mathias, Al-Khalil Moustafa, Al-Enazi Hussain, Cheng Yi-Shing Lisa, Wright John, Jo Javier A, Busso Carlos