Magnetic Resonance in Medicine
PURPOSE : The development of advanced estimators for intravoxel incoherent motion (IVIM) modeling is often motivated by a desire to produce smoother parameter maps than least squares (LSQ). Deep neural networks show promise to this end, yet their performance may depend on many choices regarding the learning strategy. In this work, we explored the potential impact of key training features in unsupervised and supervised learning for IVIM model fitting.
METHODS : Two synthetic data sets and one in vivo data set from glioma patients were used to train unsupervised and supervised networks and to assess generalizability. Network stability for different learning rates and network sizes was assessed in terms of loss convergence. Accuracy, precision, and bias were assessed by comparing estimates against ground truth after training on different data (synthetic and in vivo).
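For context, the IVIM model referred to throughout is the standard bi-exponential signal decay, and the LSQ baseline is a conventional non-linear fit per voxel. The sketch below (not code from the paper; b-values, noise level, and parameter bounds are illustrative assumptions) simulates one voxel's signal and fits it with least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, f, Dstar, D):
    """Bi-exponential IVIM model of normalized signal S(b)/S0.

    f     : perfusion fraction
    Dstar : pseudo-diffusion coefficient (mm^2/s)
    D     : tissue diffusion coefficient (mm^2/s)
    """
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

# Illustrative acquisition and ground truth (assumed values, not from the paper)
b_values = np.array([0, 10, 20, 50, 100, 200, 400, 800], dtype=float)  # s/mm^2
true_params = (0.10, 0.02, 0.001)  # f, D*, D

# Simulate one noisy voxel
rng = np.random.default_rng(0)
signal = ivim_signal(b_values, *true_params) + rng.normal(0, 0.02, b_values.size)

# Conventional voxel-wise LSQ fit (the baseline the paper compares against)
popt, _ = curve_fit(
    ivim_signal, b_values, signal,
    p0=(0.1, 0.01, 0.001),
    bounds=([0.0, 0.005, 0.0], [1.0, 0.3, 0.005]),
)
f_fit, Dstar_fit, D_fit = popt
```

Repeating such a fit independently in every voxel is what produces the noisy LSQ parameter maps that motivate neural estimators.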
RESULTS : A high learning rate, small network size, and early stopping resulted in suboptimal solutions and correlations in the fitted IVIM parameters. Extending training beyond early stopping resolved these correlations and reduced parameter error. However, extensive training increased noise sensitivity, with unsupervised estimates displaying variability similar to that of LSQ. In contrast, supervised estimates demonstrated improved precision but were strongly biased toward the mean of the training distribution, resulting in relatively smooth, yet possibly deceptive, parameter maps. Extensive training also reduced the impact of individual hyperparameters.
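The bias of supervised estimates toward the training-distribution mean follows from MSE-trained networks approximating the posterior mean. A standard Gaussian toy (not from the paper; all numbers are illustrative assumptions) shows this shrinkage in closed form:

```python
# An MSE-trained supervised network approximates E[theta | signal].
# For a Gaussian prior theta ~ N(mu, tau^2) (the training distribution)
# and a noisy observation y = theta + N(0, sigma^2), the posterior mean is
#   E[theta | y] = (tau^2 * y + sigma^2 * mu) / (tau^2 + sigma^2),
# i.e. the estimate is pulled from y toward the prior mean mu.
mu, tau, sigma = 0.10, 0.05, 0.05  # illustrative prior mean, prior SD, noise SD
y = 0.25                           # observed value, far from the prior mean

post_mean = (tau**2 * y + sigma**2 * mu) / (tau**2 + sigma**2)
# With tau == sigma, the estimate lands halfway between y and mu (0.175):
# low variance across repeats, but systematically biased for atypical voxels.
```

This is the mechanism behind smooth yet possibly deceptive supervised parameter maps: shrinkage suppresses noise but also suppresses genuine deviations from the training distribution.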
CONCLUSION : Voxel-wise deep learning for IVIM fitting demands either sufficiently extensive training to minimize parameter correlation and bias (unsupervised learning) or a close correspondence between the training and test sets (supervised learning).
Kaandorp Misha P T, Zijlstra Frank, Federau Christian, While Peter T
2023-Mar-13
IVIM, diffusion-weighted magnetic resonance imaging, gliomas, intravoxel incoherent motion, supervised deep learning, unsupervised deep learning