ArXiv Preprint
Deep Learning models are easily disturbed by variations in the input images
that were not seen during training, resulting in unpredictable behaviours. Such
Out-of-Distribution (OOD) images represent a significant challenge in the
context of medical image analysis, where the range of possible abnormalities is
extremely wide, including artifacts, unseen pathologies, or different imaging
protocols. In this work, we evaluate various uncertainty frameworks to detect
OOD inputs in the context of Multiple Sclerosis lesion segmentation. By
implementing a comprehensive evaluation scheme including 14 sources of OOD of
various natures and strengths, we show that methods relying on the predictive
uncertainty of binary segmentation models often fail to detect outlying
inputs. In contrast, learning to segment anatomical labels alongside
lesions substantially improves the ability to detect OOD inputs.
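To make the notion of "predictive uncertainty" concrete, the sketch below computes voxel-wise predictive entropy from a model's softmax output and averages it into an image-level OOD score. This is a minimal illustration of one common uncertainty measure, not the paper's specific implementation; the array shapes and the averaging rule are assumptions for the example.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-8):
    """Voxel-wise predictive entropy of class probabilities.

    probs: array of shape (C, H, W), class probabilities per voxel
           (each voxel's column sums to 1).
    Returns an (H, W) entropy map; its mean serves as a simple
    image-level OOD score (higher = more uncertain input).
    """
    # eps guards against log(0) for fully confident voxels.
    return -np.sum(probs * np.log(probs + eps), axis=0)

# Toy binary-segmentation case: one confident voxel, one ambiguous voxel.
confident = np.array([[[0.99]], [[0.01]]])  # shape (2, 1, 1)
ambiguous = np.array([[[0.50]], [[0.50]]])

score_in  = predictive_entropy(confident).mean()
score_ood = predictive_entropy(ambiguous).mean()
```

An ambiguous prediction yields higher entropy (near ln 2 ≈ 0.693 for a 50/50 binary voxel) than a confident one, which is the intuition behind flagging high-entropy inputs as potentially out-of-distribution.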
Benjamin Lambert, Florence Forbes, Senan Doyle, Alan Tucholka, Michel Dojat
2022-11-10