ArXiv Preprint
Convolutional networks (ConvNets) have achieved promising accuracy for
various anatomical segmentation tasks. Despite this success, these methods can
be sensitive to variations in data appearance. Considering the large variability
of scans caused by artifacts, pathologies, and scanning setups, robust ConvNets
are vital for clinical applications, yet their robustness has not been fully
explored. In this paper, we propose to mitigate this challenge by making ConvNets
aware of the underlying anatomical invariances among imaging scans.
Specifically, we introduce a fully convolutional Constraint Adoption Module
(CAM) that incorporates probabilistic atlas priors as explicit constraints on
predictions over a locally connected Conditional Random Field (CRF), which
effectively reinforces the anatomical consistency of the labeling outputs. We
design the CAM to be flexible, so it can boost various ConvNets, and compact,
so it can be co-optimized with ConvNets to learn the fusion parameters that
yield the best performance. We show that the advantage of such atlas prior
fusion is two-fold, using two brain parcellation tasks.
First, our models achieve state-of-the-art
accuracy among ConvNet-based methods on both datasets, by significantly
reducing structural abnormalities in predictions. Second, we can largely boost
the robustness of existing ConvNets, as demonstrated by: (i) testing on scans
with synthetic pathologies, and (ii) training and evaluating on scans from
different scanning setups across datasets. Our method can be easily adopted by
existing ConvNets: fine-tuning with CAM plugged in yields both accuracy and
robustness boosts.
Yuan Liang, Weinan Song, Jiawei Yang, Liang Qiu, Kun Wang, Lei He
2021-02-02
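
The core idea of fusing a probabilistic atlas prior with ConvNet predictions can be illustrated by a minimal sketch. This is not the paper's exact CAM/CRF formulation; the function name `fuse_atlas_prior` and the scalar weight `w` are hypothetical, and the sketch only shows a log-space weighted combination of per-voxel class scores with an atlas prior, akin to a unary term in a CRF.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_atlas_prior(logits, atlas_prior, w=0.5, eps=1e-8):
    """Fuse per-voxel ConvNet logits (H, W, C) with a probabilistic
    atlas prior (H, W, C) in log space; w controls prior strength.
    (Illustrative only -- the paper learns the fusion jointly.)"""
    fused = logits + w * np.log(atlas_prior + eps)
    return softmax(fused, axis=-1)

# Toy example: a 2x2 "image" with 3 labels.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 3))          # ConvNet scores
prior = softmax(rng.normal(size=(2, 2, 3)))  # atlas probabilities
probs = fuse_atlas_prior(logits, prior)
print(probs.shape)                            # (2, 2, 3)
```

Voxels where the atlas assigns a label near-zero probability are pushed away from that label, which is the intuition behind using anatomical priors to suppress structurally implausible predictions.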