ArXiv Preprint
Soham Bhosale, Arjun Krishna, Ge Wang, Klaus Mueller
2023-02-07
Medical image segmentation is an important application of medical image analysis, including the detection of diseases and abnormalities in imaging modalities such as MRI and CT. Deep learning has proven promising for this task but usually suffers from low accuracy because of the scarcity of appropriate publicly available annotated (segmented) medical datasets. In addition, the datasets that are available may have a different texture from the images that need to be segmented because of differences in dosage values or scanner properties. This paper presents a StyleGAN-driven approach for segmenting large publicly available medical datasets by using readily available, extremely small annotated datasets in similar modalities. The approach involves augmenting the small segmented dataset while eliminating texture differences between the two datasets. The small dataset is augmented by passing its images through six different StyleGANs, each trained on a different style image taken from the large non-annotated dataset to be segmented; in other words, style transfer is used to augment the training dataset. The annotations of the training dataset are thereby combined with the textures of the non-annotated dataset to generate new, anatomically sound images. The augmented dataset is then used to train a U-Net segmentation network, which shows a significant improvement in accuracy when segmenting the large non-annotated dataset.
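The augmentation step described above can be illustrated with a minimal sketch, not the authors' implementation. It assumes each of the six trained style generators is exported as a TorchScript image-to-image module; the STYLIZER_PATHS layout, file naming, and grayscale PNG format are illustrative assumptions. Each annotated image is stylized six times, once per style taken from the non-annotated target dataset, while its segmentation mask is copied unchanged.

```python
# Hypothetical sketch of the augmentation step: every (image, mask) pair in the
# small annotated dataset is stylized with six generators, each trained on a
# different style image from the large non-annotated dataset. File layout and
# the image-to-image stylizer interface are assumptions, not the authors' code.
from pathlib import Path

import torch
import torchvision.transforms.functional as TF
from PIL import Image

STYLIZER_PATHS = [f"stylizers/style_{i}.pt" for i in range(6)]  # assumed layout


def augment_dataset(image_dir: str, mask_dir: str, out_dir: str) -> None:
    stylizers = [torch.jit.load(p).eval() for p in STYLIZER_PATHS]
    out = Path(out_dir)
    (out / "images").mkdir(parents=True, exist_ok=True)
    (out / "masks").mkdir(parents=True, exist_ok=True)

    for img_path in sorted(Path(image_dir).glob("*.png")):
        image = TF.to_tensor(Image.open(img_path).convert("L")).unsqueeze(0)
        mask_path = Path(mask_dir) / img_path.name
        for i, g in enumerate(stylizers):
            with torch.no_grad():
                stylized = g(image)  # texture changes; anatomy is preserved
            TF.to_pil_image(stylized.squeeze(0).clamp(0, 1)).save(
                out / "images" / f"{img_path.stem}_style{i}.png")
            # Style transfer does not move anatomical structures, so the
            # original mask stays valid and is copied for each variant.
            Image.open(mask_path).save(
                out / "masks" / f"{img_path.stem}_style{i}.png")
```

Because style transfer alters texture but not anatomy, the original masks remain valid labels for every stylized variant, which is what lets the six-fold augmentation also serve to match the texture of the target dataset.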
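The downstream segmentation step could then be sketched as below. The UNet and SegmentationDataset classes are hypothetical placeholders for any standard implementation, and the binary cross-entropy loss, learning rate, and epoch count are illustrative choices rather than the paper's reported settings.

```python
# Minimal sketch of training a U-Net on the augmented dataset. The UNet class
# and SegmentationDataset are placeholders for any standard implementation;
# loss and hyperparameters are illustrative assumptions, not the paper's.
import torch
from torch.utils.data import DataLoader

from my_models import UNet                # hypothetical U-Net implementation
from my_data import SegmentationDataset   # yields (image, mask) float tensors

device = "cuda" if torch.cuda.is_available() else "cpu"
model = UNet(in_channels=1, out_channels=1).to(device)
loader = DataLoader(SegmentationDataset("augmented/images", "augmented/masks"),
                    batch_size=8, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()  # binary foreground/background masks

for epoch in range(50):
    for images, masks in loader:
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), masks)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Training on the stylized copies rather than the original small dataset is what exposes the network to the target dataset's textures, so at test time the large non-annotated images no longer look out-of-distribution.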