
In Journal of Imaging

Self-supervised learning approaches have seen success transferring between similar medical imaging datasets; however, there has been no large-scale attempt to compare the transferability of self-supervised models against each other on medical images. In this study, we compare the generalisability of seven self-supervised models, two of which were trained in-domain, against supervised baselines across eight different medical datasets. We find that ImageNet-pretrained self-supervised models are more generalisable than their supervised counterparts, scoring up to 10% better on medical classification tasks. The two in-domain pretrained models outperformed the other models by over 20% on in-domain tasks; however, they suffered a significant loss of accuracy on all other tasks. Our investigation of the feature representations suggests that this trend may be due to the models learning to focus too heavily on specific areas.
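Comparisons of this kind are commonly run by freezing each pretrained backbone and training only a lightweight classifier on the target medical dataset, so that classification accuracy reflects the quality of the transferred features. The sketch below illustrates that linear-evaluation protocol; the ResNet-50 backbone, five-class dataset, and optimizer settings are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of linear evaluation on a frozen pretrained backbone.
# The ResNet-50 / 5-class setup here is an assumption for illustration,
# not the authors' exact experimental configuration.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-50 and drop its classification head.
# A self-supervised checkpoint (e.g. SimCLR or BYOL weights) would be
# loaded the same way via load_state_dict on a matching architecture.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()           # expose the 2048-d feature vector
for p in backbone.parameters():
    p.requires_grad = False           # freeze: only the probe is trained
backbone.eval()

num_classes = 5                       # hypothetical medical dataset
probe = nn.Linear(2048, num_classes)  # linear classifier on frozen features
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step of the linear probe on a batch."""
    with torch.no_grad():             # gradients never reach the backbone
        feats = backbone(images)
    logits = probe(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the backbone is frozen, differences in probe accuracy across checkpoints isolate how transferable each model's learned representation is, which is the quantity the study compares.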

Jonah Anton, Liam Castelli, Mun Fai Chan, Mathilde Outters, Wan Hee Tang, Venus Cheung, Pancham Shukla, Rahee Walambe, Ketan Kotecha

2022-Dec-01

BYOL, MoCo, PIRL, SwAV, SimCLR, image classification, medical imaging, self-supervised learning