
In IEEE Transactions on Medical Imaging; h5-index 74.0

As labeled anomalous medical images are usually difficult to acquire, especially for rare diseases, deep learning based methods, which heavily rely on large amounts of labeled data, cannot yield satisfactory performance. Compared with anomalous data, normal images, which require no lesion annotation, are much easier to collect. In this paper, we propose an anomaly detection framework, namely SALAD, which extracts Self-supervised and trAnsLation-consistent features for Anomaly Detection. The proposed SALAD is a reconstruction-based method that learns the manifold of normal data through an encode-and-reconstruct translation between the image and latent spaces. In particular, two constraints (i.e., a structure similarity loss and a center constraint loss) are proposed to regularize the cross-space (i.e., image and feature) translation, encouraging the model to learn translation-consistent and representative features from the normal data. Furthermore, a self-supervised learning module is integrated into our framework to further boost the anomaly detection accuracy by deeply exploiting useful information from the raw normal data. An anomaly score, as a measure to separate anomalous data from healthy data, is constructed based on the learned self-supervised and translation-consistent features. Extensive experiments are conducted on optical coherence tomography (OCT) and chest X-ray datasets. The experimental results demonstrate the effectiveness of our approach.
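
For illustration, below is a minimal sketch (in PyTorch) of how a reconstruction-plus-feature-consistency anomaly score of this general kind can be computed. The toy encoder/decoder, latent_dim, and lambda_feat weighting are assumptions made for demonstration only; they do not reproduce the actual SALAD networks, the structure similarity loss, the center constraint loss, or the self-supervised module described in the paper.

import torch
import torch.nn as nn

# Toy stand-ins for the encode-and-reconstruct translation (hypothetical, not the paper's networks).
latent_dim = 64  # assumed latent size, for illustration only
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, latent_dim))                  # image -> latent
decoder = nn.Sequential(nn.Linear(latent_dim, 32 * 32), nn.Unflatten(1, (1, 32, 32)))  # latent -> image

def anomaly_score(x, lambda_feat=1.0):
    """Higher score = more anomalous; combines image- and feature-space errors."""
    z = encoder(x)          # encode the input image
    x_hat = decoder(z)      # reconstruct it from the latent code
    z_hat = encoder(x_hat)  # re-encode to measure translation consistency
    recon_err = torch.mean((x - x_hat) ** 2, dim=(1, 2, 3))  # image-space reconstruction error
    feat_err = torch.mean((z - z_hat) ** 2, dim=1)           # feature-space consistency error
    return recon_err + lambda_feat * feat_err

scores = anomaly_score(torch.randn(4, 1, 32, 32))  # e.g., four grayscale 32x32 images

Since the model is trained only on normal data, anomalous inputs are expected to reconstruct poorly in both spaces and therefore receive higher scores.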

Zhao He, Li Yuexiang, He Nanjun, Ma Kai, Fang Leyuan, Li Huiqi, Zheng Yefeng

2021-Jul-01