
General

Deep Learning for Diagnosis and Segmentation of Pneumothorax: The Results on the Kaggle Competition and Validation Against Radiologists.

In IEEE journal of biomedical and health informatics

Pneumothorax is a potentially life-threatening condition that requires urgent diagnosis and treatment. Chest X-ray is the diagnostic modality of choice when pneumothorax is suspected. Computer-aided diagnosis of pneumothorax has received a dramatic boost in recent years due to advances in deep learning and the first public pneumothorax diagnosis competition, with 15,257 chest X-rays manually annotated by a team of 19 radiologists. This paper presents one of the top frameworks that participated in the competition. The framework investigates the benefits of combining the U-Net convolutional neural network with various backbones, namely ResNet34, SE-ResNext50, SE-ResNext101, and DenseNet121. The paper presents step-by-step instructions for applying the framework, including data augmentation and different pre- and post-processing steps. The framework achieved a Dice coefficient of 0.8574. The second contribution of the paper is a comparison of the deep learning framework against three experienced radiologists on pneumothorax detection and segmentation in challenging X-rays. We also evaluated how the diagnostic confidence of radiologists affects the accuracy of the diagnosis and found that the deep learning framework and the radiologists find the same X-rays easy or difficult to analyze (p-value < 1e-4). Finally, the methodology of all top-performing teams on the competition leaderboard was analyzed to identify consistent methodological patterns of accurate pneumothorax detection and segmentation.
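For reference, the Dice coefficient used to score the competition can be computed as in the following minimal NumPy sketch for binary masks; the convention of scoring 1.0 when both masks are empty is an assumption, not a documented competition rule:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient for binary segmentation masks.

    pred, target: boolean or {0, 1} arrays of the same shape.
    Returns a value in [0, 1]; returns 1.0 when both masks are empty
    (a common convention for negative cases -- an assumption here).
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    if not pred.any() and not target.any():
        return 1.0
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```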

Tolkachev Alexey, Sirazitdinov Ilyas, Kholiavchenko Maksym, Mustafaev Tamerlan, Ibragimov Bulat

2020-Sep-21

Dermatology

Digital Biopsy with Fluorescence Confocal Microscope for Effective Real-time Diagnosis of Prostate Cancer: A Prospective, Comparative Study.

In European urology oncology

BACKGROUND: Microscopic analysis of tissue is the gold standard for cancer detection. Hematoxylin-eosin (HE) staining for the reporting of prostate biopsy (PB) is conventionally based on fixation, processing, acquisition of glass slides, and analysis with an analog microscope by a local pathologist. Digitalization and real-time remote access to images could enhance the reporting process and form the basis for artificial intelligence and machine learning. Fluorescence confocal microscopy (FCM), a novel optical technology, enables immediate digital image acquisition at an almost HE-like resolution without requiring conventional processing.

OBJECTIVE: The aim of this study is to assess the diagnostic ability of FCM for prostate cancer (PCa) identification and grading from PB.

DESIGN, SETTING, AND PARTICIPANTS: This is a prospective, comparative study evaluating FCM and HE for prostate tissue interpretation. PBs were performed (March to June 2019) at a single coordinating unit on consecutive patients with clinical and laboratory indications for assessment. FCM digital images (n = 427) were acquired immediately from PBs (from 54 patients) and stored; the corresponding glass slides (n = 427), which underwent conventional HE processing, were digitalized and stored as well. A panel of four international pathologists with diverse backgrounds participated in the study and was asked to evaluate all images. The pathologists had no FCM expertise and were blinded to clinical data, HE interpretation, and each other's evaluations. All images, FCM and corresponding HE, were assessed for the presence or absence of cancer tissue and for cancer grading, when appropriate. Reporting was gathered via a dedicated web platform.

OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: The primary endpoint is to evaluate the ability of FCM to identify cancer tissue in PB cores (per-slice analysis). FCM outcomes are interpreted by their level of agreement with HE (κ value). Additionally, both FCM and HE outcomes are assessed for interobserver agreement on cancer detection (presence vs absence of cancer) and on the discrimination between International Society of Urological Pathology (ISUP) grade = 1 and ISUP grade > 1 (secondary endpoint).

RESULTS AND LIMITATIONS: Overall, 854 images were evaluated by each pathologist. PCa detection with FCM was almost perfectly aligned with the HE final reports (95.1% correct diagnoses with FCM, κ = 0.84). Inter-rater agreement among pathologists was almost perfect for PCa detection with both HE and FCM (0.98 for HE, κ = 0.95; 0.95 for FCM, κ = 0.86); for cancer grade attribution, only moderate agreement was reached with both HE and FCM (HE, κ = 0.47; FCM, κ = 0.49).
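The pairwise agreement statistics reported above are typically computed with Cohen's kappa; a minimal sketch using scikit-learn, with hypothetical per-slide reader calls (the study's actual analysis pipeline is not described at this level of detail):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-slide calls from two readers (1 = cancer present, 0 = absent).
reader_a = [1, 0, 1, 1, 0, 0, 1, 0]
reader_b = [1, 0, 1, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

For a panel of more than two raters, a multi-rater statistic such as Fleiss' kappa would be the usual choice.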

CONCLUSIONS: FCM provides a microscopic, immediate, and seemingly reliable diagnosis for PCa. The real-time acquisition of digital images, without requiring conventional processing, offers opportunities for immediate sharing and reporting. FCM is a promising tool for improvements in cancer diagnostic pathways.

PATIENT SUMMARY: Fluorescence confocal microscopy may provide an immediate, microscopic, and apparently reliable diagnosis of prostate cancer on prostate biopsy, overcoming the standard turnaround time of conventional processing and interpretation.

Rocco Bernardo, Sighinolfi Maria Chiara, Sandri Marco, Spandri Valentina, Cimadamore Alessia, Volavsek Metka, Mazzucchelli Roberta, Lopez-Beltran Antonio, Eissa Ahmed, Bertoni Laura, Azzoni Paola, Reggiani Bonetti Luca, Maiorana Antonino, Puliatti Stefano, Micali Salvatore, Paterlini Maurizio, Iseppi Andrea, Rocco Francesco, Pellacani Giovanni, Chester Johanna, Bianchi Giampaolo, Montironi Rodolfo

2020-Sep-17

Digital pathology, Fluorescence confocal microscope, Prostate biopsy

Radiology

Temporal changes of COVID-19 pneumonia by mass evaluation using CT: a retrospective multi-center study.

In Annals of translational medicine

Background: Coronavirus disease 2019 (COVID-19) has spread widely worldwide and caused a pandemic. Chest CT has been found to play an important role in the diagnosis and management of COVID-19. However, quantitative assessment of the temporal changes of COVID-19 pneumonia on CT has not been fully elucidated. The purpose of this study was to perform a longitudinal study to quantitatively assess the temporal changes of COVID-19 pneumonia.

Methods: This retrospective, multi-center study included patients with laboratory-confirmed COVID-19 infection from 16 hospitals between January 19 and March 27, 2020. Mass was used as a quantitative measure of the dynamic changes of pulmonary involvement in patients with COVID-19. Artificial intelligence (AI) was employed as the image segmentation and analysis tool for calculating the mass of pulmonary involvement.
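The abstract does not spell out the mass formula, but CT-based mass quantification commonly combines lesion volume with a density estimate derived from Hounsfield units; the sketch below illustrates that standard approximation (density in g/mL ≈ 1 + HU/1000) and is not necessarily the study's exact method:

```python
import numpy as np

def lesion_mass_grams(hu: np.ndarray, lesion_mask: np.ndarray,
                      voxel_volume_ml: float) -> float:
    """Approximate mass (g) of segmented pulmonary involvement.

    Uses the common CT approximation density (g/mL) ~= 1 + HU/1000
    (air at -1000 HU -> 0 g/mL, water at 0 HU -> 1 g/mL). This is a
    generic sketch, not the study's documented formula.
    """
    hu_lesion = hu[lesion_mask.astype(bool)]
    density = 1.0 + np.clip(hu_lesion, -1000, 100) / 1000.0  # g/mL per voxel
    return float(density.sum() * voxel_volume_ml)
```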

Results: A total of 581 confirmed patients with 1,309 chest CT examinations were included in this study. The median age was 46 years (IQR, 35-55; range, 4-87 years), and 311 (53.5%) patients were male. The mass of pulmonary involvement peaked on day 10 after the onset of initial symptoms. Furthermore, the mass of pulmonary involvement in older patients (>45 years) was significantly more severe (P<0.001) and peaked later (day 11 vs. day 8) than in younger patients (≤45 years). In addition, there were no significant differences in peak time (day 10 vs. day 10) or median mass (P=0.679) of pulmonary involvement between male and female patients.

Conclusions: Pulmonary involvement peaked on day 10 after the onset of initial symptoms in patients with COVID-19. Furthermore, pulmonary involvement in older patients was more severe and peaked later than in younger patients. These findings suggest that AI-based quantitative mass evaluation of COVID-19 pneumonia holds great potential for monitoring disease progression.

Wang Chao, Huang Peiyu, Wang Lihua, Shen Zhujing, Lin Bin, Wang Qiyuan, Zhao Tongtong, Zheng Hanpeng, Ji Wenbin, Gao Yuantong, Xia Junli, Cheng Jianmin, Ma Jianbing, Liu Jun, Liu Yongqiang, Su Miaoguang, Ruan Guixiang, Shu Jiner, Ren Dawei, Zhao Zhenhua, Yao Weigen, Yang Yunjun, Liu Bo, Zhang Minming

2020-Aug

Coronavirus disease 2019 (COVID-19), artificial intelligence (AI), chest CT, temporal changes

General

The Use of AI for Thermal Emotion Recognition: A Review of Problems and Limitations in Standard Design and Data

ArXiv Preprint

With the increased attention on thermal imagery for COVID-19 screening, the public sector may believe there are new opportunities to exploit thermal imaging as a modality for computer vision and AI. Thermal physiology research has been ongoing since the late nineties. This research lies at the intersection of medicine, psychology, machine learning, optics, and affective computing. We review the known factors of thermal vs. RGB imaging for facial emotion recognition, but we also propose that thermal imagery may provide a semi-anonymous modality for computer vision, unlike RGB, which has been plagued by misuse in facial recognition. However, the transition to adopting thermal imagery as a source for any human-centered AI task is not easy and relies on the availability of high-fidelity data sources across multiple demographics, as well as thorough validation. This paper takes the reader on a short review of machine learning in thermal FER and the limitations of collecting and developing thermal FER data for AI training. Our motivation is to provide an introductory overview of recent advances in thermal FER and to stimulate conversation about the limitations of current datasets.

Catherine Ordun, Edward Raff, Sanjay Purushotham

2020-09-22

General

Deep Temporal-Spatial Feature Learning for Motor Imagery-based Brain-Computer Interfaces.

In IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society

Motor imagery (MI) decoding is an important part of brain-computer interface (BCI) research, translating the subject's intentions into commands that external devices can execute. Traditional methods for discriminative feature extraction, such as common spatial pattern (CSP) and filter bank common spatial pattern (FBCSP), focus only on the energy features of the electroencephalography (EEG) signal and thus neglect further exploration of temporal information. However, the temporal information of spatially filtered EEG may be critical to improving the performance of MI decoding. In this paper, we propose a deep learning approach termed filter-bank spatial filtering and temporal-spatial convolutional neural network (FBSF-TSCNN) for MI decoding, where the FBSF block transforms the raw EEG signals into an appropriate intermediate EEG representation, and the TSCNN block then decodes the intermediate EEG signals. Moreover, a novel stage-wise training strategy is proposed to mitigate the difficult optimization problem of the TSCNN block when training samples are insufficient. First, the feature-extraction layers are trained by optimizing the triplet loss. Then, the classification layers are trained by optimizing the cross-entropy loss. Finally, the entire network (TSCNN) is fine-tuned by the back-propagation (BP) algorithm. Experimental evaluations on the BCI IV 2a and SMR-BCI datasets reveal that the proposed stage-wise training strategy yields significant performance improvement compared with conventional end-to-end training, and the proposed approach is comparable with state-of-the-art methods.
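The three-stage training strategy described above can be sketched in PyTorch roughly as follows; the modules, data loaders, learning rates, and epoch counts are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

# feature_net and classifier are assumed nn.Module instances standing in for
# the paper's TSCNN feature-extraction and classification layers.
# triplet_loader yields (anchor, positive, negative) batches;
# label_loader yields (inputs, labels) batches.

def stagewise_train(feature_net: nn.Module, classifier: nn.Module,
                    triplet_loader, label_loader, epochs: int = 10):
    triplet_loss = nn.TripletMarginLoss(margin=1.0)
    ce_loss = nn.CrossEntropyLoss()

    # Stage 1: train the feature-extraction layers with the triplet loss.
    opt = torch.optim.Adam(feature_net.parameters(), lr=1e-3)
    for _ in range(epochs):
        for anchor, positive, negative in triplet_loader:
            opt.zero_grad()
            loss = triplet_loss(feature_net(anchor), feature_net(positive),
                                feature_net(negative))
            loss.backward()
            opt.step()

    # Stage 2: freeze the features and train only the classification layers
    # with the cross-entropy loss.
    for p in feature_net.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in label_loader:
            opt.zero_grad()
            loss = ce_loss(classifier(feature_net(x)), y)
            loss.backward()
            opt.step()

    # Stage 3: unfreeze everything and fine-tune the entire network end to end.
    for p in feature_net.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(
        list(feature_net.parameters()) + list(classifier.parameters()), lr=1e-4)
    for _ in range(epochs):
        for x, y in label_loader:
            opt.zero_grad()
            loss = ce_loss(classifier(feature_net(x)), y)
            loss.backward()
            opt.step()
```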

Chen Junjian, Yu Zhu Liang, Gu Zhenghui, Li Yuanqing

2020-Sep-21

General

STFlow: Self-Taught Optical Flow Estimation Using Pseudo Labels.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

Deep learning of optical flow has been an active area of research owing to its empirical success. Because accurate dense correspondence labels are difficult to obtain, unsupervised learning of optical flow has drawn more and more attention, yet its accuracy is still far from satisfactory. Holding the philosophy that better estimation models can be trained with better-approximated labels, which in turn can be obtained from better estimation models, we propose a self-taught learning framework that continually improves accuracy using self-generated pseudo labels. The estimated optical flow is first filtered by bidirectional flow-consistency validation, and occlusion-aware dense labels are then generated by edge-aware interpolation from selected sparse matches. Moreover, combining a reconstruction loss with a regression loss on the generated pseudo labels further improves performance. The experimental results demonstrate that our models achieve state-of-the-art results among unsupervised methods on the public KITTI, MPI-Sintel, and Flying Chairs datasets.
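The bidirectional flow-consistency validation mentioned above is commonly implemented as a forward-backward check; a minimal NumPy sketch with nearest-neighbor sampling and a widely used thresholding rule (the paper's exact thresholds and interpolation may differ):

```python
import numpy as np

def fb_consistency_mask(flow_fw: np.ndarray, flow_bw: np.ndarray,
                        alpha: float = 0.01, beta: float = 0.5) -> np.ndarray:
    """Forward-backward consistency check for optical flow.

    flow_fw, flow_bw: (H, W, 2) forward and backward flow fields.
    Returns a boolean mask of pixels whose forward flow roughly cancels
    the backward flow sampled at the forward-warped location, using the
    common rule |f + b∘w|^2 < alpha * (|f|^2 + |b∘w|^2) + beta.
    """
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each source pixel lands in the target frame (nearest neighbor).
    xt = np.clip(np.rint(xs + flow_fw[..., 0]), 0, w - 1).astype(int)
    yt = np.clip(np.rint(ys + flow_fw[..., 1]), 0, h - 1).astype(int)
    # Backward flow sampled at the forward-warped locations.
    bw_warped = flow_bw[yt, xt]
    diff = np.sum((flow_fw + bw_warped) ** 2, axis=-1)
    mag = np.sum(flow_fw ** 2, axis=-1) + np.sum(bw_warped ** 2, axis=-1)
    return diff < alpha * mag + beta
```

Pixels failing the check are treated as occluded or unreliable and excluded from the pseudo labels.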

Ren Zhe, Luo Wenhan, Yan Junchi, Liao Wenlong, Yang Xiaokang, Yuille Alan, Zha Hongyuan

2020-Sep-21