Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

Radiology

Automated Segmentation of Thyroid Nodule, Gland, and Cystic Components From Ultrasound Images Using Deep Learning.

In IEEE Access: Practical Innovations, Open Solutions

Sonographic features associated with the margins, shape, size, and volume of thyroid nodules are used to assess their risk of malignancy. Automatically segmenting nodules from the normal thyroid gland would enable automated estimation of these features. A novel multi-output convolutional neural network algorithm with dilated convolutional layers is presented to segment thyroid nodules, cystic components inside the nodules, and normal thyroid gland from clinical ultrasound B-mode scans. A prospective study was conducted, collecting data from 234 patients undergoing a thyroid ultrasound exam before biopsy. The training and validation sets encompassed 188 patients in total; the testing set consisted of 48 patients. The algorithm effectively segmented thyroid anatomy into nodules, normal gland, and cystic components, achieving a mean Dice coefficient of 0.76, a mean true positive fraction of 0.90, and a mean false positive fraction of 1.61×10⁻⁶. These values are on par with a conventional seeded algorithm. The proposed algorithm eliminates the need for a seed in the segmentation process, automatically detecting and segmenting thyroid nodules and cystic components. The detection rate for thyroid nodules and cystic components was 82% and 44%, respectively. The inference time per image, per fold, was 107 ms. The mean error in volume estimation of thyroid nodules for five selected cases was 7.47%. The algorithm can be used for detection, segmentation, size and volume estimation, and generation of thyroid maps for thyroid nodules. It has applications in point-of-care and mobile health monitoring, improving workflow, reducing localization time, and assisting sonographers with limited expertise.
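
The Dice coefficient and true positive fraction reported above are standard overlap metrics between a predicted and a ground-truth segmentation mask. As an illustration only (not the paper's code), a minimal NumPy sketch of both metrics on toy binary masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / total

def true_positive_fraction(pred, truth):
    """Fraction of ground-truth pixels recovered by the prediction (sensitivity)."""
    truth = truth.astype(bool)
    return np.logical_and(pred.astype(bool), truth).sum() / truth.sum()

# Toy 4x4 masks: predicted nodule vs. ground truth (one extra pixel predicted)
truth = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 0.889
print(true_positive_fraction(pred, truth))      # 1.0
```

In practice these are computed per class (nodule, gland, cyst) and averaged over the test set, which is how the paper's mean values arise.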

Kumar Viksit, Webb Jeremy, Gregory Adriana, Meixner Duane D, Knudsen John M, Callstrom Matthew, Fatemi Mostafa, Alizad Azra


Deep learning, segmentation, thyroid nodule, thyroid nodule volume, ultrasound

Surgery

Automated Quality Assessment and Image Selection of Ultra-Widefield Fluorescein Angiography Images through Deep Learning.

In Translational vision science & technology

Purpose : Numerous angiographic images with high variability in quality are obtained during each ultra-widefield fluorescein angiography (UWFA) acquisition session. This study evaluated the feasibility of an automated system for image quality classification and selection using deep learning.

Methods : The training set comprised 3543 UWFA images. Ground-truth image quality was assessed by expert image review and classified into one of four categories (ungradable, poor, good, or best) based on contrast, field of view, media opacity, and obscuration from external features. Two test sets, one of 392 randomly selected images held out from the training set and an independent balanced set of 50 ungradable/poor and 50 good/best images, assessed model performance and bias.

Results : In the randomly selected and balanced test sets, the automated quality assessment system showed overall accuracy of 89.0% and 94.0% for distinguishing between gradable and ungradable images, with sensitivity of 90.5% and 98.6% and specificity of 87.0% and 81.5%, respectively. The receiver operating characteristic curve measuring performance of two-class classification (ungradable and gradable) had an area under the curve of 0.920 in the randomly selected set and 0.980 in the balanced set.
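
The two-class AUC reported above can be computed directly from model scores as the probability that a randomly chosen gradable image outscores a randomly chosen ungradable one (the Mann-Whitney formulation of ROC AUC). A minimal sketch with made-up scores, not the study's code:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: probability that a randomly
    chosen positive (gradable) score exceeds a negative (ungradable) one,
    counting ties as half."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.7, 0.6]  # hypothetical model scores for gradable images
neg = [0.5, 0.4, 0.7]       # hypothetical scores for ungradable images
print(roc_auc(pos, neg))    # 0.875
```

The same pairwise definition underlies the 0.920 and 0.980 AUC values quoted for the two test sets; production code would use a rank-based implementation rather than the quadratic double loop shown here.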

Conclusions : A deep learning classification model demonstrates the feasibility of automatic classification of UWFA image quality. Clinical application of this system might greatly reduce manual image grading workload, allow quality-based image presentation to clinicians, and provide near-instantaneous feedback on image quality during image acquisition for photographers.

Translational Relevance : The UWFA image quality classification tool may significantly reduce manual grading for clinical- and research-related work, providing instantaneous and reliable feedback on image quality.

Li Henry H, Abraham Joseph R, Sevgi Duriye Damla, Srivastava Sunil K, Hach Jenna M, Whitney Jon, Vasanji Amit, Reese Jamie L, Ehlers Justis P


diabetic retinopathy, fluorescein angiography, retinal blood flow, retinal vasculature

General

PI-Net: A Deep Learning Approach to Extract Topological Persistence Images.

In Conference on Computer Vision and Pattern Recognition Workshops. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Workshops

Topological features such as persistence diagrams, and their functional approximations such as persistence images (PIs), have shown substantial promise for machine learning and computer vision applications. This is largely attributed to the robustness that topological representations provide against physical nuisance variables seen in real-world data, such as viewpoint and illumination. However, key bottlenecks to their large-scale adoption are their computational expense and the difficulty of incorporating them in a differentiable architecture. In this paper we take an important step toward mitigating these bottlenecks by proposing a novel one-step approach to generate PIs directly from the input data. We design two separate convolutional neural network architectures, one that takes multi-variate time series signals as input and another that accepts multi-channel images, called Signal PI-Net and Image PI-Net respectively. To the best of our knowledge, we are the first to propose the use of deep learning for computing topological features directly from data. We explore the proposed PI-Net architectures on two applications: human activity recognition using tri-axial accelerometer data and image classification. We demonstrate the ease of fusing PIs into supervised deep learning architectures and a speed-up of several orders of magnitude for extracting PIs from data. Our code is available at
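
For context, a persistence image is a rasterization of a persistence diagram: each (birth, death) point is mapped to (birth, persistence) coordinates and spread as a persistence-weighted Gaussian on a fixed grid, giving the vectorized target that PI-Net learns to predict. A simplified NumPy sketch (grid size, weighting, and bandwidth are illustrative choices, not the paper's settings):

```python
import numpy as np

def persistence_image(diagram, res=8, sigma=0.1, span=1.0):
    """Rasterize a persistence diagram (list of (birth, death) pairs) into a
    persistence image: each point becomes a persistence-weighted Gaussian
    centered at (birth, persistence) on a res x res grid over [0, span]^2."""
    grid = np.linspace(0.0, span, res)
    bx, py = np.meshgrid(grid, grid)  # birth axis (cols), persistence axis (rows)
    img = np.zeros((res, res))
    for birth, death in diagram:
        pers = death - birth
        img += pers * np.exp(-((bx - birth) ** 2 + (py - pers) ** 2)
                             / (2 * sigma ** 2))
    return img

diagram = [(0.1, 0.6), (0.2, 0.3)]  # toy diagram: one persistent, one noisy point
img = persistence_image(diagram)
print(img.shape)  # (8, 8)
```

Computing such images conventionally requires running a persistent homology algorithm first; PI-Net's contribution is a network that maps raw signals or images to this fixed-size representation in a single forward pass.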

Som Anirudh, Choi Hongjun, Ramamurthy Karthikeyan Natesan, Buman Matthew P, Turaga Pavan


General

Automatic speech recognition in the operating room - An essential contemporary tool or a redundant gadget? A survey evaluation among physicians in form of a qualitative study.

In Annals of medicine and surgery (2012)

Introduction : For decades, automatic speech recognition (ASR) has been the subject of research, and its range of applications has broadened. Presently, ASR among physicians is mainly used to convert speech into text, not to execute instructions in the operating room (OR). This study aimed to evaluate physicians of different surgical professions on their personal experience with and attitude towards ASR.

Methods : A 16-item survey was distributed electronically to hospitals and outpatient clinics in southern Germany addressing physicians on the potential applications of ASR in the OR.

Results : The survey was answered by 185 of 2693 physicians (response rate: 6.9%) with a mean age of 41.8 ± 9.8 years. ASR was considered desirable in the OR regardless of the field of speciality (93.7%). While only 2.7% had used ASR, 87.9% rated its future potential as high. 91.0% of those working in a university hospital would consider testing ASR, compared with 67.5% of those in non-university hospitals and practices (p = 0.001). 90.1% of respondents in strictly surgical specialities saw potential in ASR, compared with 73.7% in non-surgical specialities (p = 0.01). 58.3% of those over the age of 60 considered the use of ASR without a headset imaginable, compared with 96.3% of those under 60. There were no statistically significant differences regarding sex and professional position.

Conclusion : ASR is anticipated to be integrated into ORs in the foreseeable future and is seen as having high market potential. Our study provides information about the individual preferences of physicians from various surgical disciplines regarding ASR.

Schulte Antonia, Suarez-Ibarrola Rodrigo, Wegen Daniel, Pohlmann Philippe-Fabian, Petersen Elina, Miernik Arkadiusz


Artificial intelligence, Automatic speech recognition, Intelligent operating assistance, Machine learning, Operating room of the future, Speech understanding, Voice recognition

General

A bird's-eye view of deep learning in bioimage analysis.

In Computational and structural biotechnology journal

Deep learning with artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. In biology and medicine, too, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view of the past, present, and future developments of deep learning, starting from science at large, moving to biomedical imaging, and to bioimage analysis in particular.

Meijering Erik


Artificial neural networks, Bioimage analysis, Computer vision, Deep learning, Microscopy imaging

Oncology

Artificial intelligence (AI) and big data in cancer and precision oncology.

In Computational and structural biotechnology journal

Artificial intelligence (AI) and machine learning have significantly influenced many facets of the healthcare sector. Advances in technology have paved the way for analysis of big datasets in a cost- and time-effective manner. Clinical oncology and research are reaping the benefits of AI. The burden of cancer is a global phenomenon. Efforts to reduce mortality rates require early diagnosis for effective therapeutic intervention. However, metastatic and recurrent cancers evolve and acquire drug resistance. It is imperative to detect novel biomarkers that induce drug resistance and to identify therapeutic targets to enhance treatment regimens. The introduction of next-generation sequencing (NGS) platforms addresses these demands and has revolutionised the future of precision oncology. NGS offers several clinical applications that are important for risk prediction, early detection of disease, diagnosis by sequencing and medical imaging, accurate prognosis, biomarker identification, and identification of therapeutic targets for novel drug discovery. NGS generates large datasets that demand specialised bioinformatics resources to analyse the data that are relevant and clinically significant. Through these applications of AI, cancer diagnostics and prognostic prediction are enhanced with NGS and medical imaging that delivers high-resolution images. Despite these technological improvements, AI has challenges and limitations, and the clinical application of NGS remains to be validated. By continuing to enhance the progression of innovation and technology, the future of AI and precision oncology shows great promise.

Dlamini Zodwa, Francies Flavia Zita, Hull Rodney, Marima Rahaba


AI (Artificial Intelligence), Big datasets, CNV (Copy Number Variations), Deep learning, Diagnosis, Digital pathology, FFPE (Formalin-Fixed Paraffin-Embedded), LYNA (LYmph Node Assistant), ML (Machine Learning), Medical imaging, NGS (Next Generation Sequencing) and bioinformatics, Precision oncology, Prognosis and drug discovery, TCGA (The Cancer Genome Atlas), Treatment, WSI (Whole Slide Imaging)