General

A Decision Tree-Initialised Neuro-fuzzy Approach for Clinical Decision Support.

In Artificial Intelligence in Medicine; h5-index 34.0

Apart from the need for superior accuracy, healthcare applications of intelligent systems also demand interpretable machine learning models that allow clinicians to interrogate and validate the extracted medical knowledge. Fuzzy rule-based models are generally considered interpretable, as they can reflect the associations between medical conditions and their symptoms through linguistic if-then statements. Systems built on fuzzy sets are particularly appealing for medical applications since they tolerate the vague and imprecise concepts that are often embedded in medical entities such as symptom descriptions and test results. They facilitate an approximate reasoning framework that mimics human reasoning and supports the linguistic delivery of medical expertise, often expressed in statements such as 'weight low' or 'glucose level high' when describing symptoms. This paper proposes an approach for data-driven learning of accurate and interpretable fuzzy rule bases for clinical decision support. The approach starts by generating a crisp rule base through a decision tree learning mechanism capable of capturing simple rule structures. The crisp rule base is then transformed into a fuzzy rule base, which forms the input to an adaptive network-based fuzzy inference system (ANFIS) that further optimises the parameters of both rule antecedents and consequents. Experimental studies on popular medical data benchmarks demonstrate that the proposed approach learns compact rule bases involving simple rule antecedents, with statistically better or comparable performance to that achieved by state-of-the-art fuzzy classifiers.
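
For readers who want to experiment with the rule-initialisation idea, below is a minimal sketch (not the authors' implementation) that trains a scikit-learn decision tree, walks its structure to extract crisp if-then rules, and softens each threshold into a Gaussian membership function of the kind an ANFIS-style optimiser would then tune. The `fuzzify` helper and the spread heuristic are illustrative assumptions, and the ANFIS optimisation step itself is omitted.

```python
# Sketch only: extract crisp rules from a decision tree and soften the
# thresholds into fuzzy membership functions (illustration, not the paper's
# exact pipeline; the ANFIS tuning stage is omitted).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

def extract_rules(node=0, conditions=()):
    """Walk the fitted tree and yield (conditions, predicted_class) rules."""
    if t.children_left[node] == -1:  # leaf node
        yield conditions, int(np.argmax(t.value[node]))
        return
    feat, thr = t.feature[node], t.threshold[node]
    yield from extract_rules(t.children_left[node], conditions + ((feat, "<=", thr),))
    yield from extract_rules(t.children_right[node], conditions + ((feat, ">", thr),))

def fuzzify(conditions, spreads):
    """Replace each crisp threshold with a Gaussian membership function centred
    on the threshold; the spread heuristic (feature std / 4) is an assumption
    that an ANFIS-style optimiser would refine."""
    return [(feat, op, thr, spreads[feat]) for feat, op, thr in conditions]

spreads = X.std(axis=0) / 4.0
fuzzy_rule_base = [(fuzzify(conds, spreads), label) for conds, label in extract_rules()]
print(f"{len(fuzzy_rule_base)} initial fuzzy rules")
```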

Chen Tianhua, Shang Changjing, Su Pan, Keravnou-Papailiou Elpida, Zhao Yitian, Antoniou Grigoris, Shen Qiang

2021-Jan

Clinical decision support, Fuzzy rule-based systems, Medical diagnostic systems

General

Cartesian genetic programming for diagnosis of Parkinson disease through handwriting analysis: Performance vs. interpretability issues.

In Artificial Intelligence in Medicine; h5-index 34.0

In recent decades, early disease identification through non-invasive and automatic methodologies has gathered increasing interest from the scientific community. Among others, Parkinson's disease (PD) has received special attention because it is a severe and progressive neurodegenerative disease. As a consequence, early diagnosis would enable more effective and prompt care strategies that could positively influence patients' life expectancy. However, the best-performing systems implement the so-called black-box approach, which does not provide explicit rules for reaching a decision. This lack of interpretability has hampered the acceptance of such systems by clinicians and their deployment in the field. In this context, we perform a thorough comparison of different machine learning (ML) techniques whose classification results are characterized by different levels of interpretability. These techniques were applied to automatically identify PD patients through the analysis of handwriting and drawing samples. Analysis of the results shows that white-box approaches, such as Cartesian Genetic Programming and Decision Trees, achieve a twofold goal: they support the diagnosis of PD and yield explicit classification models in which only a subset of features (related to specific tasks) is identified and exploited for classification. The obtained classification models provide important insights for the design of non-invasive, inexpensive and easy-to-administer diagnostic protocols. The comparison of the different ML approaches (in terms of both accuracy and interpretability) was performed on the features extracted from the handwriting and drawing samples included in the publicly available PaHaW and NewHandPD datasets. The experimental findings show that Cartesian Genetic Programming outperforms the other white-box methods in accuracy and the black-box ones in interpretability.
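
As a rough companion to the paper's comparison, the sketch below runs a cross-validated accuracy comparison of white-box and black-box classifiers on a table of pre-extracted handwriting features; the CSV path and column names are hypothetical, and Cartesian Genetic Programming itself is not included since it has no standard scikit-learn implementation.

```python
# Sketch only: cross-validated accuracy of white-box vs black-box classifiers
# on pre-extracted handwriting features. The CSV path and column names are
# hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

df = pd.read_csv("pahaw_features.csv")          # hypothetical feature table
X, y = df.drop(columns=["label"]), df["label"]  # label: 1 = PD, 0 = control

models = {
    "decision_tree (white-box)": DecisionTreeClassifier(max_depth=4, random_state=0),
    "random_forest (black-box)": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm_rbf (black-box)": SVC(kernel="rbf", gamma="scale"),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```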

Parziale A, Senatore R, Della Cioppa A, Marcelli A

2021-Jan

Evolutionary computation, Explainable artificial intelligence, Parkinson disease

Surgery

Using interpretability approaches to update "black-box" clinical prediction models: an external validation study in nephrology.

In Artificial Intelligence in Medicine; h5-index 34.0

Despite advances in machine learning-based clinical prediction models, only a few of these models are actually deployed in clinical contexts. Among other reasons, this is due to a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for the prediction of acute kidney injury in cardiac surgery patients, initially developed on the MIMIC-III dataset, when applied to an external cohort from an American research hospital. To help account for the performance differences observed, we utilized interpretability methods based on feature importance, which allowed experts to scrutinize model behavior at both the global and local level, making it possible to gain further insights into why it did not behave as expected on the validation cohort. The knowledge gleaned during model derivation can potentially assist model updates during validation, yielding simpler and more generalizable models. We argue that interpretability methods should be considered by practitioners as a further tool to help explain performance differences and inform model updates in validation studies.
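
A minimal sketch of the kind of global feature-importance comparison described here is given below, using scikit-learn's permutation_importance on a derivation and an external validation cohort; the cohort files, feature names, and the choice of gradient boosting as the underlying model are assumptions, not the authors' pipeline.

```python
# Sketch only: compare global feature importances of a fitted AKI model on the
# derivation cohort vs an external validation cohort. Data files are
# hypothetical; permutation_importance is standard scikit-learn.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

dev = pd.read_csv("mimic_aki_features.csv")      # hypothetical derivation cohort
ext = pd.read_csv("external_aki_features.csv")   # hypothetical validation cohort
features = [c for c in dev.columns if c != "aki"]

model = GradientBoostingClassifier(random_state=0)
model.fit(dev[features], dev["aki"])

for name, cohort in [("derivation", dev), ("validation", ext)]:
    imp = permutation_importance(model, cohort[features], cohort["aki"],
                                 n_repeats=10, random_state=0, scoring="roc_auc")
    top = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])[:5]
    print(name, [(f, round(v, 3)) for f, v in top])
```

Divergence between the two ranked lists is one concrete signal that the model relies on features whose distribution or meaning shifts in the external cohort.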

da Cruz Harry Freitas, Pfahringer Boris, Martensen Tom, Schneider Frederic, Meyer Alexander, Böttinger Erwin, Schapranow Matthieu-P

2021-Jan

Clinical predictive modeling, Interpretability methods, Nephrology, Validation

General

Collaborative Federated Learning For Healthcare: Multi-Modal COVID-19 Diagnosis at the Edge

ArXiv Preprint

Despite significant improvements over the last few years, cloud-based healthcare applications continue to suffer from poor adoption due to their limitations in meeting stringent security, privacy, and quality-of-service requirements (such as low latency). The edge computing trend, along with techniques for distributed machine learning such as federated learning, has gained popularity as a viable solution in such settings. In this paper, we leverage the capabilities of edge computing in medicine by analyzing and evaluating the potential of intelligent processing of clinical visual data at the edge, allowing remote healthcare centers that lack advanced diagnostic facilities to benefit from multi-modal data securely. To this end, we utilize the emerging concept of clustered federated learning (CFL) for automatic diagnosis of COVID-19. Such an automated system can help reduce the burden on healthcare systems across the world, which have been under considerable stress since the COVID-19 pandemic emerged in late 2019. We evaluate the performance of the proposed framework under different experimental setups on two benchmark datasets. Promising results are obtained on both datasets: performance is comparable to the central baseline, where specialized models (i.e., one per type of COVID-19 imagery) are trained with centralized data, and improvements of 16% and 11% in overall F1-score are achieved over the multi-modal model trained in the conventional federated learning setup on the X-ray and ultrasound datasets, respectively. We also discuss in detail the associated challenges, technologies, tools, and techniques available for deploying ML at the edge in such privacy- and delay-sensitive applications.
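
To make the clustered aggregation idea concrete, here is a minimal numpy sketch of per-cluster federated averaging, with clients grouped by imaging modality; this is one simple reading of CFL for illustration only, and the paper's actual clustering criterion, models, and training loop are not reproduced.

```python
# Sketch only: weight averaging within clusters of clients (grouped here by
# imaging modality) as a simplified reading of clustered federated learning.
# Client weights are plain numpy vectors; real CFL works on neural-network
# updates and derives the clusters from update similarity, which is omitted.
import numpy as np

def federated_average(weight_list):
    """Element-wise average of a list of weight vectors (one FedAvg step)."""
    return np.mean(np.stack(weight_list), axis=0)

rng = np.random.default_rng(0)
# Hypothetical local model weights after one round of local training.
clients = {
    "hospital_a": {"cluster": "xray", "weights": rng.normal(size=8)},
    "hospital_b": {"cluster": "xray", "weights": rng.normal(size=8)},
    "hospital_c": {"cluster": "ultrasound", "weights": rng.normal(size=8)},
    "hospital_d": {"cluster": "ultrasound", "weights": rng.normal(size=8)},
}

cluster_models = {}
for cluster in {c["cluster"] for c in clients.values()}:
    members = [c["weights"] for c in clients.values() if c["cluster"] == cluster]
    cluster_models[cluster] = federated_average(members)

for cluster, w in cluster_models.items():
    print(cluster, np.round(w[:4], 3))
```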

Adnan Qayyum, Kashif Ahmad, Muhammad Ahtazaz Ahsan, Ala Al-Fuqaha, Junaid Qadir

2021-01-19

Pathology

Comparative Evaluation of 3D and 2D Deep Learning Techniques for Semantic Segmentation in CT Scans

ArXiv Preprint

Image segmentation plays a pivotal role in several medical-imaging applications by delineating the regions of interest. Deep learning-based approaches have been widely adopted for semantic segmentation of medical data. In recent years, in addition to 2D deep learning architectures, 3D architectures have been employed as the predictive algorithms for 3D medical image data. In this paper, we propose a 3D stack-based deep learning technique for segmenting manifestations of consolidation and ground-glass opacities in 3D Computed Tomography (CT) scans. We also present a comparison between this 3D technique and a traditional 2D deep learning technique based on the segmentation results, the contextual information retained, and the inference time. We further define the area-plot, which represents the characteristic pattern observed in the slice-wise areas of the pathology regions predicted by these deep learning models. In our exhaustive evaluation, the 3D technique performs better than the 2D technique for the segmentation of CT scans. We obtain Dice scores of 79% and 73% for the 3D and 2D techniques, respectively. The 3D technique results in a 5X reduction in inference time compared to the 2D technique. Results also show that the area-plots predicted by the 3D model are more similar to the ground truth than those predicted by the 2D model. We also show how increasing the amount of contextual information retained during training can improve the 3D model's performance.
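
For reference, the two evaluation quantities mentioned above, the Dice score and the area-plot (slice-wise area of the predicted pathology region), can be computed from a binary 3D mask as in the sketch below; the random volumes stand in for real predictions and ground truth.

```python
# Sketch only: Dice score and "area-plot" (slice-wise positive area) for a 3D
# binary segmentation. The volumes here are random placeholders.
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """Dice coefficient between two binary volumes."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def area_plot(mask):
    """Slice-wise area of the predicted pathology region (pixels per slice)."""
    return mask.reshape(mask.shape[0], -1).sum(axis=1)

rng = np.random.default_rng(0)
gt = rng.random((64, 128, 128)) > 0.7    # placeholder ground-truth volume
pred = rng.random((64, 128, 128)) > 0.7  # placeholder 3D prediction

print("Dice:", round(float(dice_score(pred, gt)), 3))
print("Area-plot (first 5 slices):", area_plot(pred)[:5])
```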

Abhishek Shivdeo, Rohit Lokwani, Viraj Kulkarni, Amit Kharat, Aniruddha Pant

2021-01-19
