
General

An Automatic Classification of the Early Osteonecrosis of Femoral Head with Deep Learning.

In Current medical imaging

BACKGROUND : Osteonecrosis of Femoral Head (ONFH) is a common complication in orthopaedics, wherein femoral structures are usually damaged due to the impairment or interruption of femoral head blood supply.

AIM : In this study, an automatic approach for the classification of the early ONFH with deep learning has been proposed.

METHODS : First, all femoral CT slices are classified according to their spatial locations with a Convolutional Neural Network (CNN), dividing the slices into the upper, middle and lower segments of the femoral head. The femoral head area in each segment is then segmented with a Conditional Generative Adversarial Network (CGAN). A Convolutional Autoencoder is employed to reduce dimensionality and extract features of the femoral head, and finally K-means clustering is used for an unsupervised classification of early ONFH.
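The final, unsupervised stage of this pipeline, K-means clustering over autoencoder features, can be sketched in plain numpy. This is an illustrative implementation, not the authors' code; the synthetic feature array merely stands in for the Convolutional Autoencoder embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(features, k, n_iter=50):
    """Plain K-means: assign each feature vector to its nearest
    centroid, then recompute centroids, and repeat."""
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(n_iter):
        # distance of every sample to every centroid
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # guard against empty clusters
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids

# stand-in for autoencoder features: two well-separated clusters
feats = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
                   rng.normal(1.0, 0.1, (20, 8))])
labels, _ = kmeans(feats, k=2)
```

In the paper, the same clustering step runs on real femoral-head embeddings, and the resulting cluster assignments define the early-ONFH categories.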

RESULTS : To validate the effectiveness of the proposed approach, experiments are carried out on a dataset of 120 patients. The experimental results show that the segmentation accuracy is higher than 95%. The Convolutional Autoencoder reduces the dimensionality of the data while the Peak Signal-to-Noise Ratios (PSNRs) between its inputs and outputs remain above 34 dB. Meanwhile, the resulting clusters show great intra-category similarity and significant inter-category differences.
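The PSNR figure quoted above is the standard reconstruction-quality measure; for 8-bit images it can be computed as follows (a generic sketch, not code from the paper):

```python
import numpy as np

def psnr(original, reconstruction, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstruction.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((64, 64), dtype=np.uint8)
b = np.full((64, 64), 16, dtype=np.uint8)   # uniform error of 16 -> MSE = 256
```

Higher is better; values above 34 dB, as reported here, indicate that the autoencoder's reconstructions are close to its inputs.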

CONCLUSION : The classification of early ONFH has valuable clinical merit, and hopefully it can assist physicians in applying more individualized treatment for patients.

Zhu Liyang, Han Jungang, Guo Renwen, Wu Dong, Wei Qiang, Chai Wei, Tang Shaojie


K-means clustering, Osteonecrosis of femoral head, conditional generative adversarial network, convolutional autoencoder, convolutional neural network, peak signal-to-noise ratios

General

Customer Centricity in Medical Affairs Needs Human-centric Artificial Intelligence.

In Pharmaceutical medicine

The evolution of healthcare, together with the changing behaviour of healthcare professionals, means that medical affairs functions of pharmaceutical organisations are constantly reinventing themselves. The emergence of digital ways of working, expedited by the COVID-19 pandemic, means that pharmaceutical-healthcare relationships are evolving to operate in an increasingly virtual world. The value of the pharmaceutical medical affairs function is dependent on understanding customers' needs and providing the right knowledge at the right time to physicians. This requires a human-centric artificial intelligence (AI) approach for medical affairs, which allows the function to query internal and external data sets in a conversational format and receive timely, accurate and concise intelligence on their customers.

Bedenkov Alexander, Moreno Carmen, Agustin Lyra, Jain Nipun, Newman Amy, Feng Lana, Kostello Greg


General

SplitSR: An End-to-End Approach to Super-Resolution on Mobile Devices

ArXiv Preprint

Super-resolution (SR) is a coveted image processing technique for mobile apps ranging from basic camera apps to mobile health. Existing SR algorithms rely on deep learning models with significant memory requirements, so they have yet to be deployed on mobile devices and instead operate in the cloud to achieve feasible inference time. This shortcoming prevents existing SR methods from being used in applications that require near real-time latency. In this work, we demonstrate state-of-the-art latency and accuracy for on-device super-resolution using a novel hybrid architecture called SplitSR and a novel lightweight residual block called SplitSRBlock. The SplitSRBlock supports channel-splitting, allowing the residual blocks to retain spatial information while reducing the computation in the channel dimension. SplitSR has a hybrid design consisting of standard convolutional blocks and lightweight residual blocks, allowing people to tune SplitSR for their computational budget. We evaluate our system on a low-end ARM CPU, demonstrating both higher accuracy and up to 5 times faster inference than previous approaches. We then deploy our model onto a smartphone in an app called ZoomSR to demonstrate the first-ever instance of on-device, deep learning-based SR. We conducted a user study with 15 participants to assess the perceived quality of images that were post-processed by SplitSR. Relative to bilinear interpolation -- the existing standard for on-device SR -- participants showed a statistically significant preference when looking at both images (Z=-9.270, p<0.01) and text (Z=-6.486, p<0.01).
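The channel-splitting idea can be illustrated with a minimal numpy sketch: only part of the channels is transformed while the rest pass through, and a residual connection adds the input back. This is a conceptual illustration under assumed shapes, not the actual SplitSRBlock implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def split_residual_block(x, weights, alpha=0.5):
    """Illustrative channel-splitting residual block: only the first
    alpha fraction of channels is transformed (here by a 1x1 conv,
    i.e. a per-pixel matrix multiply, plus ReLU); the remaining
    channels pass through untouched, and the block's input is added
    back as a residual."""
    c = x.shape[-1]
    split = int(alpha * c)
    active, passive = x[..., :split], x[..., split:]
    transformed = np.maximum(active @ weights, 0.0)   # 1x1 conv + ReLU
    return np.concatenate([transformed, passive], axis=-1) + x

x = rng.normal(size=(4, 4, 16))       # H x W x C feature map
w = rng.normal(size=(8, 8)) * 0.1     # weights for the 8 active channels
y = split_residual_block(x, w)
```

Transforming only a fraction of the channels reduces the cost of the channel-dimension computation, while the untouched channels carry spatial detail forward unchanged, which matches the trade-off the abstract describes.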

Xin Liu, Yuang Li, Josh Fromm, Yuntao Wang, Ziheng Jiang, Alex Mariakakis, Shwetak Patel


General

Isogeometric finite element-based simulation of the aortic heart valve: Integration of neural network structural material model and structural tensor fiber architecture representations.

In International journal for numerical methods in biomedical engineering

The functional complexity of native and replacement aortic heart valves is well known, incorporating such physical phenomena as time-varying non-linear anisotropic soft tissue mechanical behavior, geometric non-linearity, complex multi-surface time-varying contact, and fluid-structure interactions, to name a few. It is thus clear that computational simulations are critical for understanding aortic valve (AV) function and for the rational design of AV replacements. However, such approaches continue to be limited by ad hoc methods for incorporating tissue fibrous structure, high-fidelity material models, and valve geometry. To this end, we developed an integrated tri-leaflet valve simulation pipeline built upon an isogeometric analysis (IGA) framework. A high-order structural tensor (HOST) based method was developed for efficient storage and mapping of the two-dimensional fiber structural data onto the valvular 3D geometry. We then developed a neural network (NN) material model that learned the responses of a detailed mesostructural model for exogenously cross-linked planar soft tissues. The NN material model not only reproduced the full anisotropic mechanical responses but also demonstrated a considerable efficiency improvement, as it was trained over a range of realizable fibrous structures. Parametric simulations, as well as simulations of population-based bicuspid aortic heart valve fiber structures, were then performed and demonstrated the efficiency and robustness of the present approach. In summary, the present approach, which integrates the HOST and NN material models, provides an efficient computational analysis framework with increased physical and functional realism for the simulation of native and replacement tri-leaflet heart valves.
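The surrogate idea, a network learning the response of a more expensive material model, can be sketched in one dimension. The toy example below fits random nonlinear features by least squares to an exponential stress-strain law of the kind typical of soft tissue. The 1D law, the constants, and the least-squares shortcut are all assumptions for illustration; the paper's model is anisotropic and trained on a mesostructural simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1D stand-in for the expensive mesostructural model:
# soft tissue stiffens exponentially, stress = a * (exp(b * strain) - 1).
a_true, b_true = 0.5, 3.0
strain = np.linspace(0.0, 0.3, 200)[:, None]
stress = a_true * (np.exp(b_true * strain) - 1.0)

# Surrogate: random tanh features with output weights fit by least
# squares (an extreme-learning-machine-style shortcut for a
# one-hidden-layer network).
n_hidden = 16
W1 = rng.normal(0.0, 5.0, (1, n_hidden))
b1 = rng.normal(0.0, 1.0, n_hidden)
features = np.tanh(strain @ W1 + b1)
design = np.hstack([features, np.ones((len(strain), 1))])  # bias column
W2, *_ = np.linalg.lstsq(design, stress, rcond=None)

pred = design @ W2
rmse = float(np.sqrt(np.mean((pred - stress) ** 2)))
```

Once fitted, evaluating the surrogate is a single matrix multiply per query, which is the kind of efficiency gain over the detailed mesostructural model that the abstract describes.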

Zhang Wenbo, Rossini Giovanni, Kamensky David, Bui-Thanh Tan, Sacks Michael S


Constitutive model, Heart valves, Machine learning

Surgery

Deep learning-based X-ray inpainting for improving spinal 2D-3D registration.

In The international journal of medical robotics + computer assisted surgery : MRCAS

BACKGROUND : 2D-3D registration is challenging in the presence of implant projections on intraoperative images, which can limit the registration capture range. Here we investigate the use of deep-learning-based inpainting for removing implant projections from the X-rays to improve the registration performance.

METHODS : We trained deep-learning-based inpainting models that fill in the implant projections on X-rays. Clinical datasets were collected to evaluate the inpainting using six image similarity measures. The effect of X-ray inpainting on the capture range of 2D-3D registration was also evaluated.
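As a minimal, non-learned stand-in for this idea, the sketch below fills a masked "implant" region by iterative neighbour averaging (diffusion inpainting) while keeping all unmasked pixels fixed. The paper's models are learned networks; the synthetic image and rectangular mask here are illustrative only.

```python
import numpy as np

def diffusion_inpaint(image, mask, n_iter=500):
    """Fill masked pixels by repeatedly averaging their 4-neighbours,
    a naive stand-in for the paper's learned inpainting models."""
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()              # crude initialisation
    for _ in range(n_iter):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]                  # only masked pixels change
    return out

# smooth synthetic "X-ray" with a bright rectangular implant region
y, x = np.mgrid[0:64, 0:64]
truth = np.sin(x / 10.0) + np.cos(y / 12.0)
mask = np.zeros_like(truth, dtype=bool)
mask[24:40, 24:40] = True
corrupted = truth.copy()
corrupted[mask] = 5.0                          # implant overwrites anatomy
restored = diffusion_inpaint(corrupted, mask)
```

A learned model can restore texture that diffusion cannot, but the masking-and-filling interface, and the idea of comparing the filled image against ground truth with similarity measures, is the same.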

RESULTS : The X-ray inpainting significantly improved the similarity between the inpainted images and the ground truth. When applying inpainting before the 2D-3D registration process, we demonstrated significant recovery of the capture range by up to 85%.

CONCLUSION : Applying deep-learning-based inpainting on X-ray images masked by implants can markedly improve the capture range of the associated 2D-3D registration task.

Esfandiari Hooman, Weidert Simon, Kövesházi István, Anglin Carolyn, Street John, Hodgson Antony J


2D-3D registration, Capture range, Convolutional neural network, Deep learning, Inpainting, Medical image registration, Pedicle screw, Spine, X-ray

General

Intelligent humanoid robots expressing artificial humanlike empathy in nursing situations.

In Nursing philosophy : an international journal for healthcare professionals

Intelligent humanoid robots (IHRs) are increasingly likely to be integrated into nursing practice. However, a proper integration of IHRs requires a detailed description and explanation of their essential capabilities, particularly their competence in replicating and portraying emotive functions such as empathy. Existing humanoid robots can exhibit rudimentary forms of empathy; as these machines slowly become commonplace in healthcare settings, they will be expected to express empathy as a natural function, rather than merely to portray artificial empathy as a replication of human empathy. This article has a twofold purpose: firstly, to consider the impact of artificial empathy in nursing and, secondly, to describe the influence of Affective Developmental Robotics (ADR) in anticipating the empathic behaviour presented by artificial humanoid robots. ADR has been demonstrated to be one means by which humanoid nurse robots can achieve expressions of more relatable artificial empathy, and it may serve as a vital model for the intelligent humanoid nurse robots currently under development for the healthcare industry. A discussion of IHRs demonstrating artificial empathy is critical to nursing practice today, particularly in healthcare settings dense with technology.

Pepito Joseph Andrew, Ito Hirokazu, Betriana Feni, Tanioka Tetsuya, Locsin Rozzano C


affective developmental robotics, artificial empathy, artificial intelligence, humanoid nurse robots, intelligent humanoid robots, nursing