
General

A detection method for android application security based on TF-IDF and machine learning.

In PLoS ONE; h5-index 176.0

Android is the most widely used mobile operating system (OS), and a large number of third-party Android application (app) markets have emerged. The absence of regulation in these third-party markets has prompted research institutions to propose different malware detection techniques. However, because both malware and the Android system keep evolving, it is difficult to design a detection method that remains efficient and effective over a long period. Meanwhile, adopting more features increases the complexity of the model and the computational cost of the system. Permissions play a vital role in the security of Android apps. Term Frequency-Inverse Document Frequency (TF-IDF) is used to assess the importance of a word to a file within a corpus. Static analysis does not need to run the app and can efficiently and accurately extract the permissions an app requests. Based on this perspective, this paper proposes a new static detection method based on TF-IDF and machine learning. The system permissions are extracted from the manifest file of the Android application package (APK). The TF-IDF algorithm is used to calculate the permission value (PV) of each permission and the sensitivity value of APK (SVOA) of each app. The SVOA and the number of used permissions are then learned and tested by machine learning. 6,070 benign apps and 9,419 malware samples are used to evaluate the proposed approach. The experimental results show that neither dangerous permissions alone nor the number of used permissions can accurately distinguish malicious apps from benign ones. For malware detection, the proposed approach achieves up to 99.5% accuracy, and learning and training take only 0.05 s. For malware family detection, the accuracy is 99.6%. On unknown/new samples, the detection accuracy is 92.71%. Compared with other state-of-the-art approaches, the proposed approach is more effective at detecting malware and malware families.

Yuan Hongli, Tang Yongchuan, Sun Wenjuan, Liu Li

2020
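The abstract does not give the exact formulas for the permission value (PV) or the SVOA, so the sketch below is only one plausible reading: each app's declared permissions are treated as a "document", scikit-learn's TF-IDF weights stand in for the per-permission PV, their per-app sum stands in for the SVOA, and that score plus the permission count feeds a classifier. The toy data, the aggregation rule, and the choice of classifier are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of TF-IDF permission scoring for Android malware detection.
# The feature definitions (per-permission "PV" and per-app "SVOA") are assumptions
# based on the abstract, not the authors' exact formulas.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

# Toy data: each app is represented by the permissions declared in its manifest.
apps = [
    ["INTERNET", "READ_SMS", "SEND_SMS", "READ_CONTACTS"],
    ["INTERNET", "ACCESS_NETWORK_STATE"],
    ["INTERNET", "SEND_SMS", "RECEIVE_BOOT_COMPLETED"],
    ["INTERNET", "CAMERA", "ACCESS_FINE_LOCATION"],
]
labels = np.array([1, 0, 1, 0])  # 1 = malware, 0 = benign

# Treat each app's permission list as one "document" so TF-IDF can weight permissions.
docs = [" ".join(perms) for perms in apps]
vectorizer = TfidfVectorizer(lowercase=False, token_pattern=r"[A-Z_]+")
tfidf = vectorizer.fit_transform(docs)          # rows: apps, columns: per-permission weights

# Assumed aggregation: SVOA-like score = sum of TF-IDF weights of an app's permissions.
svoa = np.asarray(tfidf.sum(axis=1)).ravel()
n_perms = np.array([len(p) for p in apps])

# Learn from the two scalar features, as the abstract describes.
X = np.column_stack([svoa, n_perms])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X))
```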

General

Interactive machine learning for fast and robust cell profiling.

In PLoS ONE; h5-index 176.0

Automated profiling of cell morphology is a powerful tool for inferring cell function. However, this technique retains a high barrier to entry. In particular, configuring image processing parameters for optimal cell profiling is susceptible to cognitive biases and dependent on user experience. Here, we use interactive machine learning to identify the cell profiling configuration that maximises the quality of the cell profiling outcome. The process is guided by the user, from whom a rating of the quality of a cell profiling configuration is obtained. We use Bayesian optimisation, an established machine learning algorithm, to learn from this information and automatically recommend the next configuration to examine, with the aim of maximising the quality of the processing or analysis. Compared to existing interactive machine learning tools that require domain expertise for per-class or per-pixel annotations, we rely on the user's explicit assessment of the output quality of the cell profiling task at hand. We validated our interactive approach against the standard human trial-and-error scheme by optimising an object segmentation task with the standard software CellProfiler. Our toolkit enabled rapid optimisation of an object segmentation pipeline, increasing the quality of object segmentation over a pipeline optimised through trial-and-error. Users also attested to the ease of use and reduced cognitive load enabled by our machine learning strategy over the standard approach. We envision that our interactive machine learning approach can enhance the quality and efficiency of pipeline optimisation to democratise image-based cell profiling.

Laux Lisa, Cutiongco Marie F A, Gadegaard Nikolaj, Jensen Bjørn Sand

2020
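The paper's toolkit is built around CellProfiler, but the loop it describes is a standard Bayesian-optimisation ask/tell cycle driven by user ratings. The sketch below illustrates that cycle with scikit-optimize; the parameter ranges, the 0-10 rating scale, and the segment_and_show placeholder are assumptions for illustration, not the authors' actual interface.

```python
# Hypothetical ask/tell loop: a Gaussian-process optimiser proposes a cell-profiling
# configuration, the user rates the output, and the rating drives the next proposal.
# Parameter ranges, rating scale, and segment_and_show() are illustrative assumptions.
from skopt import Optimizer
from skopt.space import Integer, Real

search_space = [
    Integer(5, 50, name="typical_object_diameter_px"),
    Real(0.1, 0.9, name="threshold_correction_factor"),
]
opt = Optimizer(search_space, base_estimator="GP", acq_func="EI",
                n_initial_points=3, random_state=0)

def segment_and_show(config):
    """Placeholder: run the segmentation pipeline with `config` and display the result."""
    print("Segmenting with:", config)

for _ in range(10):
    config = opt.ask()                     # next configuration to try
    segment_and_show(config)
    rating = float(input("Rate the segmentation quality (0-10): "))
    opt.tell(config, -rating)              # minimise the negative rating = maximise quality

res = opt.get_result()
print("Best configuration so far:", res.x, "with rating", -res.fun)
```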

Public Health

Contrastive Cross-site Learning with Redesigned Net for COVID-19 CT Classification.

In IEEE Journal of Biomedical and Health Informatics

The pandemic of coronavirus disease 2019 (COVID-19) has led to a global public health crisis spreading across hundreds of countries. With the continuous growth of new infections, developing automated tools for COVID-19 identification from CT images is highly desired to assist clinical diagnosis and reduce the tedious workload of image interpretation. To enlarge the datasets available for developing machine learning methods, it is helpful to aggregate cases from different medical systems and learn robust, generalizable models. This paper proposes a novel joint learning framework that performs accurate COVID-19 identification by effectively learning from heterogeneous datasets with distribution discrepancy. We build a powerful backbone by redesigning the recently proposed COVID-Net in terms of network architecture and learning strategy to improve prediction accuracy and learning efficiency. On top of our improved backbone, we further explicitly tackle the cross-site domain shift by conducting separate feature normalization in latent space. Moreover, we propose a contrastive training objective to enhance the domain invariance of semantic embeddings and boost the classification performance on each dataset. We develop and evaluate our method on two public large-scale COVID-19 diagnosis datasets of real CT images. Extensive experiments show that our approach consistently improves the performance on both datasets, outperforming the original COVID-Net trained on each dataset by 12.16% and 14.23% in AUC, respectively, and also exceeding existing state-of-the-art multi-site learning methods.

Wang Zhao, Liu Quande, Dou Qi

2020-Sep-10
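The abstract names two mechanisms, separate feature normalisation per site and a contrastive objective on the semantic embeddings, without giving implementation details. The PyTorch sketch below shows one plausible way to combine them: shared convolutional weights with a per-site BatchNorm branch, plus a supervised contrastive loss over label-matched embeddings. The encoder architecture, the temperature, and the toy batches are assumptions, not the redesigned COVID-Net.

```python
# Hypothetical sketch: per-site batch normalisation plus a supervised contrastive loss
# on embeddings, the two ingredients named in the abstract. The architecture and the
# temperature value are assumptions, not the authors' redesigned COVID-Net.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoSiteEncoder(nn.Module):
    """Shared convolutional weights with a separate BatchNorm branch per site."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.bn_per_site = nn.ModuleList([nn.BatchNorm2d(16) for _ in range(2)])
        self.head = nn.Linear(16, embed_dim)

    def forward(self, x, site_id):
        h = F.relu(self.bn_per_site[site_id](self.conv(x)))
        h = h.mean(dim=(2, 3))                    # global average pooling
        return F.normalize(self.head(h), dim=1)   # unit-norm embeddings

def contrastive_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss: same-label embeddings attract, others repel."""
    n = z.size(0)
    sim = (z @ z.t()) / temperature
    self_mask = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))        # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    return -log_prob[pos_mask].mean()                      # average over positive pairs

# Toy usage: one batch per site, CT slices as 1-channel 64x64 tensors.
model = TwoSiteEncoder()
for site_id in (0, 1):
    x = torch.randn(8, 1, 64, 64)
    y = torch.randint(0, 2, (8,))                 # 0 = non-COVID, 1 = COVID
    loss = contrastive_loss(model(x, site_id), y)
    loss.backward()
```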

General

Image Quality Enhancement Using a Deep Neural Network for Plane Wave Medical Ultrasound Imaging.

In IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control

Plane wave imaging (PWI), a typical ultrafast medical ultrasound imaging mode, adopts single plane wave emission without focusing to achieve a high frame rate. However, the imaging quality is severely degraded compared with the commonly used focused line scan mode. Conventional adaptive beamformers can improve imaging quality at the cost of additional computation. In this paper, we propose to use a deep neural network (DNN) to enhance the performance of PWI while maintaining a high frame rate. In particular, the PWI response from a single point target is used as the network input, while the focused scan response from the same point serves as the desired output; this pairing is the main contribution of the method. To evaluate the performance of the proposed method, simulations, phantom experiments, and in vivo studies are conducted. Delay-and-sum (DAS), the coherence factor (CF), a previously proposed deep learning-based method, and DAS with focused scan are used for comparison. Numerical metrics, including the contrast ratio (CR), the contrast-to-noise ratio (CNR), and the speckle signal-to-noise ratio (sSNR), are used to quantify the performance. The results indicate that the proposed method achieves superior resolution and contrast. Specifically, the proposed method performs better than DAS in all metrics. Although the CF provides a higher CR, its CNR and sSNR are much lower than those of the proposed method. The overall performance is also better than that of the previous deep learning method and on the same level as focused scan performance. Additionally, compared with DAS, the proposed method requires little additional computation, which ensures high temporal resolution. These results validate that the proposed method can achieve high imaging quality while maintaining the high frame rate associated with PWI.

Qi Yanxing, Guo Yi, Wang Yuanyuan

2020-Sep-10
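The key idea stated in the abstract is to supervise the network with paired point-target responses: the plane-wave response is the input and the focused-scan response of the same point is the target. A minimal PyTorch sketch of that supervised mapping is below; the network depth, patch size, loss, and the randomly generated stand-in data are all assumptions, since the paper's exact DNN is not described in the abstract.

```python
# Hypothetical sketch: learn a mapping from plane-wave-imaging (PWI) point-target
# responses to focused-scan responses of the same points. Network depth, patch size,
# and the random stand-in data are assumptions, not the paper's exact DNN.
import torch
import torch.nn as nn

# Stand-in dataset: pairs of (PWI response, focused response) image patches.
pwi_patches = torch.randn(32, 1, 64, 64)       # network input
focused_patches = torch.randn(32, 1, 64, 64)   # desired output

enhancer = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    pred = enhancer(pwi_patches)               # enhanced PWI patch
    loss = loss_fn(pred, focused_patches)      # match the focused-scan response
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```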

General

Unpaired Training of Deep Learning tMRA for Flexible Spatio-Temporal Resolution.

In IEEE Transactions on Medical Imaging; h5-index 74.0

Time-resolved MR angiography (tMRA) has been widely used for dynamic contrast enhanced MRI (DCE-MRI) due to its highly accelerated acquisition. In tMRA, the periphery of the k-space data is sparsely sampled so that neighbouring frames can be merged to construct one temporal frame. However, this view-sharing scheme fundamentally limits the temporal resolution, and it is not possible to change the view-sharing number to achieve different spatio-temporal resolution trade-offs. Although many deep learning approaches have recently been proposed for MR reconstruction from sparse samples, the existing approaches usually require matched, fully sampled k-space reference data for supervised training, which is not suitable for tMRA because high spatio-temporal resolution ground-truth images are not available. To address this problem, we propose a novel unpaired training scheme for deep learning using an optimal transport driven cycle-consistent generative adversarial network (cycleGAN). In contrast to the conventional cycleGAN with two pairs of generator and discriminator, the new architecture requires only a single pair of generator and discriminator, which makes the training much simpler yet still improves the performance. Reconstruction results using in vivo tMRA and simulated data sets confirm that the proposed method can immediately generate high quality reconstructions at various choices of the view-sharing number, allowing a better trade-off between spatial and temporal resolution in time-resolved MR angiography.

Cha Eunju, Chung Hyungjin, Kim Eung Yeop, Ye Jong Chul

2020-Sep-11
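In cycleGAN formulations of undersampled MR reconstruction, one of the two mappings can often be replaced by the known acquisition (view-sharing/undersampling) operator, which is one way a single generator and a single discriminator can suffice. The sketch below illustrates that loss structure only: the toy temporal-averaging "degradation", the 1-D networks, and the loss weights are assumptions and do not reproduce the authors' optimal-transport formulation.

```python
# Hypothetical sketch of a single-generator/single-discriminator cycle-consistent setup:
# the generator maps view-shared (low temporal resolution) signals to sharp signals, and
# the reverse mapping is replaced by a fixed, known degradation operator. The toy
# degradation (temporal averaging), network sizes, and loss weights are assumptions.
import torch
import torch.nn as nn

def degrade(x):
    """Stand-in for the known view-sharing/undersampling operator: blur along time."""
    return nn.functional.avg_pool1d(x, kernel_size=3, stride=1, padding=1)

G = nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv1d(16, 1, 3, padding=1))
D = nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Unpaired batches: degraded measurements and (unmatched) sharp reference signals.
measured = torch.randn(8, 1, 64)
sharp_unpaired = torch.randn(8, 1, 64)

for step in range(20):
    # Discriminator: distinguish real sharp signals from generated ones.
    fake = G(measured).detach()
    d_loss = bce(D(sharp_unpaired), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator and stay cycle-consistent through the known
    # degradation operator (degrade(G(x)) should reproduce the measurement).
    fake = G(measured)
    g_loss = bce(D(fake), torch.ones(8, 1)) + 10.0 * nn.functional.l1_loss(degrade(fake), measured)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```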

General

End-to-End Fovea Localisation in Colour Fundus Images with a Hierarchical Deep Regression Network.

In IEEE Transactions on Medical Imaging; h5-index 74.0

Accurately locating the fovea is a prerequisite for developing computer-aided diagnosis (CAD) of retinal diseases. In colour fundus images of the retina, the fovea is a fuzzy region lacking prominent visual features, which makes it difficult to locate directly. While traditional methods rely on explicitly extracting image features from surrounding structures, such as the optic disc and various vessels, to infer the position of the fovea, deep learning-based regression can implicitly model the relation between the fovea and other nearby anatomical structures to determine its location in an end-to-end fashion. Although promising, using deep learning for fovea localisation still faces many unsolved challenges. In this paper, we present a new end-to-end fovea localisation method based on a hierarchical coarse-to-fine deep regression neural network. The innovative features of the new method include a multi-scale feature fusion technique and a self-attention technique to exploit location, semantic, and contextual information in an integrated framework; a multi-field-of-view (multi-FOV) feature fusion technique for context-aware feature learning; and a Gaussian-shift-cropping method for augmenting effective training data. We present extensive experimental results on two public databases and show that our new method achieves state-of-the-art performance. We also present a comprehensive ablation study and analysis to demonstrate the technical soundness and effectiveness of the overall framework and its various constituent components.

Xie Ruitao, Liu Jingxin, Cao Rui, Qiu Connor S, Duan Jiang, Garibaldi Jon, Qiu Guoping

2020-Sep-10
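The abstract lists a "Gaussian-shift-cropping" augmentation without defining it; one plausible reading is to crop a fixed-size window around the annotated fovea after applying a Gaussian-distributed random shift, and to re-express the regression target in crop coordinates. The sketch below implements that reading; the crop size, shift standard deviation, and the function name gaussian_shift_crop are assumptions.

```python
# Hypothetical sketch of "Gaussian-shift-cropping" data augmentation: crop a fixed-size
# window whose centre is the annotated fovea location offset by a Gaussian-distributed
# shift, and adjust the regression target accordingly. Crop size and shift standard
# deviation are assumptions; the paper's exact procedure may differ.
import numpy as np

def gaussian_shift_crop(image, fovea_xy, crop_size=256, sigma=20.0, rng=None):
    """Return a crop around the fovea with a random Gaussian offset, plus the
    fovea coordinates expressed relative to the crop."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    shift = rng.normal(0.0, sigma, size=2)
    cx = int(np.clip(fovea_xy[0] + shift[0], crop_size // 2, w - crop_size // 2))
    cy = int(np.clip(fovea_xy[1] + shift[1], crop_size // 2, h - crop_size // 2))
    x0, y0 = cx - crop_size // 2, cy - crop_size // 2
    crop = image[y0:y0 + crop_size, x0:x0 + crop_size]
    target = (fovea_xy[0] - x0, fovea_xy[1] - y0)   # fovea position inside the crop
    return crop, target

# Toy usage on a blank "fundus image" with a known fovea location.
img = np.zeros((512, 512, 3), dtype=np.uint8)
crop, target = gaussian_shift_crop(img, fovea_xy=(300, 260))
print(crop.shape, target)
```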