
General

Automatic classification of autism spectrum disorder in children using cortical thickness and support vector machine.

In Brain and behavior

OBJECTIVE: Autism spectrum disorder (ASD) is a neurodevelopmental condition with a heterogeneous phenotype. The role of biomarkers in ASD diagnosis has been highlighted, and cortical thickness has been shown to be involved in the etiopathogenesis of ASD core symptoms. We applied a support vector machine, a supervised machine learning method, to identify specific cortical thickness alterations in ASD subjects.

METHODS: A sample of 76 subjects (9.5 ± 3.4 years old) was selected: 40 diagnosed with ASD and 36 typically developing subjects. All children underwent magnetic resonance imaging (MRI); T1-MPRAGE sequences were analyzed to extract features for the characterization and parcellation of regions of interest (ROIs), and average cortical thickness (CT) was measured for each ROI. For classification, the extracted features were used as input to a classifier that identified ASD subjects through a "learning by example" procedure; the best-performing features were then selected by greedy forward feature selection. Finally, the model was evaluated with leave-one-out cross-validation.

RESULTS: From the training set of 68 ROIs, five ROIs reached accuracies of over 70%. We then used a recursive feature selection process to identify the eight features with the best accuracy (84.2%). CT was higher in ASD subjects than in controls in all the ROIs identified at the end of the process.

CONCLUSION: We found increased CT in various brain regions in ASD subjects, confirming their role in the pathogenesis of this condition. Considering the trajectory of brain development with age, these changes in CT may normalize during development. Further validation on a larger sample is required.
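The pipeline described in the abstract can be sketched as follows. This is a hedged illustration, not the authors' code: the data are synthetic, and the dimensions are reduced (12 ROI features, 3 selected) to keep the toy fast, whereas the study used 68 ROIs and selected 8 features.

```python
# Sketch of the described pipeline: greedy forward feature selection over
# per-ROI cortical thickness values, a linear SVM, and leave-one-out
# cross-validation. Synthetic data; reduced dimensions for speed.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(76, 12))           # 76 subjects x 12 ROI thickness values
y = np.array([1] * 40 + [0] * 36)       # 40 ASD, 36 typically developing
X[y == 1, 0] += 1.0                     # make one ROI informative in the toy

def loo_accuracy(features):
    """Leave-one-out accuracy of a linear SVM on the given feature subset."""
    model = SVC(kernel="linear")
    return cross_val_score(model, X[:, features], y, cv=LeaveOneOut()).mean()

selected, remaining = [], list(range(X.shape[1]))
for _ in range(3):                      # greedily grow the feature set
    best = max(remaining, key=lambda f: loo_accuracy(selected + [f]))
    selected.append(best)
    remaining.remove(best)
acc = loo_accuracy(selected)
print(f"selected ROIs: {selected}, LOO accuracy: {acc:.3f}")
```

Note that evaluating the selection itself with the same leave-one-out loop, as sketched here, optimistically biases the reported accuracy; a nested cross-validation would give a less biased estimate.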

Squarcina Letizia, Nosari Guido, Marin Riccardo, Castellani Umberto, Bellani Marcella, Bonivento Carolina, Fabbro Franco, Molteni Massimo, Brambilla Paolo


autism spectrum disorder, cortical thickness, magnetic resonance imaging, supervised machine learning, support vector machine

General

Machine learning to analyze single-case graphs: A comparison to visual inspection.

In Journal of applied behavior analysis

Behavior analysts commonly use visual inspection to analyze single-case graphs, but studies on its reliability have produced mixed results. To examine this issue, we compared the Type I error rate and power of visual inspection with those of a novel approach: machine learning. Five expert visual raters analyzed 1,024 simulated AB graphs, which differed in number of points per phase, autocorrelation, trend, variability, and effect size. Their ratings were compared to those obtained by the conservative dual-criteria method and by two models derived from machine learning. On average, visual raters agreed with one another on only 75% of graphs. In contrast, both machine learning models showed the best balance between Type I error rate and power while producing more consistent results across graph characteristics. The results suggest that machine learning may help researchers and practitioners make fewer errors when analyzing single-case graphs, but replications remain necessary.
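The conservative dual-criteria (CDC) method used as a comparison point can be sketched in a few lines. This is an illustrative implementation under my own reading of the method (baseline mean line and trend line, both raised by 0.25 baseline standard deviations; a change is flagged when the count of treatment-phase points above both lines exceeds what a fair coin would produce), applied to synthetic AB data rather than the study's simulated graphs.

```python
# Hedged sketch of the conservative dual-criteria (CDC) method on an AB
# series: phase A (baseline) sets a mean line and an OLS trend line, both
# shifted up by 0.25 SD; phase B points above BOTH lines are counted and
# compared to a Binomial(n, 0.5) chance criterion.
import math
from statistics import mean, stdev

def cdc_detects_change(a, b, shift=0.25, alpha=0.05):
    n_a, n_b = len(a), len(b)
    t = list(range(n_a))
    tbar, abar = mean(t), mean(a)
    slope = (sum((ti - tbar) * (ai - abar) for ti, ai in zip(t, a))
             / sum((ti - tbar) ** 2 for ti in t))
    intercept = abar - slope * tbar
    bump = shift * stdev(a)
    # count phase-B points above both shifted criterion lines
    k = sum(1 for i, bi in enumerate(b)
            if bi > abar + bump and bi > slope * (n_a + i) + intercept + bump)
    # smallest count unlikely by chance under Binomial(n_b, 0.5)
    cutoff = next(c for c in range(n_b + 1)
                  if sum(math.comb(n_b, i)
                         for i in range(c, n_b + 1)) / 2 ** n_b <= alpha)
    return k >= cutoff

print(cdc_detects_change([3, 4, 3, 5, 4, 3, 4, 4], [6, 7, 8, 7, 9, 8, 8, 9]))
```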

Lanovaz Marc J, Hranchuk Kieva


AB design, artificial intelligence, machine learning, n-of-1 trial, single-case design, visual analysis

General

Chromatin loop anchors predict transcript and exon usage.

In Briefings in bioinformatics

Epigenomics and transcriptomics data from high-throughput sequencing techniques such as RNA-seq and ChIP-seq have been successfully applied to predicting gene transcript expression. However, the genomic locations of chromatin loops identified by techniques such as Chromatin Interaction Analysis with Paired-End Tag sequencing (ChIA-PET) have never been used for prediction tasks. Here, we developed machine learning models to investigate whether ChIA-PET could contribute to transcript and exon usage prediction, using a large set of transcription factors together with ChIA-PET data. We built separate Gradient Boosting Trees models for each task on integrated datasets from three cell lines: GM12878, HeLaS3, and K562. We validated the models via 10-fold cross-validation, chromosome-split validation, and cross-cell validation. Our results show that both transcript and splicing-derived exon usage can be predicted with accuracies of at least 0.7512 and 0.7459, respectively, across all cell lines and validation schemes. Examining the predictive features, we found that RNA Polymerase II ChIA-PET was one of the most important features in both transcript and exon usage prediction, suggesting that chromatin loop anchors are predictive of both transcript and exon usage.
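The modeling setup can be sketched as below. This is a toy, not the paper's pipeline: the features are random stand-ins for TF ChIP-seq and ChIA-PET loop-anchor signals, with the last column made informative to mimic a dominant Pol II ChIA-PET feature, and accuracy is estimated with 10-fold cross-validation as in the study.

```python
# Hypothetical sketch: Gradient Boosting Trees over synthetic "epigenomic"
# features, validated with 10-fold cross-validation, then inspected via
# feature importances (as done for Pol II ChIA-PET in the paper).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(42)
n_genes, n_features = 300, 12
X = rng.normal(size=(n_genes, n_features))
# make the last feature (the "Pol II ChIA-PET" stand-in) drive the label
y = (X[:, -1] + 0.3 * rng.normal(size=n_genes) > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
scores = cross_val_score(model, X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
acc = scores.mean()
print(f"10-fold accuracy: {acc:.3f}")

model.fit(X, y)
top = int(np.argmax(model.feature_importances_))  # most important feature
print(f"most important feature index: {top}")
```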

Zhang Yu, Cai Yichao, Roca Xavier, Kwoh Chee Keong, Fullwood Melissa Jane


ChIA-PET, alternative splicing, chromatin loop anchors, exon usage, gene expression, histone modifications, machine learning, transcript

General

Automatic optical inspection platform for real-time surface defects detection on plane optical components based on semantic segmentation.

In Applied optics

Worldwide, there is a steady push to increase the accuracy and quality of optical parts inspection. Imperfect manufacturing techniques can leave various defects on optical component surfaces, making surface defect inspection a crucial part of optical component manufacturing. Currently, the inspection of lenses, filters, mirrors, and other optical components is performed by human inspectors. However, human-based inspection is time-consuming, subjective, and incompatible with a highly efficient, high-quality digital workflow; moreover, it cannot meet the complex pass/fail criteria of ISO 10110-7 for optical element samples. To meet the demand for high-quality products, intelligent visual inspection systems are being adopted in many manufacturing processes. Automated surface imperfection detection based on machine learning has become a promising area of research with direct impact on many visual inspection applications. In this paper, an optical inspection platform combining parallel deep learning-based image processing with a high-resolution optomechanical module was developed to detect surface defects on plane optical components. The system comprises mechanical modules, illumination and imaging modules, and a machine vision algorithm. Dark-field images were acquired using a 2448×2048-pixel line-scanning CMOS camera with a 3.45 µm per-pixel resolution. The machine vision algorithm uses convolutional neural networks and semantic segmentation to detect and classify defects in captured images of optical bandpass filters. Experiments on different bandpass filter samples showed the best performance compared to traditional methods, reaching a detection speed of 0.07 s per image and an overall pixel accuracy of 0.923.
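The headline metric, overall pixel accuracy of the segmentation output, is simple to compute. The sketch below shows it on a toy pair of masks; the masks and the "scratch" region are invented for illustration, not taken from the paper's data.

```python
# Overall pixel accuracy of a semantic-segmentation defect mask against
# ground truth: the fraction of pixels whose predicted label matches.
import numpy as np

def pixel_accuracy(pred, truth):
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float((pred == truth).mean())

truth = np.zeros((8, 8), dtype=int)
truth[2:5, 3:6] = 1                 # a small "scratch" defect region
pred = truth.copy()
pred[4, 5] = 0                      # one missed defect pixel
pred[0, 0] = 1                      # one false-positive pixel
acc = pixel_accuracy(pred, truth)
print(acc)                          # 62 of 64 pixels agree -> 0.96875
```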

Karangwa Jules, Kong Linghua, Yi Dingrong, Zheng Jishi


General

Research on the avalanche effect of image encryption based on the Cycle-GAN.

In Applied optics

Addressing the weak avalanche effect in a recently proposed deep learning image encryption algorithm, this paper analyzes, step by step, the causes of the weak avalanche effect in the Cycle-GAN neural network and proposes an image encryption algorithm that combines a traditional diffusion algorithm with a deep learning neural network. First, the neural network is used for image scrambling and slight diffusion; then, the traditional diffusion algorithm further diffuses the pixels. Experiments on satellite images show that, thanks to the further diffusion mechanism, our algorithm compensates for the weak avalanche effect of Cycle-GAN-based image encryption: changing a single pixel value in the original image yields a number of pixel change rate (NPCR) of 99.64% and a unified average changing intensity (UACI) of 33.49%. In addition, our method effectively encrypts the image, producing encrypted images with high information entropy and low pixel correlation. Experiments on data loss and noise attacks show that our method can identify the types and intensities of attacks. Moreover, the key space is sufficiently large and the key sensitivity is high, while the key exhibits a degree of randomness.
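The two avalanche-effect metrics quoted in the abstract are standard and easy to reproduce. The sketch below computes NPCR and UACI between two cipher images; the arrays here are random 8-bit toys standing in for the two ciphertexts (original vs. one-pixel-changed plaintext), not the paper's satellite images.

```python
# NPCR (number of pixel change rate) and UACI (unified average changing
# intensity) between two 8-bit cipher images, as percentages.
import numpy as np

def npcr(c1, c2):
    # fraction of pixel positions whose values differ
    return float((c1 != c2).mean() * 100)

def uaci(c1, c2):
    # mean absolute intensity difference, normalized by the 8-bit range
    return float((np.abs(c1.astype(int) - c2.astype(int)) / 255).mean() * 100)

rng = np.random.default_rng(1)
c1 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
c2 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(npcr(c1, c2), uaci(c1, c2))
```

For ideal 8-bit encryption the expected values are about 99.61% (NPCR) and 33.46% (UACI), which is why the paper's 99.64% and 33.49% indicate a strong avalanche effect.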

Bao Zhenjie, Xue Ru


General

Transport-based pattern recognition versus deep neural networks in underwater OAM communications.

In Journal of the Optical Society of America. A, Optics, image science, and vision

Machine learning and optimal transport-based approaches to classifying images are compared in underwater orbital angular momentum (OAM) communications. A model is derived that justifies the use of optimal transport in attenuated water environments. OAM pattern demultiplexing is performed using optimal transport and deep neural networks, and the two are compared; some of the complications introduced by signal attenuation are also highlighted. The Radon cumulative distribution transform (R-CDT) is applied to OAM patterns to map them to a linear subspace. The original OAM images and the R-CDT-transformed patterns are fed to several classification algorithms, and the results are compared. The selected classification algorithms are the nearest subspace algorithm, a shallow convolutional neural network (CNN), and a deep neural network. Classification is shown to be more accurate on the R-CDT-transformed images than on the original OAM images. Also, the nearest subspace algorithm outperforms the selected CNNs for OAM pattern classification in underwater environments.
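The nearest subspace classifier named in the abstract can be sketched as follows. This is an illustrative implementation on synthetic vectors, not the paper's code: in the paper the inputs would be R-CDT-transformed OAM patterns, whereas here each class is a toy one-dimensional subspace of a 20-dimensional space.

```python
# Nearest subspace classification: each class gets a subspace spanned by the
# leading left singular vectors of its training samples; a test sample is
# assigned to the class whose subspace reconstructs it with least residual.
import numpy as np

def fit_subspaces(X, y, dim=1):
    bases = {}
    for c in np.unique(y):
        U, _, _ = np.linalg.svd(X[y == c].T, full_matrices=False)
        bases[c] = U[:, :dim]          # orthonormal basis for the class
    return bases

def predict(bases, x):
    def residual(U):
        return np.linalg.norm(x - U @ (U.T @ x))   # distance to subspace
    return min(bases, key=lambda c: residual(bases[c]))

rng = np.random.default_rng(0)
d = 20
dir0, dir1 = np.eye(d)[0], np.eye(d)[1]   # two well-separated class directions
X = np.vstack([np.outer(rng.normal(size=30), dir0),
               np.outer(rng.normal(size=30), dir1)])
X += 0.05 * rng.normal(size=X.shape)      # small off-subspace noise
y = np.array([0] * 30 + [1] * 30)

bases = fit_subspaces(X, y, dim=1)
pred = predict(bases, 2.0 * dir0 + 0.05 * rng.normal(size=d))
print(pred)
```

The appeal of this pairing is that the R-CDT tends to map each OAM class into (approximately) a linear subspace, which is exactly the geometry this classifier exploits.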

Neary Patrick L, Nichols Jonathan M, Watnik Abbie T, Judd K Peter, Rohde Gustavo K, Lindle James R, Flann Nicholas S