Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

General

Quantitative biomarkers to predict response to clozapine treatment using resting EEG data.

In Schizophrenia research ; h5-index 61.0

Clozapine is an antipsychotic drug known to be effective in the treatment of chronic treatment-resistant schizophrenia (TRS-SCZ), commonly estimated to account for around one third of all schizophrenia cases. However, clinicians sometimes delay the initiation of this drug because of its adverse side effects. Identification of predictive biomarkers of clozapine response is therefore extremely valuable to aid on-time initiation of clozapine treatment. In this study, we develop a machine learning (ML) algorithm based on pre-treatment electroencephalogram (EEG) data sets to predict response to clozapine treatment in TRS-SCZ patients, where the treatment outcome after at least one year of follow-up is determined using the Positive and Negative Syndrome Scale (PANSS). The ML algorithm has two steps: 1) an effective connectivity measure named symbolic transfer entropy (STE) is computed from resting-state EEG waveforms; 2) the ML algorithm is applied to the STE matrix to determine whether a set of features can be found that discriminates most-responder (MR) patients from least-responder (LR) ones. The findings of this study revealed that the STE features could achieve an accuracy of 89.90%. This implies that analysis of pre-treatment EEG could contribute to our ability to distinguish MR from LR patients, and that the STE matrix may prove to be a promising tool for predicting the clinical response to clozapine.
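As a rough illustration of the STE idea (not the authors' implementation), the sketch below symbolizes each signal into ordinal patterns and estimates transfer entropy over the symbol sequences; the embedding order m, delay tau, and all names are illustrative choices.

```python
import numpy as np
from itertools import permutations

def symbolize(x, m=3, tau=1):
    """Map a time series to ordinal-pattern symbols of order m."""
    patterns = {p: i for i, p in enumerate(permutations(range(m)))}
    n = len(x) - (m - 1) * tau
    return np.array([patterns[tuple(np.argsort(x[i:i + m * tau:tau]))]
                     for i in range(n)])

def symbolic_transfer_entropy(x, y, m=3, tau=1):
    """STE from y to x: information y's past adds about x's next symbol."""
    sx, sy = symbolize(x, m, tau), symbolize(y, m, tau)
    n = min(len(sx), len(sy)) - 1
    # rows: (x_{t+1}, x_t, y_t) symbol triples
    trip = np.column_stack([sx[1:n + 1], sx[:n], sy[:n]])

    def H(cols):
        """Shannon entropy (bits) of the empirical joint distribution."""
        _, counts = np.unique(cols, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    # T(y->x) = H(x_{t+1}, x_t) - H(x_t) - H(x_{t+1}, x_t, y_t) + H(x_t, y_t)
    return H(trip[:, :2]) - H(trip[:, [1]]) - H(trip) + H(trip[:, 1:])
```

Applied pairwise to EEG channels, such estimates fill a directed connectivity matrix of the kind the classifier is trained on.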

Masychev Kirill, Ciprian Claudio, Ravan Maryam, Manimaran Akshaya, Deshmukh Ankita Amol


Clozapine treatment, Effective connectivity, Machine learning, Resting state electroencephalography (EEG), Schizophrenia, Symbolic transfer entropy

General

Expanding the Perseus Software for Omics Data Analysis With Custom Plugins.

In Current protocols in bioinformatics

The Perseus software provides a comprehensive framework for the statistical analysis of large-scale quantitative proteomics data, also in combination with other omics dimensions. Rapid developments in proteomics technology and the ever-growing diversity of biological studies increasingly require the flexibility to incorporate computational methods designed by the user. Here, we present the new functionality of Perseus to integrate self-made plugins written in C#, R, or Python. User-written code is fully integrated into the Perseus data analysis workflow as custom activities. This also makes language-specific R and Python libraries from CRAN, Bioconductor, PyPI, and Anaconda accessible in Perseus. The different available approaches are explained in detail in this article. To facilitate the distribution of user-developed plugins among users, we have created a plugin repository for community sharing and filled it with the examples provided in this article and a collection of already existing, more extensive plugins. © 2020 The Authors.
Basic Protocol 1: Basic steps for R plugins
Support Protocol 1: R plugins with additional arguments
Basic Protocol 2: Basic steps for Python plugins
Support Protocol 2: Python plugins with additional arguments
Basic Protocol 3: Basic steps and construction of C# plugins
Basic Protocol 4: Basic steps of construction and connection for R plugins with a C# interface
Support Protocol 4: Advanced example of an R plugin with a C# interface: UMAP
Basic Protocol 5: Basic steps of construction and connection for Python plugins with a C# interface
Support Protocol 5: Advanced example of a Python plugin with a C# interface: UMAP
Support Protocol 6: A basic workflow for the analysis of label-free quantification proteomics data using Perseus
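Perseus matrix plugins in R or Python are scripts that Perseus invokes with the input and output matrix file paths as command-line arguments. The stdlib-only sketch below shows the general shape of such a script (it does not use the perseuspy helper library, and the log2 transform is an illustrative choice of "activity"): it reads a tab-separated matrix, log-transforms numeric cells, and writes the result back.

```python
import csv
import math
import sys

def log2_transform(in_path, out_path):
    """Read a tab-separated matrix, log2-transform every numeric cell,
    and write the result; non-numeric cells (column names, annotation
    text) pass through unchanged."""
    with open(in_path, newline="") as fin, \
         open(out_path, "w", newline="") as fout:
        reader = csv.reader(fin, delimiter="\t")
        writer = csv.writer(fout, delimiter="\t")
        for row in reader:
            out = []
            for cell in row:
                try:
                    out.append(f"{math.log2(float(cell)):.4f}")
                except ValueError:  # header / annotation / empty cell
                    out.append(cell)
            writer.writerow(out)

# Perseus calls the plugin roughly as: python plugin.py <infile> <outfile>
if len(sys.argv) == 3:
    log2_transform(sys.argv[1], sys.argv[2])
```

A real plugin would additionally preserve Perseus's annotation-row conventions, which the perseuspy package handles for you.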

Yu Sung-Huan, Ferretti Daniela, Schessner Julia P, Rudolph Jan Daniel, Borner Georg H H, Cox Jürgen


MaxQuant, Perseus, omics data analysis, plugin development, quantitative proteomics

General

The clinical characterization of the adult patient with depression aimed at personalization of management.

In World psychiatry : official journal of the World Psychiatric Association (WPA)

Depression is widely acknowledged to be a heterogeneous entity, and the need to further characterize the individual patient who has received this diagnosis in order to personalize the management plan has been repeatedly emphasized. However, the research evidence that should guide this personalization is at present fragmentary, and the selection of treatment is usually based on the clinician's and/or the patient's preference and on safety issues, in a trial-and-error fashion, paying little attention to the particular features of the specific case. This may be one of the reasons why the majority of patients with a diagnosis of depression do not achieve remission with the first treatment they receive. The predominant pessimism about the actual feasibility of personalizing the treatment of depression in routine clinical practice has recently been tempered by some secondary analyses of databases from clinical trials, using approaches such as individual patient data meta-analysis and machine learning, which indicate that some variables may indeed contribute to the identification of patients who are likely to respond differently to various antidepressant drugs or to antidepressant medication vs. specific psychotherapies. The need to develop decision support tools guiding the personalization of treatment of depression has recently been reaffirmed, and the point made that these tools should be developed through large observational studies using a comprehensive battery of self-report and clinical measures. The present paper aims to describe systematically the salient domains that should be considered in this effort to personalize depression treatment. For each domain, the available research evidence is summarized, and the relevant assessment instruments are reviewed, with special attention to their suitability for use in routine clinical practice, also in view of their possible inclusion in the above-mentioned comprehensive battery of measures. The main unmet needs that research should address in this area are emphasized. Where the available evidence allows specific advice that clinicians can already use today to make the management of depression more personalized, this advice is highlighted. Indeed, some sections of the paper, such as those on neurocognition and on physical comorbidities, indicate that the modern management of depression is becoming increasingly complex, with several components other than simply the choice of an antidepressant and/or a psychotherapy, some of which can already be reliably personalized.

Maj Mario, Stein Dan J, Parker Gordon, Zimmerman Mark, Fava Giovanni A, De Hert Marc, Demyttenaere Koen, McIntyre Roger S, Widiger Thomas, Wittchen Hans-Ulrich


Depression, clinical staging, clinical subtypes, dysfunctional cognitive schemas, early environmental exposures, family history, functioning, neurocognition, personality traits, personalization of treatment, physical comorbidities, protective factors, psychiatric antecedents, psychiatric comorbidities, quality of life, recent environmental exposures, severity, symptom profile

Oncology

Clinical trial design: Past, present, and future in the context of big data and precision medicine.

In Cancer ; h5-index 88.0

Clinical trials are fundamental for advances in cancer treatment. The traditional framework of phase 1 to 3 trials is designed for incremental advances between regimens. However, our ability to understand and treat cancer has evolved with the increase in drugs targeting an expanding array of therapeutic targets, the development of progressively comprehensive data sets, and emerging computational analytics, all of which are reshaping our treatment strategies. A more robust linkage between drugs and underlying cancer biology is blurring historical lines that define trials on the basis of cancer type. The complexity of the molecular basis of cancer, coupled with manifold variations in clinical status, is driving the individually tailored use of combinations of precision targeted drugs. This approach is spawning a new era of clinical trial types. Although most care is delivered in a community setting, large centers support real-time multi-omic analytics and their integrated interpretation by using machine learning in the context of real-world data sets. Coupling the analytic capabilities of large centers to the tailored delivery of therapy in the community is forging a paradigm that is optimizing service for patients. Understanding the importance of these evolving trends across the health care spectrum will affect our treatment of cancer in the future and is the focus of this review.

Li Allen, Bergan Raymond C


big data, clinical trial, clinical trial protocol, precision medicine

Radiology

The sub-millisievert era in CTCA: the technical basis of the new radiation dose approach.

In La Radiologia medica

Computed tomography coronary angiography (CTCA) has become a cornerstone in the diagnosis of heart disease. Although cardiac imaging, together with interventional procedures, is responsible for approximately 40% of the cumulative effective dose in medical imaging, a substantial radiation dose reduction was obtained over the last decade, marking the beginning of the sub-mSv era in CTCA. The main technical means of reducing radiation dose in CTCA are the use of a low tube voltage, the adoption of a prospectively electrocardiogram-triggered spiral protocol, and the application of tube current modulation combined with iterative reconstruction. Nevertheless, CTCA examinations are characterized by a wide range of radiation doses across different radiology departments. Moreover, dose exposure in CTCA is extremely important because the benefit-risk calculus in comparison with other modalities also depends on it. Finally, because anatomical evaluation does not adequately predict the hemodynamic relevance of a coronary stenosis, a low radiation dose in routine CTCA would allow the greatest use of myocardial CT perfusion, fractional flow reserve CT, dual-energy CT and artificial intelligence, shifting the focus from morphological assessment to a comprehensive morphological and functional evaluation of the stenosis. Therefore, the aim of this work is to summarize the correct use of these techniques so that CTCA can become an established low-radiation-dose examination for the assessment of coronary artery disease.
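For orientation on what "sub-mSv" means in practice: effective dose in CT is commonly estimated as the scanner-reported dose-length product (DLP) multiplied by a region-specific conversion factor k. The DLP value below is illustrative, and the widely used chest factor k = 0.014 mSv/(mGy·cm) is only one convention (larger cardiac-specific factors have also been proposed).

```python
def effective_dose_msv(dlp_mgy_cm, k=0.014):
    """Estimate effective dose (mSv) from the dose-length product
    (mGy*cm) using a region-specific conversion factor k."""
    return dlp_mgy_cm * k

# Illustrative: a prospectively triggered high-pitch CTCA reporting a
# DLP of 50 mGy*cm lands in the sub-mSv range with the chest factor:
dose = effective_dose_msv(50)  # about 0.7 mSv
```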

Schicchi Nicolò, Fogante Marco, Palumbo Pierpaolo, Agliata Giacomo, Esposto Pirani Paolo, Di Cesare Ernesto, Giovagnoni Andrea


Cardiac CT, Coronary CT, Dual-source CT, High-pitch protocol, Radiation dose, Radiation reduction

General

Accurate and efficient structure-based computational mutagenesis for modeling fluorescence levels of Aequorea victoria green fluorescent protein mutants.

In Protein engineering, design & selection : PEDS

A computational mutagenesis technique was used to characterize the structural effects associated with over 46 000 single and multiple amino acid variants of Aequorea victoria green fluorescent protein (GFP), whose functional effects (fluorescence levels) were recently measured by experimental researchers. For each GFP mutant, the approach generated a single score reflecting the overall change in sequence-structure compatibility relative to native GFP, as well as a vector of environmental perturbation (EP) scores characterizing the impact at all GFP residue positions. A significant GFP structure-function relationship (P < 0.0001) was elucidated by comparing the sequence-structure compatibility scores with the functional data. Next, the computed vectors for GFP mutants were used to train predictive models of fluorescence by implementing random forest (RF) classification and tree regression machine learning algorithms. Classification performance reached 0.93 for sensitivity, 0.91 for precision and 0.90 for balanced accuracy, and regression models led to Pearson's correlation as high as r = 0.83 between experimental and predicted GFP mutant fluorescence. An RF model trained on a subset of over 1000 experimental single residue GFP mutants with measured fluorescence was used for predicting the 3300 remaining unstudied single residue mutants, with results complementing known GFP biochemical and biophysical properties. In addition, models trained on the subset of experimental GFP mutants harboring multiple residue replacements successfully predicted fluorescence of the single residue GFP mutants. The models developed for this study were accurate and efficient, and their predictions outperformed those of several related state-of-the-art methods.
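The reported classification metrics follow the standard confusion-matrix definitions. The sketch below recomputes them from illustrative counts (not the paper's actual confusion matrix):

```python
def rf_classification_metrics(tp, fp, tn, fn):
    """Sensitivity, precision and balanced accuracy from a binary
    confusion matrix (e.g. fluorescent vs. non-fluorescent mutants)."""
    sensitivity = tp / (tp + fn)        # true-positive rate (recall)
    precision = tp / (tp + fp)          # positive predictive value
    specificity = tn / (tn + fp)        # true-negative rate
    balanced_accuracy = (sensitivity + specificity) / 2
    return sensitivity, precision, balanced_accuracy

# Illustrative counts only:
sens, prec, bacc = rf_classification_metrics(tp=930, fp=90, tn=870, fn=70)
```

Balanced accuracy, rather than raw accuracy, is the appropriate summary here because the fluorescent and non-fluorescent classes need not be equally represented among mutants.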

Masso Majid


GFP, machine learning, prediction, structure–function relationships