
General

Relation Extraction from Biomedical and Clinical Text: Unified Multitask Learning Framework.

In IEEE/ACM transactions on computational biology and bioinformatics

To reduce the ever-growing amount of time spent searching the biomedical literature, numerous approaches for automated knowledge extraction have been proposed. Relation extraction is one such task, in which semantic relations between entities are identified in free text. In the biomedical domain, the extraction of regulatory pathways, metabolic processes, adverse drug reactions or disease models requires knowledge of the individual relations, for example, physical or regulatory interactions between proteins, drugs, diseases or phenotypes. We study relation extraction for three major biomedical and clinical tasks, namely drug-drug interaction, protein-protein interaction, and medical concept relation extraction. To this end, we model relation extraction in a multi-task learning (MTL) framework, and introduce for the first time the concept of a structured self-attentive network complemented with an adversarial learning approach for predicting relationships from biomedical and clinical text. Additionally, we build a highly efficient single-task model that exploits shortest dependency path embeddings learned over an attentive gated recurrent unit, against which we compare our proposed MTL models. The proposed framework significantly improves over all baselines (deep learning techniques) and single-task models for predicting relationships, without compromising performance on any of the tasks.
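
The abstract above does not include implementation details; the sketch below is only an illustration of how a shared structured self-attentive encoder with task-specific classification heads and an adversarial task discriminator might be wired up in PyTorch. Layer sizes, hop counts and the gradient-reversal weighting are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class StructuredSelfAttention(nn.Module):
    """Structured self-attention (Lin et al., 2017): several attention hops over encoder states."""
    def __init__(self, hidden_dim, attn_dim=64, hops=4):
        super().__init__()
        self.w1 = nn.Linear(hidden_dim, attn_dim, bias=False)
        self.w2 = nn.Linear(attn_dim, hops, bias=False)

    def forward(self, h):                                            # h: (batch, seq_len, hidden_dim)
        a = torch.softmax(self.w2(torch.tanh(self.w1(h))), dim=1)    # (batch, seq_len, hops)
        return torch.bmm(a.transpose(1, 2), h).flatten(1)            # (batch, hops * hidden_dim)

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used for adversarial training of the task discriminator."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

class MultiTaskRelationExtractor(nn.Module):
    """Hypothetical MTL setup: shared BiGRU + structured self-attention encoder,
    one relation classifier per task (DDI, PPI, clinical concept relations),
    and an adversarial task discriminator on the shared representation."""
    def __init__(self, vocab_size, n_classes_per_task, emb_dim=200, hid=128, hops=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid, batch_first=True, bidirectional=True)
        self.attn = StructuredSelfAttention(2 * hid, hops=hops)
        rep_dim = hops * 2 * hid
        self.heads = nn.ModuleList([nn.Linear(rep_dim, c) for c in n_classes_per_task])
        self.task_discriminator = nn.Linear(rep_dim, len(n_classes_per_task))

    def forward(self, tokens, task_id, lamb=0.1):
        h, _ = self.encoder(self.emb(tokens))
        rep = self.attn(h)
        rel_logits = self.heads[task_id](rep)                              # relation prediction
        task_logits = self.task_discriminator(GradReverse.apply(rep, lamb))  # adversarial branch
        return rel_logits, task_logits   # train with cross-entropy on both outputs
```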

Yadav Shweta, Ramesh Srivastsa, Saha Sriparna, Ekbal Asif

2020-Aug-27

General

Thyroid imaging reporting and data system (TIRADS) for ultrasound features of nodules: multicentric retrospective study in China.

In Endocrine

PURPOSE : To establish a practical and simplified Chinese thyroid imaging reporting and data system (C-TIRADS) based on the Chinese patient database.

METHODS : A total of 2141 thyroid nodules that were neither cystic nor spongy were used in the current study. These specimens were derived from 2141 patients in 131 alliance hospitals of the Chinese Artificial Intelligence Alliance for Thyroid and Breast Ultrasound. The ultrasound features, including location, orientation, margin, halo, composition, echogenicity, echotexture, echogenic foci and posterior features, were assessed. Univariate and multivariate analyses were performed to investigate the association between ultrasound features and malignancy. The regression equation, the weighting method, and the counting method were used to determine the malignancy risk of the thyroid nodules. The areas under the receiver operating characteristic curve (Az values) were calculated.

RESULTS : Of the 2141 thyroid nodules, 1572 were benign, 565 were malignant, and 4 were borderline. A vertical orientation, an ill-defined or irregular margin (including extrathyroidal extension), microcalcifications, solid composition, and markedly hypoechoic echogenicity were positively associated with malignancy, while comet-tail artifacts were negatively associated with malignancy. The logistic regression equation yielded the highest Az value of 0.913, which was significantly higher than the values obtained with the weighting method (0.893) and the counting method (0.890); however, no significant difference was found between the latter two. The C-TIRADS, based on the counting method, was designed following the principle of balancing the diagnostic performance and sensitivity of the risk stratification with ease of use.
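
For readers unfamiliar with the scoring strategies compared above, the sketch below contrasts a logistic-regression score with a simple feature-counting score by their Az (AUC) values. The data, features and thresholds are simulated for illustration and are not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: rows = nodules, columns = binary suspicious features
# (vertical orientation, irregular margin, microcalcifications, solid,
#  markedly hypoechoic) plus one protective feature (comet-tail artifact).
X = rng.integers(0, 2, size=(500, 6))
y = (X[:, :5].sum(axis=1) - X[:, 5] + rng.normal(0, 1, 500) > 2).astype(int)

# 1) Regression equation: weights learned from the data
lr = LogisticRegression().fit(X, y)
az_regression = roc_auc_score(y, lr.predict_proba(X)[:, 1])

# 2) Counting method: +1 per suspicious feature, -1 for the protective one
counting_score = X[:, :5].sum(axis=1) - X[:, 5]
az_counting = roc_auc_score(y, counting_score)

print(f"Az (regression): {az_regression:.3f}  Az (counting): {az_counting:.3f}")
```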

CONCLUSIONS : A relatively simple C-TIRADS was established using the counting value of positive and negative ultrasound features.

Zhou JianQiao, Song YanYan, Zhan WeiWei, Wei Xi, Zhang Sheng, Zhang RuiFang, Gu Ying, Chen Xia, Shi Liying, Luo XiaoMao, Yang LiChun, Li QiaoYing, Bai BaoYan, Ye XinHua, Zhai Hong, Zhang Hua, Jia XiaoHong, Dong YiJie, Zhang JingWen, Yang ZhiFang, Zhang HuiTing, Zheng Yi, Xu WenWen, Lai LiMei, Yin LiXue

2020-Aug-27

Biopsy, Diagnostic imaging, Fine-needle, Risk assessment, Thyroid nodule, Ultrasonography

General

AI (Artificial Intelligence) and Hypertension Research.

In Current hypertension reports

PURPOSE OF REVIEW : This review highlights that, to use artificial intelligence (AI) tools effectively for hypertension research, a new foundation for understanding the biology of hypertension must first be built by applying genome and RNA sequencing technologies, and the tools derived from them, on a broad scale in hypertension.

RECENT FINDINGS : For the last few years, progress in the research and management of essential hypertension has been stagnating, while at the same time the sequencing of the human genome has been generating many new research tools and opportunities to investigate the biology of hypertension. Cancer research has applied modern tools derived from DNA and RNA sequencing on a large scale, enabling an improved understanding of cancer biology and leading to many clinical applications. Studies in hypertension using whole-genome, exome, or RNA sequencing tools total less than 2% of the number of comparable cancer studies. While it is true that sequencing the genome of cancer tissue has given cancer research an advantage, tools derived from DNA and RNA sequencing can also be used in hypertension to generate new understanding of how complex protein networks in non-cancer tissue adapt and remain effective when, for example, somatic mutations or environmental inputs change gene expression profiles at different network nodes. The amount of data, and the differences in clinical condition classification at the individual sample level, may be of such magnitude as to overwhelm comprehension. Herein lies the opportunity to use AI tools to analyze data streams derived from DNA and RNA sequencing, combined with clinical data, to generate new hypotheses leading to the discovery of mechanisms and potential target molecules from which drugs or treatments can be developed and tested. Basic and clinical research that takes advantage of new gene sequencing-based tools to uncover the mechanisms by which complex protein networks regulate blood pressure in health and disease will be critical to lifting hypertension research and management out of its stagnation. The use of AI analytic tools will help leverage such insights. However, applying AI tools to the vast amounts of data that certainly exist in hypertension, without taking advantage of new gene sequencing-based research tools, will generate questionable results and will miss many potential new molecular targets and possibly treatments. Without such approaches, the vision of precision medicine for hypertension will be hard to achieve and will most likely not be realized in the near future.

Mueller Franco B

2020-Aug-27

Artificial intelligence, Cancer and hypertension research publications, Deep machine learning algorithms, Gene and protein networks, Hypertension treatment, Target molecules, Whole genome and RNA sequencing

Radiology

Preoperative identification of microvascular invasion in hepatocellular carcinoma by XGBoost and deep learning.

In Journal of cancer research and clinical oncology

PURPOSE : Microvascular invasion (MVI) is a valuable predictor of survival in hepatocellular carcinoma (HCC) patients. This study developed predictive models using eXtreme Gradient Boosting (XGBoost) and deep learning based on CT images to predict MVI preoperatively.

METHODS : In total, 405 patients were included. A total of 7302 radiomic features and 17 radiological features were extracted by a radiomics feature extraction package and by radiologists, respectively. We developed an XGBoost model based on radiomics features, radiological features and clinical variables, and a three-dimensional convolutional neural network (3D-CNN), to predict MVI status. We then compared the efficacy of the two models.
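
As a rough illustration of the RRC-style model described above, the sketch below trains an XGBoost classifier on concatenated radiomics, radiological and clinical feature blocks and reports a validation AUROC. The feature values, the clinical variables and the hyper-parameters are placeholders, not the authors' pipeline.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_patients = 405                                           # cohort size reported in the abstract
radiomics = rng.normal(size=(n_patients, 7302))            # 7302 radiomic features
radiological = rng.integers(0, 2, size=(n_patients, 17))   # 17 radiologist-scored features
clinical = rng.normal(size=(n_patients, 5))                # hypothetical clinical variables
X = np.hstack([radiomics, radiological, clinical])
y = rng.integers(0, 2, size=n_patients)                    # MVI status (placeholder labels)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, subsample=0.8)
model.fit(X_tr, y_tr)
print("validation AUROC:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))
```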

RESULTS : Of the 405 patients, 220 (54.3%) were MVI positive, and 185 (45.7%) were MVI negative. The areas under the receiver operating characteristic curves (AUROCs) of the Radiomics-Radiological-Clinical (RRC) Model and the 3D-CNN Model in the training set were 0.952 (95% confidence interval (CI) 0.923-0.973) and 0.980 (95% CI 0.959-0.993), respectively (p = 0.14). The AUROCs of the RRC Model and the 3D-CNN Model in the validation set were 0.887 (95% CI 0.797-0.947) and 0.906 (95% CI 0.821-0.960), respectively (p = 0.83). Based on the MVI status predicted by the RRC and 3D-CNN Models, mean recurrence-free survival (RFS) was significantly better in the predicted MVI-negative group than in the predicted MVI-positive group (RRC Model: 69.95 vs. 24.80 months, p < 0.001; 3D-CNN Model: 64.06 vs. 31.05 months, p = 0.027).

CONCLUSION : The RRC and 3D-CNN models showed considerable efficacy in identifying MVI preoperatively. These machine learning models may facilitate decision-making in HCC treatment but require further validation.

Jiang Yi-Quan, Cao Su-E, Cao Shilei, Chen Jian-Ning, Wang Guo-Ying, Shi Wen-Qi, Deng Yi-Nan, Cheng Na, Ma Kai, Zeng Kai-Ning, Yan Xi-Jing, Yang Hao-Zhen, Huan Wen-Jing, Tang Wei-Min, Zheng Yefeng, Shao Chun-Kui, Wang Jin, Yang Yang, Chen Gui-Hua

2020-Aug-27

Deep learning, Hepatocellular carcinoma, Micro-vascular invasion, Neural network models, Radiomics

Ophthalmology

Lens-induced myopization and intraocular pressure in young guinea pigs.

In BMC ophthalmology

BACKGROUND : Intraocular pressure (IOP) is an important physiological measure of the eye and is associated with some ocular disorders. We aimed to assess the influence of topical beta blocker-induced IOP reduction on lens-induced axial elongation in young guinea pigs.

METHODS : The experimental study included 20 pigmented guinea pigs (age: 2-3 weeks). Myopia was induced in the right eyes for 5 weeks with -10 diopter lenses. The right eyes additionally received either one drop of carteolol 2% (study group, n = 10) or one drop of artificial tears daily (control group, n = 10), while the contralateral eyes of all animals remained untouched. The outcome parameter was axial elongation during the follow-up period. The mean of all IOP measurements taken during the study was referred to as the mean IOP.

RESULTS : Greater axial elongation was associated with a shorter axial length at baseline (P < 0.001; standardized regression coefficient beta: -0.54) and with lens-induced myopization (P < 0.001; beta: 0.55). In the multivariable model, axial elongation was not significantly correlated with the IOP at study end (P = 0.59), the mean IOP during the study period (P = 0.12), the mean of all IOP measurements (P = 0.17), the difference between the IOP at study end and the baseline IOP (P = 0.38), the difference between the mean IOP during the study period and the baseline IOP (P = 0.11), or the application of carteolol eye drops versus artificial tears eye drops (P = 0.07). The univariate analysis of the relationships between axial elongation and the IOP parameters yielded similar results. The inter-eye difference between the right eye and the left eye in axial elongation was significantly associated with the inter-eye difference in baseline axial length (P = 0.001; beta: -0.67) but not significantly correlated with the inter-eye difference in any of the IOP-related parameters (all P > 0.25).
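
The standardized regression coefficients (beta) reported above can be obtained by z-scoring the outcome and the predictors before fitting an ordinary least-squares model; the sketch below shows this on simulated data. Variable names and values are invented and do not reflect the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 40  # 20 animals x 2 eyes, purely illustrative
df = pd.DataFrame({
    "baseline_axial_length": rng.normal(8.0, 0.3, n),   # mm (simulated)
    "lens_induced": rng.integers(0, 2, n),               # 1 = lens-induced myopization
    "mean_iop": rng.normal(18, 3, n),                     # mmHg (simulated)
})
df["axial_elongation"] = (-0.5 * df["baseline_axial_length"]
                          + 0.5 * df["lens_induced"]
                          + rng.normal(0, 0.2, n))

z = (df - df.mean()) / df.std()                           # standardize all variables
X = sm.add_constant(z[["baseline_axial_length", "lens_induced", "mean_iop"]])
fit = sm.OLS(z["axial_elongation"], X).fit()
print(fit.params)    # standardized beta coefficients
print(fit.pvalues)   # p-values for each predictor
```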

CONCLUSIONS : In young guinea pigs with or without lens-induced axial elongation, neither the physiological IOP nor the IOP reduced by carteolol, a topical beta-blocker, was associated with the magnitude of axial elongation. These results suggest that IOP, regardless of whether it is influenced by carteolol, does not play a major role in axial elongation in young guinea pigs.

Dong Li, Li Yi Fan, Wu Hao Tian, Di Kou Hai, Lan Yin Jun, Wang Ya Xing, Jonas Jost B, Wei Wen Bin

2020-Aug-25

Axial length, Beta-blocker, Intraocular pressure, Myopia, Refractive error

Dermatology

Evaluation of the Diagnostic Accuracy of an Online Artificial Intelligence Application for Skin Disease Diagnosis.

In Acta dermato-venereologica; h5-index 37.0

Artificial intelligence (AI) algorithms for the automated classification of skin diseases are available on the consumer market, but studies of their diagnostic accuracy are rare. We assessed the diagnostic accuracy of an open-access AI application (Skin Image Search™) for the recognition of skin diseases. Clinical images, including tumours and infective and inflammatory skin diseases, were collected at the Department of Dermatology at Sahlgrenska University Hospital and uploaded for classification by the online application. The AI algorithm classified the images, returning 5 differential diagnoses for each, which were then compared with the diagnoses made clinically by dermatologists and/or histologically. We included 521 images portraying 26 diagnoses. The diagnostic accuracy was 56.4% for the top 5 suggested diagnoses and 22.8% when only the most probable diagnosis was considered. Diagnostic accuracy varied considerably across diagnostic groups. The online application demonstrated low diagnostic accuracy compared with dermatologist evaluation and needs further development.
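
The top-1 versus top-5 accuracy figures above reflect whether the reference diagnosis appears among the application's ranked suggestions; a minimal sketch of that computation, on invented labels rather than the study's images, is shown below.

```python
from typing import List

def top_k_accuracy(true_labels: List[str], predictions: List[List[str]], k: int) -> float:
    """Fraction of cases whose reference diagnosis appears among the first k suggestions."""
    hits = sum(true in preds[:k] for true, preds in zip(true_labels, predictions))
    return hits / len(true_labels)

# Hypothetical example: reference diagnosis vs the app's ranked differentials
truth = ["psoriasis", "basal cell carcinoma", "eczema"]
app_output = [
    ["eczema", "psoriasis", "tinea", "lichen planus", "dermatitis"],
    ["melanoma", "nevus", "seborrheic keratosis", "actinic keratosis", "dermatofibroma"],
    ["eczema", "contact dermatitis", "psoriasis", "scabies", "urticaria"],
]
print("top-1:", top_k_accuracy(truth, app_output, 1))   # 1 of 3 correct
print("top-5:", top_k_accuracy(truth, app_output, 5))   # 2 of 3 correct
```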

Zaar Oscar, Larson Alexander, Polesie Sam, Saleh Karim, Tarstedt Mikael, Olives Antonio, Suárez Andrea, Gillstedt Martin, Neittaanmäki Noora

2020-Aug-27

Dermatology, online diagnostics, skin disease, artificial intelligence