JCO Clinical Cancer Informatics
PURPOSE: We developed a system to automate analysis of the clinical oncology scientific literature from bibliographic databases and to match articles to specific patient cohorts, answering questions about the efficacy of a treatment. The approach attempts to replicate a clinician's mental process when reviewing published literature in the context of a patient case. We describe the system and evaluate its performance.
METHODS: We developed separate ground truth data sets for each task described in the paper. The first was used to measure natural language processing (NLP) accuracy across approximately 1,300 papers covering roughly 3,100 statements and 25 concepts; performance was evaluated with a standard F1 score. The ground truth for the expert classifier model was generated by dividing papers cited in clinical guidelines into training and test sets in an 80:20 ratio, and performance was evaluated for accuracy, sensitivity, and specificity.
RESULTS: The NLP models identified individual attributes with F1 scores of 0.7 to 0.9, depending on the attribute of interest. The expert classifier machine learning model classified individual records with an accuracy of 0.93 (95% CI, 0.90 to 0.96; P < .0001) and sensitivity and specificity of 0.95 and 0.91, respectively. Using a decision boundary of 0.5 for the positive (expert) label, the classifier achieved an F1 score of 0.92.
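For readers less familiar with these evaluation measures, the sketch below shows how accuracy, sensitivity, specificity, and F1 are derived from a binary confusion matrix. This is an illustration only, not the authors' code; the counts are hypothetical values chosen to yield metrics in the range reported above.

```python
# Illustrative sketch (not the study's implementation): how the reported
# classifier metrics relate to a binary confusion matrix.

def binary_metrics(tp, fp, tn, fn):
    """Compute accuracy, sensitivity, specificity, and F1 from raw counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall for the positive ("expert") label
    specificity = tn / (tn + fp)      # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical test-set counts at a 0.5 decision boundary for the expert label
acc, sens, spec, f1 = binary_metrics(tp=95, fp=9, tn=91, fn=5)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f} f1={f1:.2f}")
# → accuracy=0.93 sensitivity=0.95 specificity=0.91 f1=0.93
```

Note that F1 combines precision and sensitivity (recall) rather than specificity, which is why a classifier can report all four values somewhat independently.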
CONCLUSION: The system identified and extracted evidence from the oncology literature with high accuracy, sensitivity, and specificity. This tool enables timely access to the most relevant biomedical literature, providing critical support for evidence-based practice in areas of rapidly evolving science.
Fernando Suarez Saiz, Corey Sanders, Rick Stevens, Robert Nielsen, Michael Britt, Leemor Yuravlivker, Anita M. Preininger, Gretchen P. Jackson