
In medRxiv: the preprint server for health sciences

OBJECTIVES: To assess the performance bias caused by sampling data into training and test sets in a mammography radiomics study.

METHODS: Mammograms from 700 women were used to study upstaging of ductal carcinoma in situ. The dataset was repeatedly shuffled and split into training (n=400) and test (n=300) cases forty times. For each split, cross-validation was used for training, followed by evaluation on the held-out test set. Regularized logistic regression and support vector machines were used as the machine learning classifiers. For each split and classifier type, multiple models were created based on radiomics and/or clinical features.
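The repeated shuffle-and-split procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic stand-ins (the actual radiomics and clinical features are not available), and only the regularized logistic regression arm is shown, with a hypothetical grid of regularization strengths.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the 700-case dataset; feature count is arbitrary.
X, y = make_classification(n_samples=700, n_features=20, random_state=0)

train_aucs, test_aucs = [], []
for seed in range(40):  # forty repeated shuffles/splits, as in the study
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=400, test_size=300, random_state=seed, stratify=y
    )
    # Regularized logistic regression, tuned by 5-fold cross-validation
    # on the training cases only (C grid is a hypothetical choice).
    clf = GridSearchCV(
        LogisticRegression(max_iter=1000),
        {"C": [0.01, 0.1, 1.0, 10.0]},
        cv=5,
        scoring="roc_auc",
    )
    clf.fit(X_tr, y_tr)
    train_aucs.append(roc_auc_score(y_tr, clf.predict_proba(X_tr)[:, 1]))
    test_aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# The spread of these ranges across splits is the "performance bias" at issue.
print(f"train AUC range: {min(train_aucs):.2f}-{max(train_aucs):.2f}")
print(f"test AUC range:  {min(test_aucs):.2f}-{max(test_aucs):.2f}")
```

Running such a loop makes the study's point visible directly: even with a fixed modeling pipeline, the test AUC range depends on which 300 cases happened to land in the test set.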

RESULTS: Area under the curve (AUC) performance varied considerably across the different data splits (e.g., radiomics regression model: train 0.58–0.70, test 0.59–0.73). Performance for regression models showed a tradeoff in which better training performance led to worse test performance and vice versa. Cross-validation over all cases reduced this variability, but required samples of 500+ cases to yield representative estimates of performance.

CONCLUSIONS: In medical imaging, clinical datasets are often relatively small. Models built from different training sets may not be representative of the whole dataset. Depending on the selected data split and model, performance bias could lead to inappropriate conclusions that might influence the clinical significance of the findings. Optimal strategies for test set selection should be developed to ensure that study conclusions are appropriate.

Hou Rui, Lo Joseph Y, Marks Jeffrey R, Hwang E Shelley, Grimm Lars J

2023-Feb-23