
In Modern Pathology: An Official Journal of the United States and Canadian Academy of Pathology, Inc.

Image analysis assistance with artificial intelligence (AI) has become one of the great promises in pathology over recent years, with many scientific studies published each year. Nonetheless, and perhaps surprisingly, only a few image AI systems are in routine clinical use so far. A major reason for this is the lack of validation of the robustness of many AI systems: beyond a narrow context, the large variability in digital images due to differences in preanalytical laboratory procedures, staining procedures, and scanners can be challenging for subsequent image analysis. A resulting faulty AI analysis may bias the pathologist, contribute to incorrect diagnoses, and therefore lead to inappropriate therapy or prognosis. In this study, a pretrained AI assistance tool for the quantification of Ki-67, estrogen receptor (ER), and progesterone receptor (PR) in breast cancer was evaluated on a realistic study set representative of clinical routine, comprising a total of 204 slides (72 Ki-67, 66 ER, and 66 PR slides). This represents the cohort with the largest image variance for AI tool evaluation to date, covering 3 staining systems, 5 whole-slide scanners, and 1 microscope camera. These routine cases were collected without manual preselection and analyzed by 10 participating pathologists from 8 sites. With respect to clinical categories, agreement rates between scoring with and without the assistance of the AI tool were 87.6% for Ki-67 and 89.4% for ER/PR for individual pathologists. Individual AI analysis results were confirmed by the majority of pathologists in 95.8% of Ki-67 cases and 93.2% of ER/PR cases. The statistical analysis provides evidence of high interobserver variance between pathologists (Krippendorff's α = 0.69) in conventional immunohistochemical quantification. Pathologist agreement increased slightly when using AI support (Krippendorff's α = 0.72). The agreement rates of pathologist scores with and without AI assistance provide evidence for the reliability of immunohistochemical scoring supported by the investigated AI tool under a large number of environmental variables that influence the quality of the tissue images being diagnosed.
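
To illustrate the kind of agreement statistics reported above, the sketch below computes a raw agreement rate and Krippendorff's α for a small set of categorical scores. This is not the study's code: the example data are invented, the function names are made up for this illustration, and the α implementation is simplified to the nominal (unordered-category) case rather than any specific treatment the authors may have used.

```python
# Minimal sketch (not the study's code): raw agreement rate and nominal
# Krippendorff's alpha for categorical scores. All data below are invented.
import numpy as np


def agreement_rate(scores_a, scores_b):
    """Fraction of cases assigned the same clinical category in both scorings."""
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    return float(np.mean(a == b))


def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    ratings: 2-D array, shape (n_raters, n_cases); np.nan marks missing scores.
    """
    ratings = np.asarray(ratings, dtype=float)
    cats = np.unique(ratings[~np.isnan(ratings)])
    idx = {c: i for i, c in enumerate(cats)}
    coincidence = np.zeros((len(cats), len(cats)))

    for unit in ratings.T:                      # one column = one case
        vals = unit[~np.isnan(unit)]
        m = len(vals)
        if m < 2:
            continue                            # a single rating yields no pairs
        for i, a in enumerate(vals):
            for j, b in enumerate(vals):
                if i != j:                      # pairs from different raters only
                    coincidence[idx[a], idx[b]] += 1.0 / (m - 1)

    n_c = coincidence.sum(axis=1)               # per-category marginals
    n = n_c.sum()
    d_observed = (coincidence.sum() - np.trace(coincidence)) / n
    d_expected = (n ** 2 - (n_c ** 2).sum()) / (n * (n - 1))
    return 1.0 - d_observed / d_expected


# Invented scores: 3 raters x 6 cases, categories coded 0/1/2, NaN = not scored.
ratings = np.array([
    [0, 1, 2, 1, 0, 2],
    [0, 1, 2, 2, 0, 2],
    [0, 1, 1, 1, np.nan, 2],
])
print("agreement rater 1 vs rater 2:", agreement_rate(ratings[0], ratings[1]))
print("Krippendorff's alpha (nominal):", round(krippendorff_alpha_nominal(ratings), 3))
```

In the study's setting, the "raters" would be the participating pathologists' scores for a given marker (with or without AI support), and α is read in the usual way: higher values indicate lower interobserver variance.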

Abele Niklas, Tiemann Katharina, Krech Till, Wellmann Axel, Schaaf Christian, Länger Florian, Peters Anja, Donner Andreas, Keil Felix, Daifalla Khalid, Mackens Marina, Mamilos Andreas, Minin Evgeny, Krümmelbein Michel, Krause Linda, Stark Maria, Zapf Antonia, Päpper Marc, Hartmann Arndt, Lang Tobias

2023-Mar

digital pathology, mammary carcinoma, surgical pathology