ArXiv Preprint
The large language model ChatGPT has drawn extensive attention because of its human-like expression and reasoning abilities. In this study, we investigate the feasibility of using ChatGPT to translate radiology reports into plain language for patients and healthcare providers so that they are better educated for improved healthcare. Radiology reports
from 62 low-dose chest CT lung cancer screening scans and 76 brain MRI
metastases screening scans were collected in the first half of February for
this study. According to the evaluation by radiologists, ChatGPT can successfully translate radiology reports into plain language, with an average score of 4.1 on a five-point scale, 0.07 instances of missing information, and 0.11 instances of misinformation per report. The suggestions provided by ChatGPT are generally relevant, such as keeping follow-up appointments with doctors and closely monitoring any symptoms; for about 37% of the 138 cases in total, ChatGPT offers specific suggestions based on the findings in the report. ChatGPT
also shows some randomness in its responses, occasionally over-simplifying or omitting information, which can be mitigated with a more detailed prompt. Furthermore, ChatGPT's results are compared with those of the newly released large language model GPT-4, showing that GPT-4 can significantly improve the quality of the translated reports. Our results demonstrate that it is feasible to utilize large language models in clinical education, and that further efforts are needed to address their limitations and maximize their potential.
Qing Lyu, Josh Tan, Mike E. Zapadka, Janardhana Ponnatapuram, Chuang Niu, Ge Wang, Christopher T. Whitlow
2023-03-16