ArXiv Preprint
The digitization of healthcare has facilitated the sharing and reuse of
medical data but has also raised concerns about confidentiality and privacy.
HIPAA (Health Insurance Portability and Accountability Act) mandates the
removal of re-identifying information before the dissemination of medical
records. Thus, effective and efficient solutions for de-identifying medical
data, especially data in free-text form, are urgently needed. While various
computer-assisted de-identification methods, both rule-based and
learning-based, have been developed and used in prior practice, such solutions
still lack generalizability or need to be fine-tuned for different scenarios,
which significantly restricts their wider use. The advancement of large
language models (LLMs) such as ChatGPT and GPT-4 has shown great potential for
processing text data in the medical domain with zero-shot in-context learning,
especially for the task of privacy protection, as these models can identify
confidential information through their powerful named entity recognition (NER)
capability. In this work, we developed a novel GPT-4-enabled de-identification
framework ("DeID-GPT") to automatically identify and remove identifying
information. Compared with existing, commonly used medical text
de-identification methods, our DeID-GPT showed the highest accuracy and
remarkable reliability in masking private information in unstructured medical
text while preserving the original structure and meaning of the text. This
study is one of the earliest to utilize ChatGPT and GPT-4 for medical text
processing and de-identification, and it provides insights for further
research and solution development on the use of LLMs such as ChatGPT and GPT-4
in healthcare. Code and benchmark data information are available at
https://github.com/yhydhx/ChatGPT-API.
Zhengliang Liu, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Wei Liu, Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu, Xiang Li
2023-03-20
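
The abstract describes zero-shot de-identification via in-context prompting of GPT-4. As a rough illustration of what such a call can look like, the sketch below prompts a chat model to mask protected health information in a clinical note. It assumes the openai Python client (the pre-1.0 interface available in early 2023); the prompt wording, model name, and PHI categories are illustrative assumptions, not the authors' exact configuration, which is in the linked repository.

```python
# Hypothetical sketch of zero-shot de-identification with the OpenAI chat API
# (openai<1.0 interface). The prompt text, model name, and PHI categories are
# illustrative assumptions, not the paper's actual configuration.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

DEID_INSTRUCTION = (
    "You are a medical de-identification assistant. Replace every piece of "
    "protected health information (names, dates, locations, phone numbers, "
    "medical record numbers, etc.) in the following clinical note with the "
    "placeholder [REDACTED]. Keep all other text, structure, and meaning "
    "unchanged. Return only the de-identified note."
)

def deid_note(note: str, model: str = "gpt-4") -> str:
    """Ask the model to mask PHI in a single clinical note (zero-shot)."""
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,  # deterministic output is preferable for redaction
        messages=[
            {"role": "system", "content": DEID_INSTRUCTION},
            {"role": "user", "content": note},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    sample = "Patient John Doe, seen on 03/20/2023 at Mercy Hospital, reports chest pain."
    print(deid_note(sample))
```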