
In Radiology: Artificial Intelligence

Deep learning models are currently the cornerstone of artificial intelligence in medical imaging. While progress continues, the generic technological core of convolutional neural networks (CNNs) has seen only modest innovation over the past several years, leaving room for improvement. More recently, transformer networks have emerged that replace convolutions with an attention mechanism, and they have already matched or exceeded the performance of CNNs in many tasks. Transformers require very large amounts of training data, even more than CNNs, but obtaining well-curated labeled data is expensive and difficult. A possible solution is transfer learning: pretraining on a self-supervised task using very large amounts of unlabeled medical data, then fine-tuning the pretrained network on specific medical imaging tasks with relatively modest data requirements. The authors believe that the availability of a large-scale, 3D-capable, and extensively pretrained transformer model would be highly beneficial to the medical imaging and research community. In this article, the authors discuss the challenges and obstacles of training a very large medical imaging transformer, including data needs, biases, training tasks, network architecture, privacy concerns, and computational requirements. The obstacles are substantial but not insurmountable for resourceful collaborative teams that may include academia and information technology industry partners. © RSNA, 2022
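As a concrete illustration of the pretrain-then-fine-tune workflow the abstract describes, below is a minimal, hypothetical PyTorch sketch: a small vision transformer encoder is first trained on a self-supervised masked-patch reconstruction task using unlabeled images, then fine-tuned with a small labeled set on a downstream classification task. The tiny encoder, the specific pretraining task, the toy random tensors standing in for scans, and the two-class head are all illustrative assumptions, not the authors' proposed model.

# Hypothetical sketch of self-supervised pretraining followed by fine-tuning.
# Shapes, sizes, and data are toy assumptions for illustration only.
import torch
import torch.nn as nn

PATCH, N_PATCHES, DIM = 16, 64, 128  # 128x128 image -> 8x8 grid of patches

class TinyViTEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(PATCH * PATCH, DIM)           # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, N_PATCHES, DIM))
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patches):                              # (B, N, P*P)
        return self.encoder(self.embed(patches) + self.pos)  # (B, N, DIM)

def patchify(imgs):                                          # (B, 1, 128, 128)
    B = imgs.shape[0]
    p = imgs.unfold(2, PATCH, PATCH).unfold(3, PATCH, PATCH)
    return p.reshape(B, N_PATCHES, PATCH * PATCH)

# Stage 1: self-supervised pretraining on unlabeled images.
encoder, head = TinyViTEncoder(), nn.Linear(DIM, PATCH * PATCH)
opt = torch.optim.AdamW(list(encoder.parameters()) + list(head.parameters()), 1e-4)
for _ in range(5):                                   # toy loop, random data
    patches = patchify(torch.randn(8, 1, 128, 128))  # stands in for scans
    mask = torch.rand(8, N_PATCHES, 1) < 0.5         # hide half the patches
    recon = head(encoder(patches * ~mask))           # reconstruct hidden patches
    loss = ((recon - patches) ** 2 * mask).mean()    # loss only on masked patches
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune the pretrained encoder on a small labeled task.
clf = nn.Linear(DIM, 2)                              # e.g., normal vs. abnormal
opt = torch.optim.AdamW(list(encoder.parameters()) + list(clf.parameters()), 1e-5)
patches = patchify(torch.randn(8, 1, 128, 128))
labels = torch.randint(0, 2, (8,))
logits = clf(encoder(patches).mean(dim=1))           # mean-pool patch tokens
loss = nn.functional.cross_entropy(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()

The key design point, which the abstract argues for, is that the expensive stage 1 needs no labels and could in principle be done once at scale, while stage 2 adapts the shared encoder to each clinical task with comparatively little labeled data.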

Martin J. Willemink, Holger R. Roth, Veit Sandfort

November 2022

Computer-aided Diagnosis (CAD), Convolutional Neural Network (CNN), Informatics, Transfer Learning