arXiv Preprint
Cell detection is a fundamental task in computational pathology that can be
used for extracting high-level medical information from whole-slide images. For
accurate cell detection, pathologists often zoom out to understand the
tissue-level structures and zoom in to classify cells based on their morphology
and the surrounding context. However, there is a lack of efforts to reflect
such behaviors by pathologists in the cell detection models, mainly due to the
lack of datasets containing both cell and tissue annotations with overlapping
regions. To overcome this limitation, we propose and publicly release OCELOT, a
dataset purposely dedicated to the study of cell-tissue relationships for cell
detection in histopathology. OCELOT provides overlapping cell and tissue
annotations on images acquired from multiple organs. Within this setting, we
also propose multi-task learning approaches that benefit from learning both
cell and tissue tasks simultaneously. When compared against a model trained
only for the cell detection task, our proposed approaches improve cell
detection performance on three datasets: the proposed OCELOT, the public TIGER,
and an internal CARP dataset. On the OCELOT test set in particular, we observe
an improvement of up to 6.79 in F1-score. We believe the contributions of this paper,
including the release of the OCELOT dataset at
https://lunit-io.github.io/research/publications/ocelot, are a crucial starting
point toward the important research direction of incorporating cell-tissue
relationships in computational pathology.
Jeongun Ryu, Aaron Valero Puche, JaeWoong Shin, Seonwook Park, Biagio Brattoli, Jinhee Lee, Wonkyung Jung, Soo Ick Cho, Kyunghyun Paeng, Chan-Young Ock, Donggeun Yoo, Sérgio Pereira
2023-03-23
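
As a rough illustration of the multi-task idea described in the abstract, the sketch below pairs a cell detection head with a tissue segmentation head on top of a shared encoder and optimizes a weighted sum of the two losses. This is a minimal, hypothetical sketch: the toy encoder, head definitions, class counts, and loss weighting are assumptions made for illustration and do not reflect the authors' actual architecture or training setup.

```python
# Minimal multi-task sketch (PyTorch): one shared encoder, two task heads.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class SharedEncoderMultiTask(nn.Module):
    def __init__(self, num_cell_classes=2, num_tissue_classes=3):
        super().__init__()
        # Tiny stand-in encoder; a real model would use a deeper backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Two task-specific heads share the same encoder features.
        self.cell_head = nn.Conv2d(64, num_cell_classes, 1)
        self.tissue_head = nn.Conv2d(64, num_tissue_classes, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.cell_head(feats), self.tissue_head(feats)


model = SharedEncoderMultiTask()
criterion = nn.CrossEntropyLoss()

# Dummy batch: images plus per-pixel cell and tissue labels.
images = torch.randn(2, 3, 64, 64)
cell_targets = torch.randint(0, 2, (2, 64, 64))
tissue_targets = torch.randint(0, 3, (2, 64, 64))

cell_logits, tissue_logits = model(images)
# Joint objective: weighted sum of the cell and tissue losses.
loss = criterion(cell_logits, cell_targets) + 0.5 * criterion(tissue_logits, tissue_targets)
loss.backward()
```

The point of the sketch is simply that both heads share one encoder, so gradients from the tissue task can shape the features used for cell detection, which is the kind of cell-tissue interaction the multi-task approaches in the paper aim to exploit.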