ArXiv Preprint
The integration of artificial intelligence into digital pathology has the
potential to automate and improve various tasks, such as image analysis and
diagnostic decision-making. Yet, the inherent variability of tissues, together
with the need for image labeling, leads to biased datasets that limit the
generalizability of algorithms trained on them. One of the emerging solutions
for this challenge is synthetic histological images. However, debiasing real
datasets requires not only generating photorealistic images but also
controlling the features within them. A common approach is to use generative
methods that perform image translation between semantic masks that reflect
prior knowledge of the tissue and a histological image. However, unlike in
other image domains, the complex structure of the tissue prevents the
straightforward creation of the histology semantic masks required as input to
the image translation model, while extracting semantic masks from real images
limits the scalability of the process. In this work, we introduce a scalable
generative model, coined DEPAS, that captures tissue structure and generates
high-resolution semantic masks with state-of-the-art quality. We demonstrate
the ability of DEPAS to
masks with state-of-the-art quality. We demonstrate the ability of DEPAS to
generate realistic semantic maps of tissue for three types of organs: skin,
prostate, and lung. Moreover, we show that these masks can be processed using a
generative image translation model to produce photorealistic histology images
of two types of cancer with two different staining techniques.
Finally, we harness DEPAS to generate multi-label semantic masks that capture
different cell-type distributions and use them to produce histological images
with on-demand cellular features. Overall, our work provides a state-of-the-art
solution for the challenging task of generating synthetic histological images
while controlling their semantic information in a scalable way.
Ariel Larey, Nati Daniel, Eliel Aknin, Yael Fisher, Yonatan Savir
2023-02-13