
In Proceedings of SPIE--the International Society for Optical Engineering

Object-based co-localization of fluorescent signals allows the assessment of interactions between two (or more) biological entities using spatial information. It relies on highly accurate object identification to separate fluorescent signals from the background. Object detectors using convolutional neural networks (CNNs) trained on annotated samples can facilitate the process by detecting and counting fluorescently labeled cells in fluorescence photomicrographs. However, datasets containing segmented annotations of co-localized cells are generally not available, and creating a new dataset with delineated masks is labor-intensive. Moreover, the co-localization coefficient is rarely used as a component of CNN training, even though it may aid with localizing and detecting objects during both training and testing. In this work, we address these issues by incorporating a quantification coefficient for co-localization, the Manders overlap coefficient (MOC) [1], as a single-layer branch in a CNN. Fully convolutional one-stage object detection (FCOS) [2] with a ResNet-101 backbone served as the base network for evaluating the effectiveness of the novel branch in assisting bounding box prediction. Training data were sourced from lab-curated fluorescence images of neurons from the rat hippocampus, piriform cortex, somatosensory cortex, and amygdala. Results suggest that the modified FCOS with MOC outperformed the original FCOS model in detecting fluorescence signals, improving mean average precision (mAP) by 1.1%. The model can be downloaded from
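For reference, the MOC used above quantifies overlap between two intensity channels as MOC = Σᵢ RᵢGᵢ / √(Σᵢ Rᵢ² · Σᵢ Gᵢ²), ranging from 0 (no overlap) to 1 (identical spatial distributions). The abstract does not specify the authors' implementation; the following is a minimal NumPy sketch of the standard definition, with the function name chosen here for illustration:

```python
import numpy as np

def manders_overlap_coefficient(red, green):
    """Manders overlap coefficient (MOC) between two intensity channels.

    MOC = sum(R_i * G_i) / sqrt(sum(R_i^2) * sum(G_i^2)).
    Both inputs must have the same shape (e.g. two channels of one image).
    """
    r = np.asarray(red, dtype=np.float64).ravel()
    g = np.asarray(green, dtype=np.float64).ravel()
    denom = np.sqrt(np.sum(r ** 2) * np.sum(g ** 2))
    if denom == 0.0:
        # At least one channel is entirely zero: no measurable overlap.
        return 0.0
    return float(np.sum(r * g) / denom)

# A channel compared with itself is perfectly co-localized (MOC = 1),
# while spatially disjoint signals give MOC = 0.
a = np.array([[0.2, 0.8], [0.5, 0.1]])
b = np.array([[0.0, 0.0], [0.0, 0.9]])  # overlaps `a` only in one pixel
print(manders_overlap_coefficient(a, a))
print(manders_overlap_coefficient(a, b))
```

Because MOC is differentiable in the pixel intensities, a quantity of this form can in principle be exposed to a network branch during training, which is the motivation the abstract describes.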

Dou Yimeng, Tsai Yi-Hua, Liu Chih-Chieh, Hobson Brad A, Lein Pamela J


Co-localization, Deep learning, Fluorescence microscopy, High-content screening, Object Detection, Pattern recognition and classification