3D instance segmentation for unlabeled imaging modalities is a challenging but essential task, as collecting expert annotations can be expensive and time-consuming. Existing works segment a new modality by either deploying pre-trained models optimized on diverse training data or sequentially conducting image translation and segmentation with two relatively independent networks. In this work, we propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation simultaneously using a unified network with weight sharing. Since the image translation layer can be removed at inference time, our proposed model introduces no computational cost beyond that of a standard segmentation model. For optimizing CySGAN, besides the CycleGAN losses for image translation and supervised losses for the annotated source domain, we also utilize self-supervised and segmentation-based adversarial objectives to enhance model performance by leveraging unlabeled target domain images. We benchmark our approach on the task of 3D neuronal nuclei segmentation with annotated electron microscopy (EM) images and unlabeled expansion microscopy (ExM) data. The proposed CySGAN outperforms pre-trained generalist models, feature-level domain adaptation models, and baselines that conduct image translation and segmentation sequentially. Our implementation and the newly collected, densely annotated ExM zebrafish brain nuclei dataset, named NucExM, are publicly available at https://connectomics-bazaar.github.io/proj/CySGAN/index.html.
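To make the combined objective concrete, here is a minimal PyTorch-style sketch of one joint training step, assuming hypothetical modules `gen_s2t`, `gen_t2s`, `disc_t`, and `seg_net`. It keeps only the CycleGAN translation terms and the supervised segmentation term, omits the self-supervised and segmentation-based adversarial objectives mentioned above, and is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def joint_step(gen_s2t, gen_t2s, disc_t, seg_net,
               src_img, src_mask, lambda_cyc=10.0, lambda_seg=1.0):
    """One simplified generator update: CycleGAN-style translation losses
    plus supervised segmentation on the annotated source domain."""
    # Translate annotated source images into the target modality.
    fake_tgt = gen_s2t(src_img)

    # Adversarial term: translated images should fool the target-domain
    # discriminator.
    logits = disc_t(fake_tgt)
    loss_adv = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))

    # Cycle consistency: translating back should recover the input.
    loss_cyc = F.l1_loss(gen_t2s(fake_tgt), src_img)

    # Supervised segmentation on both the source image and its translated
    # version, so one weight-shared head serves both modalities.
    loss_seg = (F.cross_entropy(seg_net(src_img), src_mask)
                + F.cross_entropy(seg_net(fake_tgt), src_mask))

    return loss_adv + lambda_cyc * loss_cyc + lambda_seg * loss_seg
```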
Weakly Supervised Deep Nuclei Segmentation using Points Annotation in Histopathology Images
Nuclei segmentation is a fundamental task in histopathological image analysis. Typically, such segmentation tasks require significant effort to manually generate pixel-wise annotations for fully supervised training. To alleviate this manual effort, in this paper we propose a novel approach that uses only points annotation. Two types of coarse labels with complementary information are derived from the points annotation and are then utilized to train a deep neural network. A fully-connected conditional random field loss is utilized to further refine the model without introducing extra computational complexity during inference. Experimental results on two nuclei segmentation datasets reveal that the proposed method achieves competitive performance compared to the fully supervised counterpart and the state-of-the-art methods while requiring significantly less annotation effort. Our code is publicly available.
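The abstract does not name the two coarse label types here; one common construction in point-supervised nuclei segmentation is a Voronoi partition of the image by the annotated points, whose cells and ridges provide complementary foreground and background evidence. The sketch below derives such a partition with SciPy, purely as an illustration of turning sparse points into dense coarse labels, not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def voronoi_partition(points, shape):
    """Assign each pixel to its nearest annotated nucleus point.

    `points` is an iterable of (row, col) point annotations; `shape`
    is the (H, W) image size. The Voronoi cell map and its ridges are
    one way to turn sparse points into dense coarse labels.
    """
    seeds = np.zeros(shape, dtype=np.int32)
    for idx, (r, c) in enumerate(points, start=1):
        seeds[r, c] = idx
    # For every non-seed pixel, find the coordinates of the nearest
    # seed; indexing the seed map with them yields the Voronoi cells.
    _, nearest = distance_transform_edt(seeds == 0, return_indices=True)
    cells = seeds[tuple(nearest)]
    # Pixels whose neighbors belong to a different cell lie on a
    # Voronoi ridge and can act as coarse background evidence.
    ridge = (np.diff(cells, axis=0, prepend=cells[:1]) != 0) | \
            (np.diff(cells, axis=1, prepend=cells[:, :1]) != 0)
    return cells, ridge
```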
- Award ID(s): 1747778
- PAR ID: 10105317
- Date Published:
- Journal Name: Proceedings of Machine Learning Research
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Segmentation of structural components in infrastructure inspection images is crucial for automated and accurate condition assessment. While deep neural networks hold great potential for this task, existing methods typically require fully annotated ground-truth masks, which are time-consuming and labor-intensive to create. This paper introduces the Scribble-supervised Structural Component Segmentation Network (ScribCompNet), the first weakly supervised method requiring only scribble annotations for multiclass structural component segmentation. ScribCompNet features a dual-branch architecture with higher-resolution refinement to enhance fine detail detection. It extends supervision from labeled to unlabeled pixels through a combined objective function incorporating scribble annotation, dynamic pseudo label, semantic context enhancement, and scale-adaptive harmony losses. Experimental results show that ScribCompNet outperforms other scribble-supervised methods and most fully supervised counterparts, achieving 90.19% mean intersection over union (mIoU) with an 80% reduction in labeling time. Further evaluations confirm the effectiveness of the novel designs and robust performance, even with lower-quality scribble annotations.
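As a rough illustration of scribble supervision extended by pseudo labels, the sketch below implements two of the simplest loss terms in PyTorch; the names and weighting are hypothetical, and the semantic context enhancement and scale-adaptive harmony losses from the full objective are not shown.

```python
import torch
import torch.nn.functional as F

IGNORE = 255  # unlabeled pixels carry this value and yield no gradient

def scribble_objective(logits, scribbles, pseudo_labels, alpha=0.5):
    """Partial cross-entropy on the sparse scribbled pixels, extended to
    unlabeled pixels through (dynamically generated) pseudo labels."""
    # Supervision only where the annotator actually drew a scribble.
    loss_scribble = F.cross_entropy(logits, scribbles, ignore_index=IGNORE)
    # Supervision on unlabeled pixels via pseudo labels (e.g., confident
    # model predictions), weighted down by alpha.
    loss_pseudo = F.cross_entropy(logits, pseudo_labels, ignore_index=IGNORE)
    return loss_scribble + alpha * loss_pseudo
```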
The Segment Anything Model (SAM) is a recently proposed prompt-based model for generic zero-shot segmentation. With this zero-shot capability, SAM achieves impressive flexibility and precision on various segmentation tasks. However, the current pipeline requires manual prompts during the inference stage, which remains resource-intensive for biomedical image segmentation. In this paper, instead of using prompts during the inference stage, we introduce a pipeline that utilizes SAM, called all-in-SAM, throughout the entire AI development workflow (from annotation generation to model finetuning) without requiring manual prompts at inference time. Specifically, SAM is first employed to generate pixel-level annotations from weak prompts (e.g., points, bounding boxes). Then, the pixel-level annotations are used to finetune the SAM segmentation model rather than training it from scratch. Our experimental results reveal two key findings: 1) the proposed pipeline surpasses the state-of-the-art (SOTA) methods on a nuclei segmentation task on the public MoNuSeg dataset, and 2) finetuning SAM with weak and few annotations achieves competitive performance compared to using strong pixel-wise annotated data.
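A minimal sketch of the annotation-generation stage could look like the following, using Meta's segment-anything package; the checkpoint path, helper name, and one-point-per-nucleus prompting scheme are assumptions, and the subsequent finetuning stage is omitted.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

def points_to_masks(image, points, checkpoint="sam_vit_b.pth"):
    """Turn weak point prompts into pixel-level masks with SAM
    (stage 1 of an all-in-SAM-style pipeline; the checkpoint path
    is a placeholder)."""
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # RGB uint8 array of shape (H, W, 3)
    masks = []
    for x, y in points:
        m, _, _ = predictor.predict(
            point_coords=np.array([[x, y]]),
            point_labels=np.array([1]),   # 1 marks a foreground prompt
            multimask_output=False,
        )
        masks.append(m[0])                # boolean (H, W) mask per point
    # These masks would then serve as pixel-level annotations for
    # finetuning a segmentation model instead of training from scratch.
    return np.stack(masks)
```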
This paper presents a semi-supervised learning framework for a customized semantic segmentation task using multiview image streams. A key challenge of the customized task lies in the limited accessibility of labeled data, owing to the prohibitive manual annotation effort required. We hypothesize that it is possible to leverage multiview image streams that are linked through the underlying 3D geometry, which can provide an additional supervisory signal to train a segmentation model. We formulate a new cross-supervision method using a shape belief transfer: the segmentation belief in one image is used to predict that of another image through epipolar geometry, analogous to shape-from-silhouette. The shape belief transfer provides upper and lower bounds of the segmentation for the unlabeled data, whose gap approaches zero asymptotically as the number of labeled views increases. We integrate this theory to design a novel network that is agnostic to camera calibration, network model, and semantic category, and that bypasses the intermediate process of suboptimal 3D reconstruction. We validate this network by recognizing a customized semantic category per pixel from real-world visual data, including non-human species and subjects of interest in social videos, where attaining large-scale annotation data is infeasible.
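A hedged sketch of the geometric core: given a fundamental matrix between two views, the silhouette in one view constrains the other view to the union of corresponding epipolar lines, an upper bound on the segmentation there. The function below illustrates this with OpenCV; it assumes a known fundamental matrix `F`, whereas the paper's network is calibration-agnostic, so this is an illustration of the idea, not the method itself.

```python
import cv2
import numpy as np

def epipolar_upper_bound(mask1, F, shape2, stride=4):
    """Project the silhouette of view 1 into view 2 as a union of
    epipolar lines: an upper bound on where the object can appear."""
    ys, xs = np.nonzero(mask1)
    pts = np.stack([xs, ys], axis=1)[::stride]
    pts = pts.astype(np.float32).reshape(-1, 1, 2)
    # Epipolar line a*x + b*y + c = 0 in view 2 for each mask pixel.
    lines = cv2.computeCorrespondEpilines(pts, 1, F).reshape(-1, 3)
    h, w = shape2
    upper = np.zeros((h, w), dtype=np.uint8)
    for a, b, c in lines:
        if abs(b) < 1e-6:   # (near-)vertical line; skipped for brevity
            continue
        p1 = (0, int(round(-c / b)))
        p2 = (w - 1, int(round(-(a * (w - 1) + c) / b)))
        cv2.line(upper, p1, p2, color=1, thickness=1)  # cv2 clips to image
    return upper.astype(bool)
```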
Nuclei segmentation and classification are two important tasks in histopathology image analysis, because the morphological features of nuclei and the spatial distributions of different types of nuclei are highly related to cancer diagnosis and prognosis. Existing methods handle the two problems independently and thus cannot capture the features and spatial heterogeneity of different types of nuclei at the same time. In this paper, we propose a novel deep learning based method that solves both tasks in a unified framework. It can segment individual nuclei and classify them into tumor, lymphocyte, and stroma nuclei. Perceptual loss is utilized to enhance the segmentation of details. We also take advantage of transfer learning to promote the training of deep neural networks on a relatively small lung cancer dataset. Experimental results prove the effectiveness of the proposed method. The code is publicly available.
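Perceptual loss commonly means an L2 distance between deep features of the prediction and the target extracted by a frozen pretrained network. The sketch below shows one VGG16-based form in PyTorch; the layer cutoff and the 3-channel input assumption are illustrative, not necessarily the authors' exact setup.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualLoss(torch.nn.Module):
    """L2 distance between frozen VGG16 features of prediction and
    target; the layer cutoff (here through relu3_3) is illustrative."""
    def __init__(self, cutoff=16):
        super().__init__()
        self.trunk = vgg16(weights="IMAGENET1K_V1").features[:cutoff].eval()
        for p in self.trunk.parameters():
            p.requires_grad_(False)  # loss network stays fixed

    def forward(self, pred, target):
        # Both inputs are expected as 3-channel images; single-channel
        # probability maps would need to be repeated to 3 channels first.
        return F.mse_loss(self.trunk(pred), self.trunk(target))
```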