This content will become publicly available on February 1, 2026

Title: IDCC-SAM: A Zero-Shot Approach for Cell Counting in Immunocytochemistry Dataset Using the Segment Anything Model
Cell counting in immunocytochemistry is vital for biomedical research, supporting the diagnosis and treatment of diseases such as neurological disorders, autoimmune conditions, and cancer. However, traditional counting methods are manual, time-consuming, and error-prone, while deep learning solutions require costly labeled datasets, limiting scalability. We introduce Immunocytochemistry Dataset Cell Counting with the Segment Anything Model (IDCC-SAM), a novel application of the Segment Anything Model (SAM) that adapts it for zero-shot cell counting in fluorescent microscopic immunocytochemistry datasets. IDCC-SAM leverages Meta AI's SAM, pre-trained on 11 million images, to eliminate the need for annotations, enhancing scalability and efficiency. Evaluated on three public datasets (IDCIA, ADC, and VGG), IDCC-SAM achieved the lowest Mean Absolute Error (26, 28, 52) on VGG and ADC and the highest Acceptable Absolute Error (28%, 26%, 33%) across all datasets, outperforming state-of-the-art supervised models such as U-Net and Mask R-CNN, as well as zero-shot benchmarks such as NP-SAM and SAM4Organoid. These results demonstrate IDCC-SAM's potential to improve cell-counting accuracy while reducing reliance on specialized models and manual annotations.
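IDCC-SAM's exact post-processing is not described in the abstract, so the sketch below only illustrates the general idea: annotation-free cell counting by running Meta AI's publicly released segment-anything package in automatic mask-generation mode and counting size-filtered masks. The checkpoint path, image file name, and area thresholds are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the IDCC-SAM implementation): zero-shot cell counting by
# generating masks with SAM's automatic mask generator and counting those whose
# size is plausible for a single cell. Checkpoint path, image name, and area
# bounds are assumed values for illustration only.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Pre-trained SAM backbone released by Meta AI (ViT-H checkpoint).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

# A dense point grid replaces manual prompts, so no annotations are needed.
mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,
    pred_iou_thresh=0.88,
    stability_score_thresh=0.92,
    min_mask_region_area=20,  # drop tiny fluorescence speckles
)

image = cv2.cvtColor(cv2.imread("icc_image.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # dicts with 'segmentation', 'area', ...

# Count masks whose area falls in an assumed single-cell range.
cells = [m for m in masks if 30 <= m["area"] <= 5000]
print(f"Estimated cell count: {len(cells)}")
```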
Award ID(s):
2152117
PAR ID:
10617123
Author(s) / Creator(s):
; ;
Publisher / Repository:
MDPI
Date Published:
Journal Name:
Bioengineering
Volume:
12
Issue:
2
ISSN:
2306-5354
Page Range / eLocation ID:
184
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
1. The Segment Anything Model (SAM) is a recently proposed prompt-based model for generic zero-shot segmentation. With this zero-shot capability, SAM achieves impressive flexibility and precision across a variety of segmentation tasks. However, the current pipeline requires manual prompts during the inference stage, which is still resource intensive for biomedical image segmentation. In this paper, rather than prompting at inference time, we introduce a pipeline called all-in-SAM that utilizes SAM throughout the entire AI development workflow (from annotation generation to model finetuning) without requiring manual prompts during inference. Specifically, SAM is first employed to generate pixel-level annotations from weak prompts (e.g., points, bounding boxes). These pixel-level annotations are then used to finetune the SAM segmentation model rather than training it from scratch. Our experimental results reveal two key findings: 1) the proposed pipeline surpasses state-of-the-art (SOTA) methods on a nuclei segmentation task on the public MoNuSeg dataset, and 2) using weak and few annotations for SAM finetuning achieves performance competitive with strongly (pixel-wise) annotated data.
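As a concrete reference for the first stage described above (weak prompts to pixel-level pseudo-labels), here is a minimal sketch using the public segment-anything SamPredictor API. It is not the all-in-SAM code; the checkpoint path and helper function are assumptions, and the subsequent finetuning stage is omitted.

```python
# Sketch of stage one only (not the all-in-SAM code): converting weak prompts
# (foreground points or boxes) into pixel-level pseudo-labels with SAM.
# Checkpoint path and the helper name are assumptions for illustration.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def weak_prompts_to_pseudo_labels(image_rgb, points=(), boxes=()):
    """Return one binary mask per weak prompt; these masks would then be
    used to finetune the segmentation model instead of dense annotations."""
    predictor.set_image(image_rgb)  # HxWx3 uint8 RGB array
    labels = []
    for pt in points:
        masks, scores, _ = predictor.predict(
            point_coords=np.array([pt], dtype=np.float32),
            point_labels=np.array([1]),           # 1 = foreground click
            multimask_output=True,
        )
        labels.append(masks[int(np.argmax(scores))])  # keep highest-scoring mask
    for box in boxes:
        masks, _, _ = predictor.predict(
            box=np.array(box, dtype=np.float32),  # [x0, y0, x1, y1]
            multimask_output=False,
        )
        labels.append(masks[0])
    return labels
```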
2. The Segment Anything Model (SAM) was released as a foundation model for image segmentation. The promptable segmentation model was trained on over 1 billion masks from 11M licensed and privacy-respecting images, and it supports zero-shot image segmentation with various segmentation prompts (e.g., points, boxes, masks). This makes SAM attractive for medical image analysis, especially for digital pathology, where training data are rare. In this study, we evaluate the zero-shot segmentation performance of the SAM model on representative segmentation tasks on whole slide imaging (WSI), including (1) tumor segmentation, (2) non-tumor tissue segmentation, and (3) cell nuclei segmentation. Core results: the zero-shot SAM model achieves remarkable segmentation performance for large connected objects, but it does not consistently achieve satisfying performance for dense instance object segmentation, even with 20 prompts (clicks/boxes) on each image. We also summarize the identified limitations for digital pathology: (1) image resolution, (2) multiple scales, (3) prompt selection, and (4) model fine-tuning. In the future, few-shot fine-tuning with images from downstream pathological segmentation tasks may help the model achieve better performance in dense object segmentation.
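A minimal sketch of the kind of prompted zero-shot evaluation described above: predict a mask for one WSI patch from a single box prompt and score it against a reference mask with the Dice coefficient. The study's own patches, prompts, and metrics pipeline are not reproduced; all inputs here are assumed.

```python
# Illustrative only: box-prompted zero-shot SAM prediction on one image patch,
# scored with the Dice coefficient. Inputs (patch, box, reference mask) are
# assumed, not the study's data.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + 1e-8)

def evaluate_patch(patch_rgb, box_xyxy, reference_mask):
    """Return the Dice score of SAM's mask for a single box prompt."""
    predictor.set_image(patch_rgb)
    masks, _, _ = predictor.predict(
        box=np.array(box_xyxy, dtype=np.float32),  # [x0, y0, x1, y1]
        multimask_output=False,
    )
    return dice(masks[0], reference_mask)
```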
3. In geographical image segmentation, performance is often constrained by the limited availability of training data and a lack of generalizability, particularly for segmenting mobility infrastructure such as roads, sidewalks, and crosswalks. Vision foundation models like the Segment Anything Model (SAM), pre-trained on millions of natural images, have demonstrated impressive zero-shot segmentation performance, providing a potential solution. However, SAM struggles with geographical images such as aerial and satellite imagery, because its training is confined to natural images and because the narrow features and textures of infrastructure objects blend into their surroundings. To address these challenges, we propose Geographical SAM (GeoSAM), a SAM-based framework that fine-tunes SAM using automatically generated multi-modal prompts. Specifically, GeoSAM integrates point prompts from a pre-trained task-specific model as primary visual guidance and text prompts generated by a large language model as secondary semantic guidance, enabling the model to better capture both spatial structure and contextual meaning. GeoSAM outperforms existing approaches for mobility infrastructure segmentation in both familiar and completely unseen regions by at least 5% in mIoU, representing a significant leap in leveraging foundation models to segment mobility infrastructure, including both road and pedestrian infrastructure, in geographical images. The source code is publicly available.
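To make the "point prompts from a pre-trained task-specific model" idea above concrete, here is a rough sketch of that single ingredient: sampling high-confidence pixels from a coarse road-probability map as foreground point prompts for a SAM predictor. The coarse model, threshold, and sampling scheme are hypothetical, and GeoSAM's fine-tuning and LLM-generated text prompts are not shown.

```python
# Sketch of one ingredient only (not GeoSAM itself): turning a task-specific
# model's coarse probability map into foreground point prompts for SAM.
# The probability map, threshold, and sampling scheme are hypothetical.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

def segment_with_model_prompts(tile_rgb, coarse_prob, n_points=10, thresh=0.8):
    """Sample confident pixels of a coarse prediction as (x, y) point prompts."""
    ys, xs = np.where(coarse_prob > thresh)
    if xs.size == 0:
        return None  # nothing confident enough to prompt with
    idx = np.random.choice(xs.size, size=min(n_points, xs.size), replace=False)
    points = np.stack([xs[idx], ys[idx]], axis=1).astype(np.float32)  # (N, 2), (x, y)
    predictor.set_image(tile_rgb)
    masks, _, _ = predictor.predict(
        point_coords=points,
        point_labels=np.ones(len(points), dtype=np.int64),  # all foreground
        multimask_output=False,
    )
    return masks[0]
```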
4. Tumor segmentation in medical imaging is critical for diagnosis, treatment planning, and prognosis, yet it remains challenging due to limited annotated data, tumor heterogeneity, and modality-specific complexities in CT, MRI, and histopathology. Although the Segment Anything Model (SAM) shows promise as a zero-shot learner, it struggles with irregular tumor boundaries and domain-specific variations. We introduce the Adaptive Unified Segmentation Anything Model (AUSAM), a novel framework that extends SAM's capabilities for multi-modal tumor segmentation by integrating an intelligent prompt module, dynamic sampling, and stage-based thresholding. Specifically, clustering-based prompt learning (DBSCAN for CT/MRI and K-means for histopathology) adaptively allocates prompts to capture challenging tumor regions, while entropy-guided sampling and dynamic thresholding systematically reduce annotation requirements and computational overhead. Validated on the diverse benchmarks LiTS (CT), FLARE 2023 (CT/MRI), ORCA, and OCDC (histopathology), AUSAM achieves state-of-the-art Dice Similarity Coefficients (DSC) of 94.25%, 91.84%, 87.59%, and 91.84%, respectively, with significantly reduced data usage. As the first framework to adapt SAM for multi-modal tumor segmentation, AUSAM sets a new standard for precision, scalability, and efficiency. It is offered in two variants: AUSAM-Lite for resource-constrained environments and AUSAM-Max for maximum segmentation accuracy, thereby advancing medical imaging and clinical decision-making.
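A small sketch of the clustering-based prompt idea above, assuming a coarse tumor-probability map is available: cluster candidate foreground pixels with DBSCAN and use each cluster centroid as a point prompt. The threshold and DBSCAN parameters are illustrative, not AUSAM's settings, and the entropy-guided sampling and thresholding stages are omitted.

```python
# Sketch of clustering-based prompt allocation (illustrative parameters, not
# AUSAM's): DBSCAN clusters of likely-tumor pixels, one point prompt per cluster.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_prompt_points(prob_map, thresh=0.5, eps=5.0, min_samples=10):
    """Return (x, y) centroids of DBSCAN clusters over confident pixels."""
    ys, xs = np.where(prob_map > thresh)
    if xs.size == 0:
        return np.empty((0, 2), dtype=np.float32)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        np.stack([xs, ys], axis=1)
    )
    centroids = [
        [xs[labels == lab].mean(), ys[labels == lab].mean()]
        for lab in set(labels) - {-1}        # -1 marks DBSCAN noise points
    ]
    return np.asarray(centroids, dtype=np.float32)  # feed to a SAM predictor
```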
5. This paper assesses trending AI foundation models, especially emerging computer vision foundation models, and their performance in natural landscape feature segmentation. While the term foundation model has quickly garnered interest from the geospatial domain, its definition remains vague. Hence, this paper first introduces AI foundation models and their defining characteristics. Building on the tremendous success achieved by Large Language Models (LLMs) as foundation models for language tasks, the paper discusses the challenges of building foundation models for geospatial artificial intelligence (GeoAI) vision tasks. To evaluate the performance of large AI vision models, especially Meta's Segment Anything Model (SAM), we implemented different instance segmentation pipelines that minimize changes to SAM in order to leverage its power as a foundation model. A series of prompt strategies were developed to test SAM's performance with respect to its theoretical upper bound of predictive accuracy, its zero-shot performance, and its domain adaptability through fine-tuning. The analysis used two permafrost feature datasets, ice-wedge polygons and retrogressive thaw slumps, because (1) these landform features are more challenging to segment than man-made features due to their complicated formation mechanisms, diverse forms, and vague boundaries, and (2) their presence and changes are important indicators of Arctic warming and climate change. The results show that, although promising, SAM still has room for improvement to support AI-augmented terrain mapping. The spatial and domain generalizability of this finding is further validated using a more general dataset, EuroCrops, for agricultural field mapping. Finally, we discuss future research directions that could strengthen SAM's applicability in challenging geospatial domains.