<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>CellFMCount: A Fluorescence Microscopy Dataset, Benchmark, and Methods for Cell Counting</dc:title><dc:creator>Mohammed, Abdurahman Ali [Department of Computer Science]; Fonder, Catherine [Department of Genetics, Development, and Cell Biology]; Wei, Ying [Department of Computer Science]; Tavanapong, Wallapak [Department of Computer Science]; Sakaguchi, Donald S [Department of Genetics, Development, and Cell Biology]; Li, Qi [Department of Computer Science]; Mallapragada, Surya K [Department of Genetics, Development, and Cell Biology]</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Accurate cell counting is essential in various biomedical research and clinical applications, including cancer diagnosis, stem cell research, and immunology. Manual counting is labor-intensive and error-prone, motivating automation through deep learning techniques. However, training reliable deep learning models requires large amounts of high-quality annotated data, which is difficult and time-consuming to produce manually. Consequently, existing cell-counting datasets are often limited, frequently containing fewer than 500 images. In this work, we introduce a large-scale annotated dataset comprising 3,023 images from immunocytochemistry experiments related to cellular differentiation, containing over 430,000 manually annotated cell locations. The dataset presents significant challenges: high cell density, overlapping and morphologically diverse cells, a long-tailed distribution of cell counts per image, and variation in staining protocols. We benchmark three categories of existing methods (regression-based, crowd-counting, and cell-counting techniques) on a test set with cell counts ranging from 10 to 2,126 cells per image. We also evaluate how the Segment Anything Model (SAM) can be adapted for microscopy cell counting using only dot-annotated datasets. As a case study, we implement a density-map-based adaptation of SAM (SAM-Counter) and report a mean absolute error (MAE) of 22.12, which outperforms existing approaches (second-best MAE of 27.46). Our results underscore the value of the dataset and the benchmarking framework for driving progress in automated cell counting and provide a robust foundation for future research and development.</dc:description><dc:publisher>IEEE</dc:publisher><dc:date>2025-11-12</dc:date><dc:nsf_par_id>10678709</dc:nsf_par_id><dc:journal_name/><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation>613 to 622</dc:page_range_or_elocation><dc:issn/><dc:isbn>979-8-3315-9599-9</dc:isbn><dc:doi>https://doi.org/10.1109/ICDM65498.2025.00069</dc:doi><dcq:identifierAwardId>2152117</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>