
Search for: All records

Creators/Authors contains: "De, S."


  1. Free, publicly-accessible full text available December 31, 2023
  2. Nuclei segmentation is a fundamental task in histopathological image analysis. Such segmentation tasks typically require significant effort to manually generate pixel-wise annotations for fully supervised training. To alleviate this manual effort, in this paper we propose a novel approach that uses points-only annotation. Two types of coarse labels with complementary information are derived from the points annotation and are then used to train a deep neural network. A fully-connected conditional random field loss is utilized to further refine the model without introducing extra computational complexity during inference. Experimental results on two nuclei segmentation datasets show that the proposed method achieves competitive performance compared to its fully supervised counterpart and state-of-the-art methods while requiring significantly less annotation effort. Our code is publicly available.
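One common way to derive a coarse label from points-only annotation is a Voronoi-style partition that assigns every pixel to its nearest annotated nucleus center. The sketch below illustrates that idea only; the function name and the toy image are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def voronoi_labels(points, shape):
    """Assign each pixel to its nearest annotated point, giving a
    coarse Voronoi-style partition of the image.
    `points`: (N, 2) array of (row, col) point annotations."""
    rows, cols = np.indices(shape)                       # pixel coordinate grid
    pix = np.stack([rows, cols], axis=-1)                # (H, W, 2)
    # squared distance from every pixel to every point: (H, W, N)
    d2 = ((pix[..., None, :] - points[None, None, :, :]) ** 2).sum(-1)
    return d2.argmin(-1)                                 # index of nearest point

# toy example: two annotated nuclei centers in a 6x6 image
pts = np.array([[1, 1], [4, 4]])
labels = voronoi_labels(pts, (6, 6))
print(labels[0, 0], labels[5, 5])  # → 0 1 (each corner maps to its nearest center)
```

A second, complementary coarse label (e.g., from clustering pixel intensities) would mark likely nuclei versus background within each cell of this partition.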
  3. Nuclei segmentation and classification are two important tasks in histopathology image analysis, because the morphological features of nuclei and the spatial distributions of different types of nuclei are highly related to cancer diagnosis and prognosis. Existing methods handle the two problems independently and therefore cannot capture the features and spatial heterogeneity of different types of nuclei at the same time. In this paper, we propose a novel deep learning based method that solves both tasks in a unified framework. It segments individual nuclei and classifies them into tumor, lymphocyte, and stroma nuclei. Perceptual loss is utilized to enhance the segmentation of details. We also take advantage of transfer learning to promote the training of deep neural networks on a relatively small lung cancer dataset. Experimental results prove the effectiveness of the proposed method. The code is publicly available.
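A perceptual loss compares images in the feature space of a fixed, pretrained network rather than pixel by pixel, which emphasizes structural details such as nuclei boundaries. A minimal sketch of the idea, with a simple gradient operator standing in for the pretrained feature extractor (both the stand-in and the toy images are assumptions, not the paper's setup):

```python
import numpy as np

def perceptual_loss(pred, target, feature_fn):
    """Mean squared error in a feature space rather than pixel space;
    `feature_fn` stands in for a fixed, pretrained feature extractor."""
    fp, ft = feature_fn(pred), feature_fn(target)
    return float(np.mean((fp - ft) ** 2))

# stand-in "features": horizontal gradients (a crude edge response)
grad = lambda img: np.diff(img, axis=1)

a = np.zeros((4, 4))                    # flat prediction
b = np.zeros((4, 4)); b[:, 2:] = 1.0    # target with a sharp vertical edge
loss = perceptual_loss(a, b, grad)
print(loss)                             # nonzero: the missing edge is penalized
```

In the actual method the feature extractor would be a deep network, so the loss penalizes missing higher-level structure, not just missing edges.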
  4. Abstract

    A study of multiplicity and pseudorapidity distributions of inclusive photons measured in pp and p–Pb collisions at a center-of-mass energy per nucleon–nucleon collision of √s_NN = 5.02 TeV using the ALICE detector in the forward pseudorapidity region 2.3 < η_lab < 3.9 is presented. Measurements in p–Pb collisions are reported for two beam configurations in which the directions of the proton and lead ion beam were reversed. The pseudorapidity distributions in p–Pb collisions are obtained for seven centrality classes which are defined based on different event activity estimators, i.e., the charged-particle multiplicity measured at midrapidity as well as the energy deposited in a calorimeter at beam rapidity. The inclusive photon multiplicity distributions for both pp and p–Pb collisions are described by double negative binomial distributions. The pseudorapidity distributions of inclusive photons are compared to those of charged particles at midrapidity in pp collisions and for different centrality classes in p–Pb collisions. The results are compared to predictions from various Monte Carlo event generators. None of the generators considered in this paper reproduces the inclusive photon multiplicity distributions in the reported multiplicity range. The pseudorapidity distributions are, however, better described by the same generators.

    Free, publicly-accessible full text available July 1, 2024
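The double negative binomial description mentioned above is a weighted sum of two negative binomial distributions (NBDs), each characterized by a mean multiplicity and a shape parameter k. A minimal sketch of that parameterization (all parameter values below are illustrative, not fitted values from the paper):

```python
import math

def nbd(n, mean, k):
    """Negative binomial distribution P(n) with mean `mean` and
    shape parameter k (smaller k gives a wider distribution)."""
    log_p = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
             + n * math.log(mean / (mean + k))
             + k * math.log(k / (mean + k)))
    return math.exp(log_p)

def double_nbd(n, w, m1, k1, m2, k2):
    """Weighted sum of two NBDs, the functional form used to describe
    the photon multiplicity distributions (parameters illustrative)."""
    return w * nbd(n, m1, k1) + (1.0 - w) * nbd(n, m2, k2)

# sanity check: probabilities over a wide range sum to ~1
total = sum(double_nbd(n, 0.7, 5.0, 2.0, 20.0, 3.0) for n in range(500))
print(round(total, 6))  # → 1.0
```

The two components are often interpreted as soft and semi-hard particle production mechanisms, which is why a single NBD can fail at high multiplicity.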
  5. Abstract The Pandora Software Development Kit and algorithm libraries provide pattern-recognition logic essential to the reconstruction of particle interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at ProtoDUNE-SP, a prototype for the Deep Underground Neutrino Experiment far detector. ProtoDUNE-SP, located at CERN, is exposed to a charged-particle test beam. This paper gives an overview of the Pandora reconstruction algorithms and how they have been tailored for use at ProtoDUNE-SP. In complex events with numerous cosmic-ray and beam background particles, the simulated reconstruction and identification efficiency for triggered test-beam particles is above 80% for the majority of particle type and beam momentum combinations. Specifically, simulated 1 GeV/c charged pions and protons are correctly reconstructed and identified with efficiencies of 86.1 ± 0.6% and 84.1 ± 0.6%, respectively. The efficiencies measured for test-beam data are shown to be within 5% of those predicted by the simulation.
    Free, publicly-accessible full text available July 1, 2024
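Efficiencies of the kind quoted above are counting ratios, and a simple binomial error estimate illustrates where an uncertainty like ±0.6% comes from. The counts in this sketch are illustrative only, not the paper's event numbers:

```python
import math

def efficiency(n_pass, n_total):
    """Reconstruction efficiency n_pass / n_total with a simple
    binomial uncertainty estimate sqrt(eff * (1 - eff) / n_total)."""
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

# illustrative counts only (not taken from the paper)
eff, err = efficiency(861, 1000)
print(f"{100 * eff:.1f} ± {100 * err:.1f} %")  # → 86.1 ± 1.1 %
```

The uncertainty shrinks as 1/sqrt(n_total), so the smaller ±0.6% in the paper simply reflects a larger simulated sample.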
  6. Free, publicly-accessible full text available June 1, 2024
  7. Free, publicly-accessible full text available May 1, 2024
  8. Abstract The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
    Free, publicly-accessible full text available April 1, 2024
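The pixelated readout is GPU-friendly because each pixel's induced-current waveform can be computed independently of the others. The sketch below illustrates that per-pixel data-parallel structure with a NumPy vectorization standing in for the Numba-generated CUDA kernels; the toy exponential "response" is illustrative, not the detector model from the paper.

```python
import numpy as np

def induced_current(charge, t, tau=1.0):
    """Toy per-pixel response q * exp(-t / tau), evaluated for all
    pixels and all time samples at once via broadcasting.
    `charge`: (n_pixels,) array; `t`: (n_samples,) time grid.
    Returns an (n_pixels, n_samples) waveform array."""
    return charge[:, None] * np.exp(-t[None, :] / tau)

q = np.linspace(0.1, 1.0, 1000)   # ~10^3 pixels, the scale quoted above
t = np.linspace(0.0, 5.0, 100)    # time samples
waveforms = induced_current(q, t)
print(waveforms.shape)            # → (1000, 100), one waveform per pixel
```

Because every output element depends only on one pixel and one time sample, the same computation maps directly onto one GPU thread per element, which is what makes the four-orders-of-magnitude speed-up achievable.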