Search for: All records

Award ID contains: 1725729


  1. Abstract

    The next generation of wide-field deep astronomical surveys will deliver unprecedented amounts of images through the 2020s and beyond. As both the sensitivity and depth of observations increase, more blended sources will be detected. This reality can lead to measurement biases that contaminate key astronomical inferences. We implement new deep learning models available through Facebook AI Research’s detectron2 repository to perform the simultaneous tasks of object identification, deblending, and classification on large multiband co-adds from the Hyper Suprime-Cam (HSC). We use existing detection/deblending codes and classification methods to train a suite of deep neural networks, including state-of-the-art transformers. Once trained, we find that transformers outperform traditional convolutional neural networks and are more robust to different contrast scalings. Transformers are able to detect and deblend objects closely matching the ground truth, achieving a median bounding box Intersection over Union of 0.99. Using high-quality class labels from the Hubble Space Telescope, we find that when classifying objects as either stars or galaxies, the best-performing networks can classify galaxies with near 100 per cent completeness and purity across the whole test sample and classify stars above 60 per cent completeness and 80 per cent purity out to HSC i-band magnitudes of 25 mag. This framework can be extended to other upcoming deep surveys such as the Legacy Survey of Space and Time and those with the Roman Space Telescope to enable fast source detection and measurement. Our code, deepdisc, is publicly available at https://github.com/grantmerz/deepdisc.
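
    The abstract points to the public deepdisc repository for the actual implementation; purely as an illustration of the kind of detectron2 inference loop such a pipeline builds on, a minimal sketch follows. The base config, weights file, score threshold, and cutout image below are placeholder assumptions, not the published setup.

    ```python
    # Minimal detectron2 inference sketch (illustrative; not the deepdisc pipeline itself).
    # The config name, weights path, and threshold are placeholder assumptions.
    import cv2
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    cfg = get_cfg()
    # Start from a standard instance-segmentation config; deepdisc swaps in its own backbones.
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = "trained_hsc_model.pth"   # hypothetical fine-tuned weights
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5   # detection score cut (assumption)

    predictor = DefaultPredictor(cfg)
    image = cv2.imread("hsc_coadd_cutout.png")    # an 8-bit, contrast-scaled co-add cutout
    outputs = predictor(image)

    instances = outputs["instances"].to("cpu")
    print(instances.pred_boxes)    # per-object bounding boxes (deblended detections)
    print(instances.pred_classes)  # e.g. star vs. galaxy labels after fine-tuning
    print(instances.scores)
    ```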

     
  2. Abstract

    Due to its specificity, fluorescence microscopy has become a quintessential imaging tool in cell biology. However, photobleaching, phototoxicity, and related artifacts continue to limit fluorescence microscopy’s utility. Recently, it has been shown that artificial intelligence (AI) can transform one form of contrast into another. We present phase imaging with computational specificity (PICS), a combination of quantitative phase imaging and AI, which provides information about unlabeled live cells with high specificity. Our imaging system allows for automatic training, while inference is built into the acquisition software and runs in real-time. Applying the computed fluorescence maps back to the quantitative phase imaging (QPI) data, we measured the growth of both nuclei and cytoplasm independently, over many days, without loss of viability. Using a QPI method that suppresses multiple scattering, we measured the dry mass content of individual cell nuclei within spheroids. In its current implementation, PICS offers a versatile quantitative technique for continuous simultaneous monitoring of individual cellular components in biological applications where long-term label-free imaging is desirable.
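
    The published PICS network and training procedure are not spelled out in this abstract; the toy PyTorch sketch below only illustrates the general idea of regressing a fluorescence-like map from a single-channel quantitative phase image. The architecture, loss, and tensor shapes are assumptions made for illustration.

    ```python
    # Toy sketch of a phase-to-fluorescence mapping (not the published PICS architecture).
    import torch
    import torch.nn as nn

    class PhaseToFluorescence(nn.Module):
        """Maps a single-channel quantitative phase image to a fluorescence-like map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, phase):
            return self.net(phase)

    model = PhaseToFluorescence()
    phase = torch.randn(1, 1, 256, 256)           # stand-in for a QPI frame
    target = torch.randn(1, 1, 256, 256)          # co-registered fluorescence frame (training label)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    pred = model(phase)
    loss = nn.functional.mse_loss(pred, target)   # simple pixel-wise regression loss (assumption)
    loss.backward()
    optimizer.step()
    ```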

     
  3. Abstract

    Significant investments to upgrade and construct large-scale scientific facilities demand commensurate investments in R&D to design algorithms and computing approaches to enable scientific and engineering breakthroughs in the big data era. Innovative Artificial Intelligence (AI) applications have powered transformational solutions for big data challenges in industry and technology that now drive a multi-billion-dollar industry and play an ever-increasing role in shaping human social patterns. As AI continues to evolve into a computing paradigm endowed with statistical and mathematical rigor, it has become apparent that single-GPU solutions for training, validation, and testing are no longer sufficient for computational grand challenges brought about by scientific facilities that produce data at a rate and volume that outstrip the computing capabilities of available cyberinfrastructure platforms. This realization has been driving the confluence of AI and high performance computing (HPC) to reduce time-to-insight, and to enable a systematic study of domain-inspired AI architectures and optimization schemes to enable data-driven discovery. In this article we present a summary of recent developments in this field, and describe specific advances that the authors of this article are spearheading to accelerate and streamline the use of HPC platforms to design and apply accelerated AI algorithms in academia and industry.
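
    As one concrete illustration of moving beyond the single-GPU training the abstract calls insufficient, the sketch below uses PyTorch DistributedDataParallel to spread training across several GPUs. The model, data, and hyperparameters are placeholders, and this is not the specific framework described in the article.

    ```python
    # Sketch of data-parallel training across GPUs; launch with e.g.
    #   torchrun --nproc_per_node=4 train_ddp.py
    # The model, data, and hyperparameters below are placeholders.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")            # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        device = f"cuda:{local_rank}"

        model = torch.nn.Linear(128, 10).to(device)        # stand-in for a domain model
        model = DDP(model, device_ids=[local_rank])        # gradients are all-reduced across ranks
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        for _ in range(100):                               # each rank sees its own data shard
            x = torch.randn(32, 128, device=device)
            y = torch.randint(0, 10, (32,), device=device)
            loss = torch.nn.functional.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()
    ```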

     
  4. Free, publicly-accessible full text available June 1, 2024
  5. Išgum, Ivana; Colliot, Olivier (Eds.)
  6. Abstract

    Accurate and (near) real-time earthquake monitoring provides the spatial and temporal behaviors of earthquakes for understanding the nature of earthquakes, and also helps in regional seismic hazard assessments and mitigations. Because of the increase in both the quality and quantity of seismic data, an automated earthquake monitoring system is needed. Most of the traditional methods for detecting earthquake signals and picking phases are based on analyses of features in recordings of an individual earthquake and/or their differences from background noise. When seismicity is high, the seismograms are complicated, and, therefore, traditional analysis methods often fail. With the development of machine learning algorithms, earthquake signal detection and seismic phase picking can be made more accurate using features learned from a large number of earthquake recordings. We have developed an attention recurrent residual U-Net algorithm and used data augmentation techniques to improve the accuracy of earthquake detection and seismic phase picking on complex seismograms that record multiple earthquakes. The use of probability functions of P and S arrivals and potential P and S arrival pairs of earthquakes can increase the computational efficiency and accuracy of backprojection for earthquake monitoring in large areas. We applied our workflow to monitor the earthquake activity in southern California during the 2019 Ridgecrest sequence. The distribution of earthquakes determined by our method is consistent with that in the Southern California Earthquake Data Center (SCEDC) catalog. In addition, the number of earthquakes in our catalog is more than three times that of the SCEDC catalog. Our method identifies additional earthquakes that are close in origin times and/or locations and are not included in the SCEDC catalog. Our algorithm avoids misidentification of seismic phases for earthquake location. In general, our algorithm can provide reliable earthquake monitoring over a large area, even during periods of high seismicity.
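
    As a small illustration of one step in such a workflow, the sketch below converts per-sample P-arrival probabilities (of the kind a phase-picking network outputs) into discrete pick times via thresholded peak finding. The threshold, sampling interval, and synthetic input are assumptions; the published method additionally pairs P and S picks and backprojects them for event location.

    ```python
    # Sketch: turn a network's per-sample arrival probabilities into phase picks.
    # Threshold, minimum separation, and sampling interval are assumptions.
    import numpy as np
    from scipy.signal import find_peaks

    def pick_arrivals(prob, dt=0.01, threshold=0.5, min_sep_s=1.0):
        """Return pick times (s) where the arrival probability peaks above threshold."""
        peaks, _ = find_peaks(prob, height=threshold, distance=int(min_sep_s / dt))
        return peaks * dt

    # prob_p would come from the phase-picking network; here it is a random
    # stand-in with one injected spike, just to make the sketch runnable.
    prob_p = np.random.rand(6000) * 0.3
    prob_p[1200] = 0.95                      # pretend the network flagged a P arrival here
    p_picks = pick_arrivals(prob_p, dt=0.01)
    print("P picks (s):", p_picks)
    ```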