

Search for: All records

Creators/Authors contains: "Connolly, Andrew"


  1. Abstract

    The Vera C. Rubin Observatory will repeatedly survey the southern sky over a period of 10 yr. To ensure that images generated by Rubin meet the quality requirements for precision science, the observatory will use an active-optics system (AOS) to correct for alignment and mirror-surface perturbations introduced by gravity and temperature gradients in the optical system. To accomplish this, Rubin will use out-of-focus images from sensors located at the edge of the focal plane to learn and correct for perturbations to the wavefront. We have designed and integrated a deep-learning (DL) model for wavefront estimation into the AOS pipeline. In this paper, we compare the performance of this DL approach to Rubin’s baseline algorithm when applied to images from two different simulations of the Rubin optical system. We show that the DL approach is faster and more accurate, achieving the atmospheric error floor both for high-quality images and for low-quality images with heavy blending and vignetting. Compared to the baseline algorithm, the DL model is 40× faster, and its median error is 2× smaller under ideal conditions, 5× smaller in the presence of vignetting by the Rubin camera, and 14× smaller in the presence of blending in crowded fields. In addition, the DL model surpasses the required optical quality in simulations of the AOS closed loop. This system promises to increase the survey area useful for precision science by up to 8%. We discuss how this system might be deployed when commissioning and operating Rubin.

     
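    As an illustrative sketch only (the layer sizes, input image shape, and number of output coefficients are assumptions, not the paper's architecture), a convolutional network of the kind described would map paired intra- and extra-focal "donut" cutouts to a vector of Zernike wavefront coefficients:

      import torch
      import torch.nn as nn

      class DonutWavefrontNet(nn.Module):
          """Toy CNN regressing Zernike coefficients from donut images."""
          def __init__(self, n_zernikes=19):  # assumed: annular Zernikes Z4-Z22
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1))
              self.head = nn.Linear(128, n_zernikes)

          def forward(self, donuts):
              # donuts: (batch, 2, H, W) -- intra- and extra-focal cutouts
              return self.head(self.features(donuts).flatten(1))

      model = DonutWavefrontNet()
      zk = model(torch.randn(8, 2, 160, 160))  # 8 donut pairs -> (8, 19)
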
  2. Abstract

    The DECam Ecliptic Exploration Project (DEEP) is a deep survey of the trans-Neptunian solar system being carried out on the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory in Chile using the Dark Energy Camera (DECam). By using a shift-and-stack technique to achieve a mean limiting magnitude of r ∼ 26.2, DEEP achieves an unprecedented combination of survey area and depth, enabling quantitative leaps forward in our understanding of the Kuiper Belt populations. This work reports results from an analysis of twenty 3 deg² DECam fields along the invariable plane. We characterize the efficiency and false-positive rates for our moving-object detection pipeline, and use this information to construct a Bayesian signal probability for each detected source. This procedure allows us to treat all of our Kuiper Belt object (KBO) detections statistically, simultaneously accounting for efficiency and false positives. We detect approximately 2300 candidate sources with KBO-like motion at signal-to-noise ratios > 6.5. We use a subset of these objects to compute the luminosity function of the Kuiper Belt as a whole, as well as that of the cold classical (CC) population. We also investigate the absolute magnitude (H) distribution of the CCs, and find consistency with both an exponentially tapered power law, which is predicted by streaming instability models of planetesimal formation, and a rolling power law. Finally, we provide an updated mass estimate for the CC Kuiper Belt of M_CC(H_r < 12) = 0.0017^{+0.0010}_{−0.0004} M_⊕, assuming albedo p = 0.15 and density ρ = 1 g cm⁻³.

     
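    A minimal sketch of the shift-and-stack idea (array shapes, time units, and the integer-pixel shifts are simplifications): each exposure is shifted to undo a trial rate of sky motion before co-adding, so that a source moving at that rate accumulates signal coherently:

      import numpy as np

      def shift_and_stack(images, times, rate_xy):
          """Co-add exposures after shifting each by a trial sky motion.

          images : (N, H, W) array of aligned exposures
          times  : (N,) observation times relative to the first exposure
          rate_xy: (dx/dt, dy/dt) trial rate in pixels per unit time
          """
          stack = np.zeros_like(images[0], dtype=float)
          for img, t in zip(images, times):
              dy = int(round(-rate_xy[1] * t))  # undo the object's motion
              dx = int(round(-rate_xy[0] * t))
              stack += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
          return stack / len(images)
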
  3. Abstract

    We present a detailed study of the observational biases of the DECam Ecliptic Exploration Project’s B1 data release, together with survey simulation software that enables direct statistical comparisons between models and our data. We inject a synthetic population of objects into the images and subsequently recover them with the same processing as our real detections. This enables us to characterize the survey’s completeness as a function of apparent magnitude and on-sky rate of motion. We study the statistically optimal functional form for the magnitude efficiency, and develop a methodology that can estimate the magnitude and rate efficiencies for all of the survey’s pointing groups simultaneously. We determine that our peak completeness is on average 80% in each pointing group, and that the completeness drops to 25% of this value at m_25 = 26.22. We describe the freely available survey simulation software and its methodology. We conclude by using it to infer that our effective search area for objects at 40 au is 14.8 deg², and that our lack of dynamically cold distant objects means that there are at most 8 × 10³ objects with 60 < a < 80 au and absolute magnitudes H ≤ 8.

     
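    For illustration, one common parameterization of such a completeness curve is a logistic roll-off, anchored here to the peak completeness and 25%-of-peak magnitude quoted above; the width w is an assumed value, and the paper's fitted functional form may differ:

      import numpy as np

      def efficiency(m, eps0=0.80, m25=26.22, w=0.15):
          """Detection efficiency vs. magnitude: a logistic roll-off,
          parameterized so the efficiency equals 0.25*eps0 at m = m25.
          eps0 and m25 follow the abstract; w is an illustrative guess."""
          m0 = m25 - w * np.log(3.0)  # logistic hits 25% of peak at m25
          return eps0 / (1.0 + np.exp((m - m0) / w))
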
  4. Abstract

    We present the first set of trans-Neptunian objects (TNOs) observed on multiple nights in data taken from the DECam Ecliptic Exploration Project. Of these 110 TNOs, 105 do not coincide with previously known TNOs and appear to be new discoveries. Each individual detection for our objects resulted from a digital tracking search at TNO rates of motion, using two-to-four-hour exposure sets, and the detections were subsequently linked across multiple observing seasons. This procedure allows us to find objects with magnitudes m_VR ≈ 26. The object discovery processing also included a comprehensive population of objects injected into the images, with a recovery and linking rate of at least 94%. The final orbits were obtained using a specialized orbit-fitting procedure that accounts for the positional errors derived from the digital tracking procedure. Our results include robust orbits and magnitudes for classical TNOs with absolute magnitudes H ∼ 10, as well as a dynamically detached object found at 76 au (semimajor axis a ≈ 77 au). We find a disagreement between our population of classical TNOs and the CFEPS-L7 three-component model for the Kuiper Belt.

     
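    As a sketch of the error-weighted fitting step (the ephemeris function below is a placeholder for a real orbit propagator, and the parameterization is illustrative), the fit minimizes a chi-square in which each detection is weighted by its digital-tracking positional uncertainty:

      import numpy as np

      def orbit_chi2(params, times, ra_obs, dec_obs, sig_ra, sig_dec, ephemeris):
          """Chi-square for an orbit fit, weighting each detection by its
          positional uncertainties from the digital tracking stage.

          ephemeris(params, times) is a placeholder standing in for a real
          orbit model that returns predicted (RA, Dec) arrays."""
          ra_pred, dec_pred = ephemeris(params, times)
          return np.sum(((ra_obs - ra_pred) / sig_ra) ** 2 +
                        ((dec_obs - dec_pred) / sig_dec) ** 2)
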
  5. Abstract

    We present a scalable, cloud-based science platform solution designed to enable next-to-the-data analyses of terabyte-scale astronomical tabular data sets. The platform is built on Amazon Web Services (over Kubernetes and S3 abstraction layers), utilizes Apache Spark and the Astronomy eXtensions for Spark (AXS) for parallel data analysis and manipulation, and provides the familiar JupyterHub web-accessible front end for user access. We outline the architecture of the analysis platform, provide implementation details and the rationale for (and against) our technology choices, verify scalability through strong and weak scaling tests, and demonstrate usability through an example science analysis of data from the Zwicky Transient Facility’s billion-plus light-curve catalog. Furthermore, we show how this system enables an end user to iteratively build analyses (in Python) that transparently scale processing with no need for end-user interaction. The system is designed to be deployable by astronomers with moderate cloud engineering knowledge, or (ideally) by IT groups. Over the past 3 yr, it has been utilized to build science platforms for the DiRAC Institute, the ZTF partnership, the LSST Solar System Science Collaboration, and the LSST Interdisciplinary Network for Collaboration and Computing, as well as for numerous short-term events (with over 100 simultaneous users). A live demo instance, the deployment scripts, source code, and cost calculators are accessible at http://hub.astronomycommons.org/.

     
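    As a sketch of the kind of next-to-the-data analysis the platform targets (the Parquet path and column names are placeholders; AXS layers astronomy-specific operations such as spatial cross-matching on top of plain Spark), per-object light-curve statistics can be computed with standard PySpark:

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("ztf-lightcurves").getOrCreate()

      # Illustrative schema: one row per photometric point.
      lc = spark.read.parquet("s3://bucket/ztf_lightcurves/")  # placeholder path

      stats = (lc.groupBy("objectId")
                 .agg(F.count("mag").alias("n_obs"),
                      F.mean("mag").alias("mean_mag"),
                      F.stddev("mag").alias("mag_rms"))
                 .filter(F.col("n_obs") >= 20))

      stats.show(5)
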
  6. Abstract

    Trans-Neptunian objects provide a window into the history of the solar system, but they can be challenging to observe due to their distance from the Sun and their relatively low brightness. Here we report the detection, using the Kernel-Based Moving Object Detection (KBMOD) platform, of 75 moving objects that we could not link to any known objects, the faintest of which has a VR magnitude of 25.02 ± 0.93. We recover an additional 24 sources with previously known orbits. We place constraints on the barycentric distance, inclination, and longitude of ascending node of these objects. The unidentified objects have a median barycentric distance of 41.28 au, placing them in the outer solar system. The observed inclination and magnitude distributions of all detected objects are consistent with previously published KBO distributions. We describe extensions to KBMOD, including a robust percentile-based lightcurve filter, an in-line graphics-processing-unit (GPU) filter, new coadded stamp generation, and a convolutional neural network stamp filter, which allow KBMOD to take advantage of difference images. These enhancements mark a significant improvement in the readiness of KBMOD for deployment on future big data surveys such as LSST.
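    A rough sketch of a percentile-based lightcurve filter of the kind described (the quartile choice and threshold multiplier are assumptions): fluxes measured along a candidate trajectory are screened for single-epoch outliers before the candidate is accepted:

      import numpy as np

      def percentile_filter(fluxes, lower_q=25, upper_q=75, k=5.0):
          """Return a per-epoch mask of fluxes consistent with the bulk of
          the trajectory's lightcurve, using the interquartile range; the
          multiplier k is an assumed threshold."""
          fluxes = np.asarray(fluxes, dtype=float)
          q1, q3 = np.percentile(fluxes, [lower_q, upper_q])
          iqr = q3 - q1
          return (fluxes > q1 - k * iqr) & (fluxes < q3 + k * iqr)
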
  7. Abstract

    How does STEM knowledge learned in school change students’ brains? Using fMRI, we presented photographs of real-world structures to engineering students with classroom-based knowledge and hands-on lab experience, examining how their brain activity differentiated them from their “novice” peers not pursuing engineering degrees. A data-driven multivoxel pattern analysis (MVPA) and machine-learning approach revealed that the neural response patterns of engineering students were convergent with each other and distinct from those of novices when considering the physical forces acting on the structures. Furthermore, an informational network analysis demonstrated that the distinct neural response patterns of engineering students reflected relevant concept knowledge: learned categories of mechanical structures. Information about mechanical categories was predominantly represented in bilateral anterior ventral occipitotemporal regions. Importantly, mechanical categories were not explicitly referenced in the experiment, nor does visual similarity between stimuli account for the mechanical category distinctions. The results demonstrate how learning abstract STEM concepts in the classroom influences neural representations of objects in the world.
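    As a sketch of the decoding step (the classifier, cross-validation scheme, and data shapes are illustrative, not necessarily the study's), group membership can be decoded from multi-voxel response patterns:

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      # X: (n_trials, n_voxels) response patterns; y: group label per trial
      # (1 = engineering student, 0 = novice). Random data as a stand-in.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(80, 500))
      y = rng.integers(0, 2, size=80)

      clf = LinearSVC(C=1.0, max_iter=10000)
      scores = cross_val_score(clf, X, y, cv=5)  # decoding accuracy per fold
      print(scores.mean())                       # ~0.5 for random data
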
  8. Deep learning (DL) models have achieved paradigm-changing performance in many fields with high-dimensional data, such as images, audio, and text. However, the black-box nature of deep neural networks is not only a barrier to adoption in applications such as medical diagnosis, where interpretability is essential, but it also impedes the diagnosis of underperforming models. The task of diagnosing or explaining DL models requires the computation of additional artifacts, such as activation values and gradients. These artifacts are large in volume, and their computation, storage, and querying raise significant data management challenges. In this paper, we develop a novel data sampling technique that produces approximate but accurate results for these model debugging queries. Our sampling technique utilizes the lower-dimensional representation learned by the DL model and focuses on model decision boundaries for the data in this lower-dimensional space.
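    A minimal sketch of the idea (the embedding, predicted probabilities, and weighting scheme are stand-ins for the paper's actual construction): sample stored artifacts with probability concentrated near the model's decision boundary in the learned low-dimensional space:

      import numpy as np

      def boundary_weighted_sample(embeddings, probs, n_samples, rng=None):
          """Sample row indices, favoring points near the decision boundary.

          embeddings : (N, d) lower-dimensional representations from the model
          probs      : (N,) predicted class-1 probabilities for the same rows
          """
          rng = rng or np.random.default_rng()
          margin = np.abs(probs - 0.5)     # 0 at the decision boundary
          weights = 1.0 / (margin + 1e-3)  # upweight boundary points
          weights /= weights.sum()
          return rng.choice(len(embeddings), size=n_samples,
                            replace=False, p=weights)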