

Search for: All records; Creators/Authors contains: "Tyson, J. Anthony"



  1. Abstract

    We examine the simple model put forth in a recent note by Loeb regarding the brightness of space debris in the size range of 1–10 cm and their impact on the Rubin Observatory Legacy Survey of Space and Time (LSST) transient object searches. Its main conclusion was that “image contamination by untracked space debris might pose a bigger challenge [than large commercial satellite constellations in Low-Earth orbit].” Following corrections and improvements to this model, we calculate the apparent brightness of tumbling low-Earth orbit (LEO) debris of various sizes, and we briefly discuss the likely impact and potential mitigations of glints from space debris in LSST. We find the majority of the difference in predicted signal-to-noise ratio (S/N), about a factor of 6, arises from the defocus of LEO objects due to the large Simonyi Survey Telescope primary mirror and finite range of the debris. The largest change from the Loeb estimates is that 1–10 cm debris in LEO pose no threat to LSST transient object alert generation because their S/N for detection will be much lower than estimated by Loeb due to defocus. We find that only tumbling LEO debris larger than 10 cm or with significantly greater reflectivity, which give 1 ms glints, might be detected with high confidence (S/N > 5). We estimate that only one in five LSST exposures low on the sky during twilight might be affected. More slowly tumbling objects of larger size can give flares in brightness that are easily detected; however, these will not be cataloged by the LSST Science Pipelines because of the resulting long streak.

     
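The defocus effect discussed in the abstract above is simple geometry: a telescope focused at infinity images a point source at finite range as an out-of-focus "donut" whose angular diameter is roughly the aperture diameter divided by the range. A minimal sketch, using assumed nominal numbers (8.4 m aperture, 550 km range, 0.7 arcsec in-focus image quality) that are illustrative rather than taken from the paper:

```python
import math

ARCSEC_PER_RAD = 206265.0

def defocus_diameter_arcsec(aperture_m, range_m):
    """Angular diameter of the out-of-focus 'donut' image of a point
    source at finite range, for a telescope focused at infinity."""
    return (aperture_m / range_m) * ARCSEC_PER_RAD

# Assumed illustrative values (not from the abstract):
# 8.4 m primary mirror, debris at 550 km, 0.7 arcsec seeing.
theta = defocus_diameter_arcsec(8.4, 550e3)   # ~3.1 arcsec
seeing = 0.7

# In the background-limited regime, S/N for a fixed total flux scales
# roughly inversely with the linear size of the image it is spread over.
sn_dilution = theta / seeing
```

For these assumed values the donut is a few arcseconds across, several times the seeing disc, so the point-source S/N is diluted by a comparable factor; this is the kind of geometric effect behind the factor-of-6 difference quoted in the abstract.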
  2. Abstract

    The apparent brightness of satellites is calculated as a function of satellite position as seen by a ground-based observer in darkness. Both direct illumination of the satellite by the Sun as well as indirect illumination due to reflection from the Earth are included. The reflecting properties of the satellite components and of the Earth must first be estimated (the Bidirectional Reflectance Distribution Function, or BRDF). The reflecting properties of the satellite components can be found directly using lab measurements or accurately inferred from multiple observations of a satellite at various solar angles. Integrating over all scattering surfaces leads to the angular pattern of flux from the satellite. Finally, the apparent brightness of the satellite as seen by an observer at a given location is calculated as a function of satellite position. We develop an improved model for reflection of light from Earth’s surface using aircraft data. We find that indirectly reflected light from Earth’s surface contributes a significant increase in apparent satellite brightness. This effect is particularly strong during civil twilight. We validate our approach by comparing our calculations to multiple observations of selected Starlink satellites and show significant improvement on previous satellite brightness models. Similar methodology for predicting satellite brightness has already informed mitigation strategies for next-generation Starlink satellites. Measurements of satellite brightness over a variety of solar angles extend the applicability of our approach to virtually all satellites. We demonstrate that an empirical model in which reflecting functions of the chassis and the solar panels are fit to observed satellite data performs very well. This work finds application in satellite design and operations, and in planning observatory data acquisition and analysis.

     
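As a much-simplified stand-in for the full BRDF integration described above, the classic diffusely reflecting (Lambertian) sphere gives a closed-form apparent magnitude as a function of phase angle. All numbers below (albedo, effective radius, range) are assumed for illustration and are not from the abstract:

```python
import math

SUN_MAG = -26.74  # apparent V magnitude of the Sun

def lambert_sphere_mag(albedo, radius_m, range_m, phase_rad):
    """Apparent magnitude of a diffusely reflecting (Lambertian) sphere.

    Flux ratio relative to the Sun:
        F/F_sun = (2*A*R^2) / (3*pi*d^2) * (sin(phi) + (pi - phi)*cos(phi))
    where A is albedo, R the sphere radius, d the range, phi the phase angle.
    """
    phi = phase_rad
    ratio = (2.0 * albedo * radius_m**2) / (3.0 * math.pi * range_m**2) \
        * (math.sin(phi) + (math.pi - phi) * math.cos(phi))
    return SUN_MAG - 2.5 * math.log10(ratio)

# Assumed illustrative values: albedo 0.25, 1.5 m effective radius,
# 550 km range, 90 degree phase angle.
m = lambert_sphere_mag(0.25, 1.5, 550e3, math.pi / 2)
```

A real satellite is a set of flat panels, not a sphere, which is why the paper fits chassis and solar-panel reflecting functions separately; the sphere model only sets the order of magnitude and the qualitative phase dependence (brighter at small phase angle).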
  3. Abstract

    The Vera C. Rubin Observatory Wide-Fast Deep sky survey will reach unprecedented surface brightness depths over tens of thousands of square degrees. Surface brightness photometry has traditionally been a challenge. Current algorithms which combine object detection with sky estimation systematically oversubtract the sky, biasing surface brightness measurements at the faint end and destroying or severely compromising low surface brightness light. While it has recently been shown that properly accounting for undetected faint galaxies and the wings of brighter objects can in principle recover a more accurate sky estimate, this has not yet been demonstrated in practice. Obtaining a consistent, spatially smooth underlying sky estimate is particularly challenging in the presence of representative distributions of bright and faint objects. In this paper, we use simulations of crowded and uncrowded fields designed to mimic Hyper Suprime-Cam data to perform a series of tests on the accuracy of the recovered sky. Dependence on field density, galaxy type, and limiting flux for detection is considered in each case. Several photometry packages are utilized: Source Extractor, Gnuastro, and the LSST Science Pipelines. Each is configured in various modes, and their performance at extreme low surface brightness is analysed. We find that the combination of the Source Extractor software package with novel source model masking techniques consistently produces the faintest output sky estimates, deeper by up to an order of magnitude than the other configurations tested, while also returning high-fidelity output science catalogues.

     
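The oversubtraction mechanism described above can be illustrated with a toy simulation: when faint sources fall below the detection threshold, a detection-plus-masking sky estimator absorbs their flux into the sky level, which is then subtracted from every object. A hedged sketch in which all numbers (sky level, noise, source counts and fluxes) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D "image": constant true sky plus Gaussian pixel noise.
true_sky = 100.0
npix = 100_000
image = true_sky + rng.normal(0.0, 5.0, npix)

# Inject many faint sources that sit below the detection threshold.
n_src = 5_000
idx = rng.choice(npix, n_src, replace=False)
image[idx] += rng.uniform(2.0, 10.0, n_src)   # undetected flux

# A detection-based sky estimator masks nothing here (the sources are
# too faint to detect), so its "sky" soaks up the undetected source flux.
naive_sky = np.median(image)
bias = naive_sky - true_sky   # positive => sky oversubtraction
```

Even though each injected source is individually undetectable, their collective flux biases the median sky estimate high, which is the systematic oversubtraction the paper sets out to quantify and mitigate.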
  4. Abstract

    Upcoming deep imaging surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time will be confronted with challenges that come with increased depth. One of the leading systematic errors in deep surveys is the blending of objects due to higher surface density in the more crowded images; a considerable fraction of the galaxies which we hope to use for cosmology analyses will overlap each other on the observed sky. In order to investigate these challenges, we emulate blending in a mock catalogue consisting of galaxies at a depth equivalent to 1.3 yr of the full 10-yr Rubin Observatory survey that includes effects due to weak lensing, ground-based seeing, and the uncertainties due to extraction of catalogues from imaging data. The emulated catalogue indicates that approximately 12 per cent of the observed galaxies are ‘unrecognized’ blends that contain two or more objects but are detected as one. Using the positions and shears of half a billion distant galaxies, we compute shear–shear correlation functions after selecting tomographic samples in terms of both spectroscopic and photometric redshift bins. We examine the sensitivity of the cosmological parameter estimation to unrecognized blending employing both jackknife and analytical Gaussian covariance estimators. An ∼0.025 decrease in the derived structure growth parameter S8 = σ8(Ωm/0.3)^0.5 is seen due to unrecognized blending in both tomographies, with a slight additional bias for the photo-z-based tomography. This bias is greater than the 2σ statistical error in measuring S8.

     
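The structure-growth parameter quoted above is a simple combination of σ8 and Ωm, and a one-line helper makes the reported ∼0.025 shift concrete. The fiducial values below are assumed for illustration and are not from the abstract:

```python
def s8(sigma8, omega_m):
    """Structure-growth parameter S8 = sigma8 * (Omega_m / 0.3)**0.5."""
    return sigma8 * (omega_m / 0.3) ** 0.5

# Illustrative fiducial values (assumed, not from the abstract):
fid = s8(0.81, 0.3)       # equals sigma8 itself when Omega_m = 0.3
blended = fid - 0.025     # the decrease reported in the abstract
```

Note that at Ωm = 0.3 the prefactor is unity, so S8 reduces to σ8; the parameter is used because weak lensing constrains this particular σ8–Ωm combination most tightly.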