

Search for: All records

Creators/Authors contains: "Hartley, W. G."


  1. ABSTRACT

    In this work, we explore the possibility of applying machine learning methods designed for 1D problems to the task of galaxy image classification. The algorithms typically used for image classification rely on multiple costly steps, such as point spread function deconvolution and the training and application of complex convolutional neural networks with thousands or even millions of parameters. In our approach, we extract features from the galaxy images by analysing the elliptical isophotes in their light distribution and collect the information in a sequence. The sequences obtained with this method exhibit distinctive features that allow a direct distinction between galaxy types. We then train and classify the sequences with machine learning algorithms designed through the Modulos AutoML platform. As a demonstration of this method, we use the second public data release of the Dark Energy Survey (DES DR2). We show that we are able to successfully distinguish between early-type and late-type galaxies for images with signal-to-noise ratio greater than 300. This yields an accuracy of 86 per cent for the early-type galaxies and 93 per cent for the late-type galaxies, which is on par with most contemporary automated image classification approaches. The data dimensionality reduction of our novel method implies a significant lowering of the computational cost of classification. In view of future data sets obtained with, e.g., Euclid and the Vera Rubin Observatory, this work represents a path towards using a well-tested and widely used industry platform to efficiently tackle galaxy classification problems at the petabyte scale.
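    The isophote-to-sequence idea can be illustrated with a toy sketch (this is not the authors' pipeline; the function name, fixed ellipticity, and synthetic profile are all assumptions for illustration): average an image over elliptical annuli to obtain a 1D intensity sequence suitable for 1D classifiers.

```python
import numpy as np

def isophote_sequence(image, center, ellipticity=0.0, angle=0.0, radii=None):
    """Mean intensity in elliptical annuli, returned as a 1D sequence.

    Toy stand-in for a full isophote fit: ellipticity and orientation
    are held fixed instead of being fitted isophote by isophote."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    dx, dy = x - center[0], y - center[1]
    xr = dx * np.cos(angle) + dy * np.sin(angle)        # rotate into ellipse frame
    yr = -dx * np.sin(angle) + dy * np.cos(angle)
    r = np.sqrt(xr**2 + (yr / (1.0 - ellipticity))**2)  # elliptical radius
    if radii is None:
        radii = np.arange(1.0, min(nx, ny) / 2.0, 1.0)
    seq = []
    for r_in, r_out in zip(radii[:-1], radii[1:]):
        mask = (r >= r_in) & (r < r_out)
        seq.append(image[mask].mean() if mask.any() else 0.0)
    return np.array(seq)

# synthetic "galaxy": circular exponential light profile
y, x = np.indices((64, 64))
image = np.exp(-np.hypot(x - 32, y - 32) / 8.0)
seq = isophote_sequence(image, center=(32, 32))
```

    For an early-type-like smooth profile the sequence declines monotonically, whereas spiral structure would imprint oscillations on it, which is the kind of signature a 1D classifier can pick up.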

  2. ABSTRACT

    Wide-field surveys probe clustered scalar fields – such as galaxy counts, the lensing potential, etc. – that are sensitive to different cosmological and astrophysical processes. Constraining such processes depends on the statistics that summarize the field. We explore the cumulative distribution function (CDF) as a summary of the galaxy lensing convergence field. Using a suite of N-body light-cone simulations, we show the CDFs' constraining power is modestly better than that of the second and third moments, as CDFs approximately capture information from all moments. We study the practical aspects of applying CDFs to data, using the Dark Energy Survey (DES Y3) data as an example, and compute the impact of different systematics on the CDFs. The contributions from the point spread function and the reduced shear approximation are ≲1 per cent of the total signal. Source clustering effects and baryon imprints contribute 1–10 per cent. Enforcing scale cuts to limit systematics-driven biases in parameter constraints degrades those constraints noticeably, and the degradation is similar for the CDFs and the moments. We detect correlations between the observed convergence field and the shape noise field at 13σ. The non-Gaussian correlations in the noise field must be modelled accurately to use the CDFs, or other statistics sensitive to all moments, as a rigorous cosmology tool.
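    The CDF summary itself is simple to evaluate. A minimal sketch (using uncorrelated Gaussian noise as a stand-in for a real, correlated convergence map — not DES data): count the fraction of pixels below each of a grid of thresholds, alongside the low-order moments it is compared against.

```python
import numpy as np

rng = np.random.default_rng(0)
# Gaussian white-noise stand-in for a convergence map (real maps are correlated)
kappa = rng.normal(0.0, 0.02, size=(256, 256))

# CDF summary: fraction of pixels below each of a grid of thresholds
thresholds = np.linspace(-0.06, 0.06, 25)
cdf = np.array([(kappa < t).mean() for t in thresholds])

# low-order moments for comparison
var = kappa.var()
skew = ((kappa - kappa.mean())**3).mean() / var**1.5
```

    In practice the field would be smoothed at several scales first, and the CDFs evaluated per smoothing scale; the vector `cdf` then plays the role of the data vector in the likelihood.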

  3. ABSTRACT

    We present an alternative calibration of the MagLim lens sample redshift distributions from the Dark Energy Survey (DES) first 3 yr of data (Y3). The new calibration is based on a combination of a self-organizing-map-based scheme and clustering redshifts to estimate redshift distributions and their inherent uncertainties, and is expected to be more accurate than the original DES Y3 redshift calibration of the lens sample. We describe the methodology in detail, validate it on simulations, and discuss the main effects dominating our error budget. The new calibration is in fair agreement with the fiducial DES Y3 n(z) calibration, with only mild differences (<3σ) in the means and widths of the distributions. We study the impact of this new calibration on cosmological constraints, analysing DES Y3 galaxy clustering and galaxy–galaxy lensing measurements, assuming a Lambda cold dark matter cosmology. We obtain Ωm = 0.30 ± 0.04, σ8 = 0.81 ± 0.07, and S8 = 0.81 ± 0.04, which implies a ∼0.4σ shift in the Ωm − S8 plane compared to the fiducial DES Y3 results, highlighting the importance of the redshift calibration of the lens sample in multiprobe cosmological analyses.
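    The quoted agreement is expressed as shifts in the means and widths of n(z) in units of their uncertainties. A minimal sketch of such a comparison (the Gaussian n(z) curves and the uncertainty values here are made up for illustration):

```python
import numpy as np

def nz_moments(z, nz):
    """Mean and width of a (possibly unnormalized) n(z) on a uniform z grid."""
    dz = z[1] - z[0]
    p = nz / (nz.sum() * dz)                 # normalize to unit integral
    mean = (z * p).sum() * dz
    width = np.sqrt(((z - mean)**2 * p).sum() * dz)
    return mean, width

z = np.linspace(0.0, 2.0, 400)
nz_fid = np.exp(-0.5 * ((z - 0.60) / 0.12)**2)   # fiducial n(z) (made up)
nz_alt = np.exp(-0.5 * ((z - 0.61) / 0.13)**2)   # alternative calibration (made up)

m_fid, w_fid = nz_moments(z, nz_fid)
m_alt, w_alt = nz_moments(z, nz_alt)

# express the differences in units of assumed calibration uncertainties
sigma_mean, sigma_width = 0.01, 0.01             # illustrative values
dmean_sig = abs(m_alt - m_fid) / sigma_mean
dwidth_sig = abs(w_alt - w_fid) / sigma_width
```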

  4. ABSTRACT

    The fiducial cosmological analyses of imaging surveys like DES typically probe the Universe at redshifts z < 1. We present the selection and characterization of high-redshift galaxy samples using DES Year 3 data, and the analysis of their galaxy clustering measurements. In particular, we use galaxies that are fainter than those used in the previous DES Year 3 analyses and a Bayesian redshift scheme to define three tomographic bins with mean redshifts around z ∼ 0.9, 1.2, and 1.5, which extend the redshift coverage of the fiducial DES Year 3 analysis. These samples contain a total of about 9 million galaxies, and their galaxy density is more than two times higher than in the DES Year 3 fiducial case. We characterize the redshift uncertainties of the samples, including the usage of various spectroscopic and high-quality redshift samples, and we develop a machine-learning method to correct for correlations between galaxy density and survey observing conditions. The analysis of galaxy clustering measurements, with a total signal-to-noise ratio S/N ∼ 70 after scale cuts, yields robust cosmological constraints on a combination of the fraction of matter in the Universe Ωm and the Hubble parameter h, $\Omega _m h = 0.195^{+0.023}_{-0.018}$, and 2–3 per cent measurements of the amplitude of the galaxy clustering signals, probing galaxy bias and the amplitude of matter fluctuations, bσ8. A companion paper (in preparation) will present the cross-correlations of these high-z samples with cosmic microwave background lensing from Planck and the South Pole Telescope, and the cosmological analysis of those measurements in combination with the galaxy clustering presented in this work.
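    Correcting galaxy density for survey observing conditions can be sketched with a much simpler linear-regression analogue of the machine-learning method described above (mock data; the pixel count, property maps, and coefficients are all illustrative, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 5000
# mock standardized survey-property maps per sky pixel (e.g. seeing, depth)
props = rng.normal(size=(npix, 2))

# inject a known contamination: counts modulated by observing conditions
true_coeff = np.array([0.08, -0.05])
n_gal = rng.poisson(20.0 * (1.0 + props @ true_coeff))

# fit the modulation with least squares and build per-pixel weights
X = np.column_stack([np.ones(npix), props])
beta, *_ = np.linalg.lstsq(X, n_gal / n_gal.mean(), rcond=None)
weights = 1.0 / (X @ beta)

# weighted counts should decorrelate from the property maps
corr_before = np.corrcoef(props[:, 0], n_gal)[0, 1]
corr_after = np.corrcoef(props[:, 0], n_gal * weights)[0, 1]
```

    A machine-learning version replaces the linear fit with a non-linear regressor but follows the same logic: predict the systematic modulation of the counts and weight it away.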

  5. Free, publicly-accessible full text available October 20, 2024
  6. ABSTRACT

    We study the effect of magnification in the Dark Energy Survey Year 3 analysis of galaxy clustering and galaxy–galaxy lensing, using two different lens samples: a sample of luminous red galaxies, redMaGiC, and a sample with a redshift-dependent magnitude limit, MagLim. We account for the effect of magnification on both the flux and size selection of galaxies, accounting for systematic effects using the Balrog image simulations. We estimate the impact of magnification on the galaxy clustering and galaxy–galaxy lensing cosmology analysis, finding it to be a significant systematic for the MagLim sample. We show cosmological constraints from the galaxy clustering autocorrelation and galaxy–galaxy lensing signal with different magnification priors, finding broad consistency in cosmological parameters in ΛCDM and wCDM. However, when the magnification bias amplitude is allowed to be free, we find that the two-point correlation functions prefer an amplitude different from the fiducial input derived from the image simulations. We validate the magnification analysis by comparing the cross-clustering between lens bins with the prediction from the baseline analysis, which uses only the autocorrelation of the lens bins; this comparison indicates that systematics other than magnification may be the cause of the discrepancy. We show that adding the cross-clustering between lens redshift bins to the fit significantly improves the constraints on lens magnification parameters and allows uninformative priors to be used on magnification coefficients, without any loss of constraining power or prior volume concerns.
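    For a flux-limited sample, the magnification bias amplitude is commonly derived from the slope of the cumulative number counts at the magnitude limit, with the clustering magnification term scaling as 2(α − 1), where α = 2.5 d log10 N(<m)/dm. A sketch of estimating α from mock counts (the power-law counts model is illustrative; the size-selection and Balrog-based pieces of the paper's analysis are not modelled here):

```python
import numpy as np

def counts_slope(mags, m_lim, dm=0.1):
    """alpha = 2.5 * d log10 N(<m) / dm, evaluated at the magnitude limit."""
    n_lo = np.sum(mags < m_lim - dm)
    n_hi = np.sum(mags < m_lim)
    return 2.5 * (np.log10(n_hi) - np.log10(n_lo)) / dm

rng = np.random.default_rng(2)
# mock magnitudes with cumulative counts N(<m) roughly prop. to 10**(0.3 m) on [18, 24]
s = 0.3
u = rng.uniform(size=200_000)
mags = np.log10(u * (10**(s * 24) - 10**(s * 18)) + 10**(s * 18)) / s

alpha = counts_slope(mags, m_lim=23.0)      # ~2.5 * s = 0.75 for a pure power law
magnification_bias = 2.0 * (alpha - 1.0)    # sign/amplitude of the clustering term
```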

  7. ABSTRACT

    We present a method for mapping variations between probability distribution functions and apply this method within the context of measuring galaxy redshift distributions from imaging survey data. This method, which we name PITPZ for the probability integral transformations it relies on, uses a difference in curves between distribution functions in an ensemble as a transformation to apply to another distribution function, thus transferring the variation in the ensemble to the latter distribution function. This procedure is broadly applicable to the problem of uncertainty propagation. In the context of redshift distributions, for example, the uncertainty contribution due to certain effects can be studied effectively only in simulations, thus necessitating a transfer of variation measured in simulations to the redshift distributions measured from data. We illustrate the use of PITPZ by using the method to propagate photometric calibration uncertainty to redshift distributions of the Dark Energy Survey Year 3 weak lensing source galaxies. For this test case, we find that PITPZ yields a lensing amplitude uncertainty estimate due to photometric calibration error within 1 per cent of the truth, compared to as much as a 30 per cent underestimate when using traditional methods.
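    A simplified reading of the quantile-based transfer underlying PITPZ (a sketch of the general idea under assumed Gaussian inputs, not the published algorithm): shift the target distribution's quantile function by the fiducial-to-perturbed quantile difference measured in simulations.

```python
import numpy as np

def transfer_variation(samples_fid, samples_pert, samples_target, probs=None):
    """Shift the target's quantile curve by the fiducial-to-perturbed
    quantile difference (a simplified sketch of PIT-based transfer)."""
    if probs is None:
        probs = np.linspace(0.005, 0.995, 199)
    q_fid = np.quantile(samples_fid, probs)
    q_pert = np.quantile(samples_pert, probs)
    q_tgt = np.quantile(samples_target, probs)
    return q_tgt + (q_pert - q_fid)

rng = np.random.default_rng(3)
fid = rng.normal(0.60, 0.10, 50_000)     # fiducial simulated redshifts (made up)
pert = rng.normal(0.63, 0.10, 50_000)    # perturbed simulation: +0.03 shift
target = rng.normal(0.80, 0.15, 50_000)  # distribution estimated from "data"

q_new = transfer_variation(fid, pert, target)
mean_new = q_new.mean()                  # target mean, carrying the +0.03 shift
```

    The point is that the variation is learned where it can be measured (the simulation pair) and then imposed on a distribution where it cannot (the data), which is exactly the uncertainty-propagation setting the abstract describes.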

  8. ABSTRACT

    Recent cosmological analyses with large-scale structure and weak lensing measurements, usually referred to as 3 × 2pt, have had to discard considerable signal-to-noise from small scales due to our inability to accurately model non-linearities and baryonic effects. Galaxy–galaxy lensing, or the position–shear correlation between lens and source galaxies, is one of the three two-point correlation functions included in such analyses, usually estimated with the mean tangential shear. However, tangential shear measurements at a given angular scale θ or physical scale R carry information from all scales below that, forcing the scale cuts applied in real data to be significantly larger than the scale at which theoretical uncertainties become problematic. Recently, there have been a few independent efforts that aim to mitigate the non-locality of the galaxy–galaxy lensing signal. Here, we perform a comparison of the different methods, including the Y-transformation, the point-mass marginalization methodology, and the annular differential surface density statistic. We do the comparison at the level of cosmological constraints in a combined galaxy clustering and galaxy–galaxy lensing analysis. We find that all the estimators yield equivalent cosmological results assuming a simulated Rubin Observatory Legacy Survey of Space and Time (LSST) Year 1 like set-up and also when applied to DES Y3 data. With the LSST Y1 set-up, we find that the mitigation schemes yield S8 constraints ∼1.3 times tighter than applying larger scale cuts without any mitigation scheme.
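    The annular differential surface density mentioned above is often written as Υ(R) = ΔΣ(R) − (R0/R)² ΔΣ(R0), which nulls the contribution of scales below the cutoff R0. A toy numerical version (the power-law profile and radii are illustrative assumptions):

```python
import numpy as np

def upsilon(R, delta_sigma, R0):
    """Annular differential surface density:
    Upsilon(R) = DeltaSigma(R) - (R0/R)**2 * DeltaSigma(R0),
    which removes the (hard-to-model) signal from scales below R0."""
    ds0 = np.interp(R0, R, delta_sigma)
    return delta_sigma - (R0 / R)**2 * ds0

R = np.geomspace(0.5, 50.0, 30)      # radii in Mpc/h (illustrative)
delta_sigma = 20.0 / R               # toy power-law lensing profile
ups = upsilon(R, delta_sigma, R0=2.0)
```

    By construction Υ vanishes at R = R0 and approaches ΔΣ at large R, so only scales above R0, where the theory is trusted, contribute to the statistic.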

  9. ABSTRACT

    Cosmological analyses with type Ia supernovae (SNe Ia) often assume a single empirical relation between colour and luminosity (β) and do not account for varying host-galaxy dust properties. However, from studies of dust in large samples of galaxies, it is known that dust attenuation can vary significantly. Here, we take advantage of state-of-the-art modelling of galaxy properties to characterize dust parameters (the dust attenuation AV and a parameter describing the dust-law slope, RV) for 1100 Dark Energy Survey (DES) SN host galaxies. Utilizing optical and infrared data of the hosts alone, we find three key aspects of host dust that impact SN cosmology: (1) there exists a large range (∼1–6) of host RV; (2) high-stellar-mass hosts have RV on average ∼0.7 lower than that of low-mass hosts; (3) for a subsample of 81 spectroscopically classified SNe there is a significant (>3σ) correlation between the Hubble diagram residuals of red SNe Ia and the host RV, which, when corrected for, reduces the scatter by ∼13 per cent and the significance of the 'mass step' to ∼1σ. These represent independent confirmations of recent dust-based predictions that attempted to explain the puzzling 'mass step' and intrinsic scatter (σint) in SN Ia analyses.

  10. Abstract

    Gravitationally lensed supernovae (LSNe) are important probes of cosmic expansion, but they remain rare and difficult to find. Current cosmic surveys likely contain 5–10 LSNe in total, while next-generation experiments are expected to contain several hundred to a few thousand of these systems. We search for these systems in the observed Dark Energy Survey (DES) five-year SN fields – ten 3 sq. deg. regions of sky imaged in the griz bands approximately every six nights over five years. To perform the search, we utilize the DeepZipper approach: a multi-branch deep learning architecture trained on image-level simulations of LSNe that simultaneously learns spatial and temporal relationships from time series of images. We find that our method obtains an LSN recall of 61.13% and a false-positive rate of 0.02% on the DES SN field data. DeepZipper selected 2245 candidates from a magnitude-limited ($m_i < 22.5$) catalog of 3,459,186 systems. We employ human visual inspection to review the systems selected by the network and find three candidate LSNe in the DES SN fields.
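    The quoted recall and false-positive rate follow the standard confusion-matrix definitions. A small self-contained helper (illustrative bookkeeping only, not the DeepZipper code; the toy labels are made up):

```python
import numpy as np

def recall_and_fpr(labels, preds):
    """Recall = TP / (TP + FN); false-positive rate = FP / (FP + TN)."""
    labels = np.asarray(labels, dtype=bool)
    preds = np.asarray(preds, dtype=bool)
    tp = np.sum(preds & labels)
    fn = np.sum(~preds & labels)
    fp = np.sum(preds & ~labels)
    tn = np.sum(~preds & ~labels)
    return tp / (tp + fn), fp / (fp + tn)

# toy example: 4 true lenses and 6 non-lenses
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]
recall, fpr = recall_and_fpr(labels, preds)
```

    Because lenses are so rare, even a 0.02% false-positive rate over millions of systems yields thousands of candidates, which is why the visual-inspection stage is still needed.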
