

Search for: All records

Award ID contains: 1814370

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. ABSTRACT

    Our Universe is homogeneous and isotropic, and its perturbations obey translation and rotation symmetry. In this work, we develop translation and rotation equivariant normalizing flow (TRENF), a generative normalizing flow (NF) model which explicitly incorporates these symmetries, defining the data likelihood via a sequence of Fourier space-based convolutions and pixel-wise non-linear transforms. TRENF gives direct access to the high dimensional data likelihood p(x|y) as a function of the labels y, such as cosmological parameters. In contrast to traditional analyses based on summary statistics, the NF approach has no loss of information since it preserves the full dimensionality of the data. On Gaussian random fields, the TRENF likelihood agrees well with the analytical expression and saturates the Fisher information content in the labels y. On non-linear cosmological overdensity fields from N-body simulations, TRENF leads to significant improvements in constraining power over the standard power spectrum summary statistic. TRENF is also a generative model of the data, and we show that TRENF samples agree well with the N-body simulations it was trained on, and that the inverse mapping of the data agrees well with Gaussian white noise, both visually and in various summary statistics: when this is achieved perfectly, the resulting p(x|y) likelihood analysis becomes optimal. Finally, we develop a generalization of this model that can handle effects that break the symmetry of the data, such as the survey mask, which enables likelihood analysis on data without periodic boundaries.

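The layer structure this abstract describes — an isotropic Fourier-space convolution followed by a monotonic pixel-wise transform, each contributing a tractable log-Jacobian — can be sketched in a few lines of NumPy. This is an illustrative sketch only: the kernel function and the `alpha`/`beta` nonlinearity below are placeholder choices, not the paper's learned parametrization.

```python
import numpy as np

def trenf_layer(x, kernel_fn, alpha=0.9, beta=0.1):
    """One TRENF-style layer sketch on a 2D periodic grid.

    kernel_fn, alpha, beta are illustrative stand-ins for the
    learned components described in the abstract."""
    n = x.shape[0]
    # Wavenumber magnitudes on the 2D grid (periodic box).
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2)

    # Convolution is diagonal in Fourier space: multiplying by a kernel
    # that depends only on |k| enforces translation and rotation
    # equivariance by construction.
    w = kernel_fn(kmag)                        # real, positive kernel
    y = np.fft.ifft2(w * np.fft.fft2(x)).real
    # Log-determinant of this linear map: sum of log kernel values.
    logdet = np.sum(np.log(np.abs(w)))

    # Monotonic pixel-wise transform (illustrative) and its log-Jacobian.
    z = alpha * y + beta * np.tanh(y)
    logdet += np.sum(np.log(alpha + beta / np.cosh(y)**2))
    return z, logdet
```

Because the convolution is exact in Fourier space, shifting the input field cyclically shifts the output by the same amount, which is the equivariance property the abstract relies on.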
  2. ABSTRACT

    We present cosmological parameter constraints based on a joint modelling of galaxy–lensing cross-correlations and galaxy clustering measurements in the SDSS, marginalizing over small-scale modelling uncertainties using mock galaxy catalogues, without explicit modelling of galaxy bias. We show that our modelling method is robust to the impact of different choices for how galaxies occupy dark matter haloes and to the impact of baryonic physics (at the $\sim 2{{\ \rm per\ cent}}$ level in cosmological parameters), and we test for the impact of covariance on the likelihood analysis and of the survey window function on the theory computations. Applying our results to the measurements using galaxy samples from BOSS and lensing measurements using shear from SDSS galaxies and CMB lensing from Planck, with conservative scale cuts, we obtain $S_8\equiv \left(\frac{\sigma _8}{0.8228}\right)^{0.8}\left(\frac{\Omega _\mathrm{ m}}{0.307}\right)^{0.6}=0.85\pm 0.05$ (stat.) using LOWZ × SDSS galaxy lensing, and S8 = 0.91 ± 0.1 (stat.) using the combination of LOWZ and CMASS × Planck CMB lensing. We estimate the systematic uncertainty in the galaxy–galaxy lensing measurements to be $\sim 6{{\ \rm per\ cent}}$ (dominated by photometric redshift uncertainties) and in the galaxy–CMB lensing measurements to be $\sim 3{{\ \rm per\ cent}}$, from small-scale modelling uncertainties including baryonic physics.

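The S8 definition quoted in this abstract is a one-line function of σ8 and Ωm; the pivot values 0.8228 and 0.307 are taken directly from the abstract, so the fiducial cosmology gives S8 = 1 by construction:

```python
def S8(sigma8, omega_m):
    """S8 as defined in the abstract:
    (sigma8 / 0.8228)**0.8 * (Omega_m / 0.307)**0.6"""
    return (sigma8 / 0.8228) ** 0.8 * (omega_m / 0.307) ** 0.6
```

For example, S8(0.8228, 0.307) returns exactly 1, and lowering either σ8 or Ωm below its pivot value lowers S8, which is why the reported S8 = 0.85 ± 0.05 implies a mildly lower clustering amplitude than the fiducial choice.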
  3. The goal of generative models is to learn the intricate relations within the data in order to create new simulated data, but current approaches fail in very high dimensions. When the true data-generating process is based on physical processes, these impose symmetries and constraints, and the generative model can be created by learning an effective description of the underlying physics, which enables scaling of the generative model to very high dimensions. In this work, we propose Lagrangian deep learning (LDL) for this purpose, applying it to learn outputs of cosmological hydrodynamical simulations. The model uses layers of Lagrangian displacements of particles describing the observables to learn the effective physical laws. The displacements are modeled as the gradient of an effective potential, which explicitly satisfies the translational and rotational invariance. The total number of learned parameters is only of order 10, and they can be viewed as effective theory parameters. We combine the fast particle mesh (FastPM) N-body solver with LDL and apply it to a wide range of cosmological outputs, from the dark matter to the stellar maps, gas density, and temperature. The computational cost of LDL is nearly four orders of magnitude lower than that of the full hydrodynamical simulations, yet it outperforms them at the same resolution. We achieve this with only of order 10 layers from the initial conditions to the final output, in contrast to typical cosmological simulations with thousands of time steps. This opens up the possibility of analyzing cosmological observations entirely within this framework, without the need for large dark-matter simulations.

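A minimal sketch of one LDL-style displacement layer, under stated assumptions: a 2D periodic grid, an illustrative Gaussian-smoothed Green's-function filter, and placeholder effective-theory parameters `alpha`, `gamma`, `ks` (the paper's actual filter and parametrization differ). The key structural point from the abstract is reproduced: the potential is sourced by a power of the density, the displacement is its gradient, and computing both spectrally with a filter that depends only on |k| makes the layer translationally and rotationally invariant by construction.

```python
import numpy as np

def ldl_displacement(delta, alpha=0.1, gamma=1.0, ks=2.0):
    """One LDL-style layer sketch: displacement field from the gradient
    of an effective potential sourced by a power of the density.

    alpha, gamma, ks are illustrative stand-ins for the order-10
    effective-theory parameters described in the abstract."""
    ngrid = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(ngrid)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    kmag = np.sqrt(k2)

    # Source term: a power of the (non-negative) density 1 + delta.
    rho = np.clip(1.0 + delta, 0.0, None)
    src_k = np.fft.fft2(rho ** gamma)

    # Isotropic Green's-function-like filter (illustrative): inverse
    # Laplacian with Gaussian small-scale smoothing; depends only on |k|.
    filt = np.exp(-(kmag / ks) ** 2) / np.where(k2 > 0, k2, 1.0)
    filt[0, 0] = 0.0          # remove the mean (k = 0) mode
    pot_k = filt * src_k

    # Displacement = -alpha * grad(potential), computed spectrally.
    dx = -alpha * np.fft.ifft2(1j * kx * pot_k).real
    dy = -alpha * np.fft.ifft2(1j * ky * pot_k).real
    return dx, dy
```

In a full LDL-style stack, of order 10 such layers would be composed, each moving the particles by its displacement field before the next layer's potential is computed; here only the single-layer building block is sketched.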