Search for: All records

Creators/Authors contains: "Bull, Philip"


  1. Abstract Most efforts to detect signatures of dynamical dark energy (DE) are focused on late times, z ≲ 2, where the DE component begins to dominate the cosmic energy density. Many theoretical models involving dynamical DE exhibit a ‘freezing’ equation of state, however, where w → −1 at late times, with a transition to a ‘tracking’ behaviour at earlier times (with w ≫ −1 at sufficiently high redshift). In this paper, we study whether constraints on background distance indicators from large-scale structure (LSS) surveys in the post-reionization matter-dominated regime, 2 ≲ z ≲ 6, are sensitive to this behaviour, on the basis that the DE component should remain detectable (despite being strongly subdominant) in this redshift range given sufficiently precise observations. Using phenomenological models inspired by parameter space studies of Horndeski (generalized scalar-tensor) theories, we show how existing CMB and LSS measurements constrain the DE equation of state in the matter-dominated era, and examine how forthcoming galaxy surveys and 21 cm intensity mapping instruments can improve constraints in this regime at the background level. We also find that the combination of existing CMB and LSS constraints with DESI will already come close to offering the best possible constraints on H0 using BAO/galaxy power spectrum measurements, and that either a spectroscopic follow-up of the LSST galaxy sample (e.g. MegaMapper or SpecTel) or a Stage 2/PUMA-like intensity mapping survey, both at z ≳ 2, would offer better constraints on the class of models considered here than a comparable cosmic variance-limited galaxy survey at z ≲ 1.5.
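As an illustration of the freezing/tracking behaviour described in this abstract, the sketch below implements a hypothetical equation of state that transitions smoothly between w = −1 at late times and w ≫ −1 at high redshift, and propagates it to H(z) through the Friedmann equation. The tanh parametrization and all parameter values here are illustrative choices, not those used in the paper.

```python
import numpy as np

def w_de(z, w_early=-0.4, z_t=3.0, dz=0.5):
    """Illustrative 'freezing' equation of state: w -> -1 at late times,
    tracking-like w_early at high redshift (hypothetical parametrization)."""
    return -1.0 + (w_early + 1.0) * 0.5 * (1.0 + np.tanh((z - z_t) / dz))

def rho_de_ratio(z, **kw):
    """rho_DE(z)/rho_DE(0) = exp(3 * integral_0^z (1 + w(z'))/(1 + z') dz')."""
    zp = np.linspace(0.0, z, 2001)
    zm = 0.5 * (zp[1:] + zp[:-1])                  # midpoint rule
    integrand = (1.0 + w_de(zm, **kw)) / (1.0 + zm)
    return np.exp(3.0 * np.sum(integrand * np.diff(zp)))

def hubble(z, H0=67.4, om=0.315, ode=0.685, **kw):
    """H(z) in km/s/Mpc for a flat matter + dynamical-DE model
    (radiation neglected; H0, om, ode are toy fiducial values)."""
    return H0 * np.sqrt(om * (1.0 + z)**3 + ode * rho_de_ratio(z, **kw))

# DE remains strongly subdominant in the matter-dominated era, 2 <~ z <~ 6,
# even though w departs significantly from -1 there.
for z in (0.0, 2.0, 5.0):
    f_de = 0.685 * rho_de_ratio(z) / (hubble(z) / 67.4)**2
    print(f"z={z:.1f}  w={w_de(z):+.3f}  H={hubble(z):7.1f} km/s/Mpc  f_DE={f_de:.3f}")
```

This makes the paper's premise concrete: the DE fraction f_DE drops to the percent level by z ≈ 5, so only very precise distance measurements in that range can discriminate tracking from a cosmological constant.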

  2. Abstract Combining the visibilities measured by an interferometer to form a cosmological power spectrum is a complicated process. In a delay-based analysis, the mapping between instrumental and cosmological space is not a one-to-one relation. Instead, neighbouring modes contribute to the power measured at one point, with their respective contributions encoded in the window functions. To better understand the power measured by an interferometer, we assess the impact of instrument characteristics and analysis choices on these window functions. Focusing on the Hydrogen Epoch of Reionization Array (HERA) as a case study, we find that long-baseline observations correspond to enhanced low-k tails of the window functions, which facilitate foreground leakage, whilst an informed choice of bandwidth and frequency taper can reduce said tails. With simple test cases and realistic simulations, we show that, apart from tracing mode mixing, the window functions help accurately reconstruct the power spectrum estimator of simulated visibilities. The window functions depend strongly on the beam chromaticity and less on its spatial structure – a Gaussian approximation, ignoring side lobes, is sufficient. Finally, we investigate the potential of asymmetric window functions, down-weighting the contribution of low-k power to avoid foreground leakage. The window functions presented here correspond to the latest HERA upper limits for the full Phase I data. They allow an accurate reconstruction of the power spectrum measured by the instrument and will be used in future analyses to confront theoretical models and data directly in cylindrical space.
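The role of the frequency taper discussed in this abstract can be sketched with a toy delay transform: the window function of a delay mode is shaped by the squared Fourier transform of the taper, and a smooth taper suppresses the low-k sidelobe tails through which foregrounds leak. The channel count, padding factor, and tail region below are toy choices, and numpy's Blackman window stands in for the Blackman-Harris taper; this is not the HERA analysis itself.

```python
import numpy as np

nfreq = 256                          # channels across the band (toy value)
taper = np.blackman(nfreq)           # smooth taper (stand-in for Blackman-Harris)
boxcar = np.ones(nfreq)              # no taper

def window_profile(t):
    """|FT(taper)|^2, peak-normalized: the delay-space window of one mode."""
    w = np.abs(np.fft.fft(t, n=8 * nfreq))**2   # zero-padded for resolution
    return w / w.max()

win_taper = window_profile(taper)
win_box = window_profile(boxcar)

# Compare sidelobe ("tail") power far from the main lobe, where smooth
# foregrounds leak into cosmological modes.
tail = slice(100, 4 * nfreq)         # bins well away from the main lobe
print("boxcar tail max:", win_box[tail].max())
print("taper  tail max:", win_taper[tail].max())
```

The tapered window's tails sit orders of magnitude below the boxcar's, at the cost of a wider main lobe, which is the bandwidth/taper trade-off the abstract refers to.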
  3. Abstract

    We report the most sensitive upper limits to date on the 21 cm epoch of reionization power spectrum, using 94 nights of observing with Phase I of the Hydrogen Epoch of Reionization Array (HERA). Using similar analysis techniques as in previously reported limits, we find at 95% confidence that Δ²(k = 0.34 h Mpc⁻¹) ≤ 457 mK² at z = 7.9 and that Δ²(k = 0.36 h Mpc⁻¹) ≤ 3496 mK² at z = 10.4, an improvement by a factor of 2.1 and 2.6, respectively. These limits are mostly consistent with thermal noise over a wide range of k after our data quality cuts, despite performing a relatively conservative analysis designed to minimize signal loss. Our results are validated with both statistical tests on the data and end-to-end pipeline simulations. We also report updated constraints on the astrophysics of reionization and the cosmic dawn. Using multiple independent modeling and inference techniques previously employed by the HERA Collaboration, we find that the intergalactic medium must have been heated above the adiabatic cooling limit at least as early as z = 10.4, ruling out a broad set of so-called “cold reionization” scenarios. If this heating is due to high-mass X-ray binaries during the cosmic dawn, as is generally believed, our result’s 99% credible interval excludes the local relationship between soft X-ray luminosity and star formation and thus requires heating driven by evolved low-metallicity stars.

  4. Abstract We present a Bayesian jackknife test for assessing the probability that a data set contains biased subsets, and, if so, which of the subsets are likely to be biased. The test can be used to assess the presence and likely source of statistical tension between different measurements of the same quantities in an automated manner. Under certain broadly applicable assumptions, the test is analytically tractable. We also provide an open-source code, chiborg, that performs both analytic and numerical computations of the test on general Gaussian-distributed data. After exploring the information theoretical aspects of the test and its performance with an array of simulations, we apply it to data from the Hydrogen Epoch of Reionization Array (HERA) to assess whether different sub-seasons of observing can justifiably be combined to produce a deeper 21 cm power spectrum upper limit. We find that, with a handful of exceptions, the HERA data in question are statistically consistent and this decision is justified. We conclude by pointing out the wide applicability of this test, including to CMB experiments and the H0 tension.
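A stripped-down, two-measurement version of this kind of Gaussian consistency test can be written in a few lines (this is a toy illustration, not the chiborg implementation): under the "consistent" hypothesis both measurements share one mean, while the "biased" hypothesis adds a Gaussian-distributed offset to the second measurement, and the analytic Gaussian evidences decide between them. The noise level and prior widths below are made-up values.

```python
import numpy as np

def log_gauss(x, cov):
    """Log-density of a zero-mean multivariate normal N(0, cov) at x."""
    x = np.asarray(x, float)
    _, logdet = np.linalg.slogdet(cov)
    chi2 = x @ np.linalg.solve(cov, x)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet + chi2)

def bias_log_odds(x1, x2, sigma=1.0, s_mu=10.0, s_b=5.0):
    """log evidence(biased) - log evidence(consistent) for two measurements.
    Toy priors: shared mean ~ N(0, s_mu^2); bias on x2 ~ N(0, s_b^2).
    Marginalizing Gaussians analytically just modifies the data covariance."""
    x = np.array([x1, x2])
    c_null = sigma**2 * np.eye(2) + s_mu**2 * np.ones((2, 2))
    c_bias = c_null + np.diag([0.0, s_b**2])
    return log_gauss(x, c_bias) - log_gauss(x, c_null)

print("consistent pair:", bias_log_odds(1.0, 1.2))   # negative: no bias needed
print("discrepant pair:", bias_log_odds(1.0, 9.0))   # positive: bias favored
```

Note the built-in Occam penalty: for consistent data the biased model is disfavored simply because its extra freedom is unused, which is what makes the jackknife decision automatic.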

  5. Abstract We introduce DAYENU, a linear, spectral filter for HI intensity mapping that achieves the desirable foreground mitigation and error minimization properties of inverse covariance weighting with minimal modeling of the underlying data. Beyond 21 cm power-spectrum estimation, our filter is suitable for any analysis where high dynamic-range removal of spectrally smooth foregrounds in irregularly (or regularly) sampled data is required, a requirement shared by many other intensity mapping techniques. Our filtering matrix is diagonalized by Discrete Prolate Spheroidal Sequences, which are an optimal basis to model band-limited foregrounds in 21 cm intensity mapping experiments in the sense that they maximally concentrate power within a finite region of Fourier space. We show that DAYENU enables access to large-scale line-of-sight modes that are inaccessible to tapered DFT estimators. Since these modes have the largest SNRs, DAYENU significantly increases the sensitivity of 21 cm analyses over tapered Fourier transforms. Slight modifications allow us to use DAYENU as a linear replacement for iterative delay CLEANing (DAYENUREST). We refer readers to the Code section at the end of this paper for links to examples and code.
  6. Abstract

    Motivated by the desire for wide-field images with well-defined statistical properties for 21 cm cosmology, we implement an optimal mapping pipeline that computes a maximum likelihood estimator for the sky using the interferometric measurement equation. We demonstrate this “direct optimal mapping” with data from the Hydrogen Epoch of Reionization Array (HERA) Phase I observations. After validating the pipeline with simulated data, we develop a maximum likelihood figure-of-merit for comparing four sky models at 166 MHz with a bandwidth of 100 kHz. The HERA data agree with the GLEAM catalogs to < 10%. After subtracting the GLEAM point sources, the HERA data discriminate between the different continuum sky models, providing most support for the model of Byrne et al. We report the computational cost of mapping the HERA Phase I data and project the cost for the HERA 320-antenna data; both are feasible with a modern server. The algorithm is broadly applicable to other interferometers and is valid for wide-field and noncoplanar arrays.
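The maximum likelihood map referred to above has the standard generalized least-squares form m̂ = (AᵀN⁻¹A)⁻¹AᵀN⁻¹d, where A encodes the measurement equation (sky to visibilities) and N is the noise covariance. The sketch below applies that estimator to a random, well-conditioned toy A rather than a real interferometric response; all dimensions and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nvis = 20, 100                      # toy sky pixels and visibilities

A = rng.normal(size=(nvis, npix))         # stand-in measurement matrix
m_true = rng.normal(size=npix)            # toy sky
noise_var = np.full(nvis, 0.01)           # per-visibility noise variance
d = A @ m_true + rng.normal(scale=np.sqrt(noise_var))

# Maximum likelihood (generalized least squares) map:
#   m_hat = (A^T N^-1 A)^-1 A^T N^-1 d
Ninv = np.diag(1.0 / noise_var)
lhs = A.T @ Ninv @ A                      # also the inverse map covariance
rhs = A.T @ Ninv @ d
m_hat = np.linalg.solve(lhs, rhs)

print("rms map error:", np.std(m_hat - m_true))
```

The matrix lhs = AᵀN⁻¹A is itself useful: its inverse is the map covariance, which is what gives a direct optimal map the "well-defined statistical properties" the abstract emphasizes.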

  7. Abstract We report upper limits on the Epoch of Reionization 21 cm power spectrum at redshifts 7.9 and 10.4, with 18 nights of data (∼36 hr of integration) from Phase I of the Hydrogen Epoch of Reionization Array (HERA). The Phase I data show evidence for systematics that can be largely suppressed with systematic models, down to a dynamic range of ∼10⁹ with respect to the peak foreground power. This yields a 95% confidence upper limit on the 21 cm power spectrum of Δ₂₁² ≤ (30.76)² mK² at k = 0.192 h Mpc⁻¹ at z = 7.9, and also Δ₂₁² ≤ (95.74)² mK² at k = 0.256 h Mpc⁻¹ at z = 10.4. At z = 7.9, these limits are the most sensitive to date by over an order of magnitude. While we find evidence for residual systematics at low line-of-sight Fourier k∥ modes, at high k∥ modes we find our data to be largely consistent with thermal noise, an indicator that the system could benefit from deeper integrations. The observed systematics could be due to radio frequency interference, cable subreflections, or residual instrumental cross-coupling, and warrant further study. This analysis emphasizes algorithms that have minimal inherent signal loss, although we do perform a careful accounting in a companion paper of the small forms of loss or bias associated with the pipeline. Overall, these results are a promising first step in the development of a tuned, instrument-specific analysis pipeline for HERA, particularly as Phase II construction is completed en route to reaching the full sensitivity of the experiment.
  8. Abstract We describe the validation of the HERA Phase I software pipeline by a series of modular tests, building up to an end-to-end simulation. The philosophy of this approach is to validate the software and algorithms used in the Phase I upper-limit analysis on wholly synthetic data satisfying the assumptions of that analysis, not addressing whether the actual data meet these assumptions. We discuss the organization of this validation approach, the specific modular tests performed, and the construction of the end-to-end simulations. We explicitly discuss the limitations in scope of the current simulation effort. With mock visibility data generated from a known analytic power spectrum and a wide range of realistic instrumental effects and foregrounds, we demonstrate that the current pipeline produces power spectrum estimates that are consistent with known analytic inputs to within thermal noise levels (at the 2σ level) for k > 0.2 h Mpc⁻¹ for both bands and fields considered. Our input spectrum is intentionally amplified to enable a strong “detection” at k ∼ 0.2 h Mpc⁻¹, at the level of ∼25σ, with foregrounds dominating on larger scales and thermal noise dominating at smaller scales. Our pipeline is able to detect this amplified input signal after suppressing foregrounds with a dynamic range (foreground to noise ratio) of ≳10⁷. Our validation test suite uncovered several sources of scale-independent signal loss throughout the pipeline, whose amplitude is well-characterized and accounted for in the final estimates. We conclude with a discussion of the steps required for the next round of data analysis.