Abstract: Measuring one-point statistics in redshifted 21 cm intensity maps offers an opportunity to explore non-Gaussian features of the early Universe. We assess the impact of instrumental effects on measurements made with the Hydrogen Epoch of Reionization Array (HERA) by forward modeling observational and simulation data. Using HERA Phase I observations over 94 nights, we examine the second (m2, variance) and third (m3) moments of images. We employ the DAYENU filtering method for foreground removal and reduce simulated foreground residuals to 10% of the 21 cm signal residuals. In noiseless cosmological simulations, the amplitudes of one-point statistics measurements are significantly reduced by the instrument response and further reduced by wedge-filtering. Analyses with wedge-filtered observational data, along with expected noise simulations, show that systematics alter the probability distribution of the map pixels. A likelihood analysis based on the observational data shows m2 measurements disfavor the cold reionization model characterized by inefficient X-ray heating, in line with other power spectra measurements. Small signals in m3 due to the instrument response of the Phase I observation and wedge-filtering make it challenging to use these non-Gaussian statistics to explore model parameters. Forecasts with the full HERA array predict high signal-to-noise ratios for m2, m3, and S3 assuming no foregrounds, but wedge-filtering drastically reduces these ratios. This work demonstrates conclusively that a comprehensive understanding of instrumental effects on m2 and m3 is essential for their use as a cosmological probe, given their dependence on the underlying model.
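As a rough illustration of the one-point statistics referred to above, the sketch below computes the second (m2) and third (m3) central moments of a map's pixel values, together with one common normalized-third-moment convention for S3. The function name and the skewness-style normalization are assumptions made for illustration; no instrument response, noise, or wedge-filtering is modeled.

```python
import numpy as np

def one_point_statistics(image):
    """Second (m2) and third (m3) central moments of the pixel distribution,
    plus a normalized third moment S3 = m3 / m2**1.5 (a skewness-style
    convention assumed here for illustration)."""
    pixels = np.asarray(image, dtype=float).ravel()
    delta = pixels - pixels.mean()     # fluctuation about the mean pixel value
    m2 = np.mean(delta**2)             # variance of the map pixels
    m3 = np.mean(delta**3)             # third central moment (non-Gaussianity)
    return m2, m3, m3 / m2**1.5

# A Gaussian mock map should give m3 near zero, so a significant m3 in a
# clean map would indicate non-Gaussianity in the pixel distribution.
rng = np.random.default_rng(0)
print(one_point_statistics(rng.normal(size=(128, 128))))
```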
Discrete Time Signal Localization Accuracy in Gaussian Noise at Low Signal to Noise Ratios
Convolution and matched filtering are often used to detect a known signal in the presence of noise. The probability of detection and the probability of missed detection are well-known and widely used statistics. Often, however, we are interested not only in the probability of detecting a signal but also in accurately estimating when the signal occurred and in the error statistics associated with that time measurement. Accurately representing the timing error is important for geolocation schemes, such as Time of Arrival (TOA) and Time Difference of Arrival (TDOA), as well as other applications. The Cramér-Rao Lower Bound (CRLB) and other, tighter, bounds have been calculated for the error variance of Time of Arrival estimators. However, achieving these bounds requires an amount of interpolation of the signal of interest that may exceed what computational constraints allow. Furthermore, at low Signal to Noise Ratios (SNRs), the probability distribution of the error is non-Gaussian and depends on the shape of the signal of interest. In this paper we introduce a method for characterizing the localization accuracy of the matched filtering operation when used to detect a discrete signal in Additive White Gaussian Noise (AWGN) without additional interpolation. The actual localization accuracy depends on the shape of the signal being detected. We develop a statistical method for analyzing the localization error probability mass function for arbitrary signal shapes at any SNR. Finally, we use the proposed method to calculate the error probability mass functions for a few signals commonly used in detection scenarios. These illustrative results exemplify the kinds of statistical characterizations the method can generate and demonstrate its unique ability to calculate the non-Gaussian, signal-shape-dependent error distribution at low Signal to Noise Ratios. The error variance calculated using the proposed method closely tracks simulation results, deviating from the Cramér-Rao Lower Bound at low Signal to Noise Ratios.
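As a hedged complement to the abstract above, the following Monte Carlo sketch estimates the localization-error probability mass function of a matched filter applied to a known discrete signal in AWGN, with no interpolation of the correlator output. The function name, the per-sample SNR convention, and the padding choice are illustrative assumptions; this is a simulation cross-check, not the paper's analytic method.

```python
import numpy as np

def localization_error_pmf(signal, snr_db, n_trials=5000, seed=0):
    """Empirical PMF of the matched-filter peak-location error (in samples)
    for a known signal embedded in additive white Gaussian noise."""
    rng = np.random.default_rng(seed)
    s = np.asarray(signal, dtype=float)
    n, pad = len(s), 4 * len(s)
    true_start = pad                                    # where the signal begins
    # Per-sample SNR convention: average signal power over noise variance.
    sigma = np.sqrt(np.mean(s**2) / 10 ** (snr_db / 10))
    counts = np.zeros(2 * pad + 1)                      # histogram of peak offsets
    for _ in range(n_trials):
        x = rng.normal(0.0, sigma, 2 * pad + n)
        x[true_start:true_start + n] += s
        corr = np.correlate(x, s, mode="valid")         # matched-filter output
        err = int(np.argmax(corr)) - true_start         # localization error (samples)
        counts[err + pad] += 1
    return np.arange(-pad, pad + 1), counts / n_trials

# Example: a rectangular pulse at 0 dB per-sample SNR; the second moment of
# the PMF gives the mean-squared localization error.
offsets, pmf = localization_error_pmf(np.ones(32), snr_db=0.0)
print("mean-squared error ≈", np.sum(pmf * offsets**2))
```

In this sketch, the PMF concentrates at zero offset at high SNR, while at low SNR large, signal-shape-dependent outlier errors dominate the error variance, consistent with the threshold behavior described in the abstract.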
- Award ID(s): 2244365
- PAR ID: 10536992
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Access
- Volume: 11
- ISSN: 2169-3536
- Page Range / eLocation ID: 109595 to 109602
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract: Impairments such as time-varying phase noise (PHN) and carrier frequency offset (CFO) result in loss of synchronization and poor performance of multi-relay communication systems. Joint estimation of these impairments is necessary in order to correctly decode the received signal at the destination. In this paper, we address spectrally efficient multi-relay transmission scenarios where all the relays simultaneously communicate with the destination. We propose an iterative pilot-aided algorithm based on expectation conditional maximization for joint estimation of multipath channels, Wiener PHNs, and CFOs in decode-and-forward-based multi-relay orthogonal frequency division multiplexing systems. Next, a new expression for the hybrid Cramér-Rao lower bound (HCRB) for the multi-parameter estimation problem is derived. Finally, an iterative receiver based on an extended Kalman filter for joint data detection and PHN tracking is employed. Numerical results show that the proposed estimator outperforms existing algorithms and that its mean square error performance is close to the derived HCRB at different signal-to-noise ratios for different PHN variances. In addition, the combined estimation algorithm and the iterative receiver can significantly improve average bit-error rate (BER) performance compared with existing algorithms. Moreover, the BER performance of the proposed system is close to the ideal case of perfect estimation of the channel impulse responses, PHNs, and CFOs.
- Modulating the polarization of excitation light, resolving the polarization of emitted fluorescence, and point spread function (PSF) engineering have been widely leveraged for measuring the orientation of single molecules. Typically, the performance of these techniques is optimized and quantified using the Cramér-Rao bound (CRB), which describes the best possible measurement variance of an unbiased estimator. However, the CRB is a local measure and requires exhaustive sampling across the measurement space to fully characterize measurement precision. We develop a global variance upper bound (VUB) for fast quantification and comparison of orientation measurement techniques. Our VUB tightly bounds the diagonal elements of the CRB matrix from above, overestimating the mean CRB by ~34%. However, compared to directly calculating the mean CRB over orientation space, we are able to calculate the VUB ~1000 times faster.
- The two-point-source resolution criterion is widely used to quantify the performance of imaging systems. The two main approaches for the computation of the two-point-source resolution are the detection theoretic and visual analyses. The first assumes a shift-invariant system and lacks the ability to incorporate two different point spread functions (PSFs), which may be required in certain situations like computing axial resolution. The latter approach, which includes the Rayleigh criterion, relies on the peak-to-valley ratio and does not properly account for the presence of noise. We present a heuristic generalization of the visual two-point-source resolution criterion using Gaussian processes (GP). This heuristic criterion is applicable to both shift-invariant and shift-variant imaging modalities. This criterion can also incorporate different definitions of resolution expressed in terms of varying peak-to-valley ratios. Our approach implicitly incorporates information about noise statistics such as the variance or signal-to-noise ratio by making assumptions about the spatial correlation of PSFs in the form of kernel functions. Also, it does not rely on an analytic form of the PSF.
- Zero-noise extrapolation (ZNE) is a widely used quantum error mitigation technique that artificially amplifies circuit noise and then extrapolates the results to the noise-free circuit. A common ZNE approach is Richardson extrapolation, which relies on polynomial interpolation. Despite its simplicity, efficient implementations of Richardson extrapolation face several challenges, including approximation errors from the non-polynomial behavior of noise channels, overfitting due to polynomial interpolation, and exponentially amplified measurement noise. This paper provides a comprehensive analysis of these challenges, presenting bias and variance bounds that quantify the approximation errors. Additionally, our results provide an estimate of the sample complexity required to reach any target precision. We further extend the analysis to polynomial least-squares-based extrapolation, which mitigates measurement noise and avoids overfitting. Finally, we propose a strategy for simultaneously mitigating circuit and algorithmic errors in the Trotter-Suzuki algorithm by jointly scaling the time step size and the noise level. This strategy provides a practical tool to enhance the reliability of near-term quantum computations. We support our theoretical findings with numerical experiments.
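To make the extrapolation step in the last item concrete, here is a minimal sketch of Richardson-style zero-noise extrapolation alongside a low-degree least-squares variant. The function names, scale factors, and example expectation values are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def richardson_zne(noise_scales, expectation_values):
    """Richardson-style ZNE: fit a degree-(k-1) polynomial exactly through k
    expectation values measured at amplified noise levels, then evaluate it
    at zero noise. Exact interpolation is what makes the approach prone to
    overfitting and to amplified measurement noise as k grows."""
    lam = np.asarray(noise_scales, dtype=float)
    y = np.asarray(expectation_values, dtype=float)
    coeffs = np.polyfit(lam, y, deg=len(lam) - 1)   # exact polynomial interpolation
    return np.polyval(coeffs, 0.0)                  # extrapolate to lambda = 0

def least_squares_zne(noise_scales, expectation_values, deg=1):
    """Low-degree least-squares variant: accepts a small bias in exchange for
    much less sensitivity to shot noise in the measured expectation values."""
    coeffs = np.polyfit(noise_scales, expectation_values, deg=deg)
    return np.polyval(coeffs, 0.0)

# Illustrative expectation values measured at noise scale factors 1, 2, 3.
scales, values = [1.0, 2.0, 3.0], [0.91, 0.83, 0.76]
print(richardson_zne(scales, values), least_squares_zne(scales, values))
```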