
Title: Grid-based minimization at scale: Feldman-Cousins corrections for light sterile neutrino search
High Energy Physics (HEP) experiments generally employ sophisticated statistical methods to present results in searches for new physics. In searches for sterile neutrinos, likelihood ratio tests applied to short-baseline neutrino oscillation data are used to construct confidence intervals for the parameters of interest. A test statistic of the form Δχ² is often used to form these confidence intervals; however, this approach can lead to statistical inaccuracies due to the small signal rate in the region of interest. In this paper, we present a computational model for the computationally expensive Feldman-Cousins corrections, which construct statistically accurate confidence intervals for neutrino oscillation analyses. The program performs a grid-based minimization over oscillation parameters and is written in C++. Our algorithms make use of vectorization through Eigen3, yielding a single-core speed-up of 350× compared to the original implementation, and achieve MPI data parallelism by employing DIY. We demonstrate the strong scaling of the application at High-Performance Computing (HPC) sites. We utilize HDF5, along with HighFive, to write the results of the calculation to file.
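The Feldman-Cousins construction described in the abstract can be sketched for the simplest possible case: a single-bin Poisson counting experiment with a signal strength s scanned on a grid and a known background b. This is an illustrative sketch only, not the paper's production code (which grids over oscillation parameters and uses Eigen3/MPI); all names here are hypothetical.

```cpp
// Feldman-Cousins 90% CL interval for a single-bin Poisson counting
// experiment: signal s scanned on a grid, known background b.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Delta chi^2 = -2 ln [ L(n | s + b) / L(n | s_hat + b) ], with the
// best-fit signal clipped at the physical boundary: s_hat = max(0, n - b).
double delta_chi2(int n, double s, double b) {
    double mu = s + b;
    double mu_hat = std::max(static_cast<double>(n), b);  // s_hat + b
    double t = 2.0 * (mu - mu_hat);
    if (n > 0) t -= 2.0 * n * std::log(mu / mu_hat);
    return t;
}

// Feldman-Cousins critical value at grid point s: the 90% quantile of
// delta_chi2 over toy experiments thrown from Poisson(s + b).
double critical_value(double s, double b, int ntoys, std::mt19937& rng) {
    std::poisson_distribution<int> pois(s + b);
    std::vector<double> dchi2(ntoys);
    for (int t = 0; t < ntoys; ++t) dchi2[t] = delta_chi2(pois(rng), s, b);
    std::sort(dchi2.begin(), dchi2.end());
    return dchi2[ntoys * 9 / 10];
}

// Grid scan: s belongs to the 90% CL interval for the observed count
// n_obs iff its delta_chi2 does not exceed the toy-based critical value
// (replacing the naive Wilks cut of 2.71 for one parameter).
std::vector<double> fc_interval(int n_obs, double b,
                                const std::vector<double>& grid,
                                int ntoys = 5000) {
    std::mt19937 rng(12345);  // fixed seed for reproducibility
    std::vector<double> accepted;
    for (double s : grid)
        if (delta_chi2(n_obs, s, b) <= critical_value(s, b, ntoys, rng))
            accepted.push_back(s);
    return accepted;
}
```

In the real analysis each grid point requires toy experiments that are themselves fits over the oscillation parameters, which is what makes the correction expensive and motivates the vectorized, MPI-parallel implementation.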
Editors: Biscarat, C.; Campana, S.; Hegner, B.; Roiser, S.; Rovelli, C.I.; Stewart, G.A.
Journal Name: EPJ Web of Conferences
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract: We evaluate the statistical significance of the 3+1 sterile-neutrino hypothesis using ν_e and ν̄_e disappearance data from reactor, solar and gallium radioactive source experiments. Concerning the latter, we investigate the implications of the recent BEST results. For reactor data we focus on relative measurements independent of flux predictions. For the problem at hand, the usual χ²-approximation to hypothesis testing based on Wilks' theorem has been shown in the literature to be inaccurate. We therefore present results based on Monte Carlo simulations, and find that this typically reduces the significance by roughly 1σ with respect to the naïve expectation. We find no significant indication in favor of sterile-neutrino oscillations from reactor data. On the other hand, gallium data (dominated by the BEST result) show more than 5σ of evidence supporting the sterile-neutrino hypothesis, favoring oscillation parameters in agreement with constraints from reactor data. This explanation is, however, in significant tension (∼3σ) with solar neutrino experiments. In order to assess the robustness of the signal for gallium experiments we present a discussion of the impact of cross-section uncertainties on the results.
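The gap between Wilks' theorem and Monte Carlo significances noted in this abstract can be illustrated with a minimal toy: a bounded Poisson counting experiment where the signal is constrained to be non-negative. A hypothetical sketch, not the analysis code of the cited work; the fixed seed and counts are illustrative.

```cpp
// Monte Carlo vs. Wilks p-value for a background-only (s = 0) test in a
// single-bin Poisson experiment with known background b.
#include <cmath>
#include <random>

// -2 ln likelihood-ratio for s = 0 given observed count n; the best-fit
// signal is clipped at zero, so mu_hat = max(n, b).
double q0(int n, double b) {
    double mu_hat = std::max(static_cast<double>(n), b);
    double t = 2.0 * (b - mu_hat);
    if (n > 0) t -= 2.0 * n * std::log(b / mu_hat);
    return t;
}

// Monte Carlo p-value: fraction of background-only toys whose q0 is at
// least as large as the observed value.
double p_mc(double q0_obs, double b, int ntoys, unsigned seed = 1) {
    std::mt19937 rng(seed);
    std::poisson_distribution<int> pois(b);
    int n_exceed = 0;
    for (int t = 0; t < ntoys; ++t)
        if (q0(pois(rng), b) >= q0_obs) ++n_exceed;
    return static_cast<double>(n_exceed) / ntoys;
}

// Naive Wilks p-value: upper tail of a one-degree-of-freedom chi-square,
// P(chi2_1 > x) = erfc(sqrt(x / 2)).
double p_wilks(double q0_obs) { return std::erfc(std::sqrt(q0_obs / 2.0)); }
```

For small expected counts the toy-based p-value differs noticeably from the chi-square tail, which is the discreteness/boundary effect the authors handle with full Monte Carlo simulations.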
  2. Abstract

    The future Ricochet experiment aims at searching for new physics in the electroweak sector by providing a high-precision measurement of the Coherent Elastic Neutrino-Nucleus Scattering (CENNS) process down to the sub-100 eV nuclear recoil energy range. The experiment will deploy a kg-scale low-energy-threshold detector array combining Ge and Zn target crystals 8.8 m away from the 58 MW research nuclear reactor core of the Institut Laue-Langevin (ILL) in Grenoble, France. Currently, the Ricochet Collaboration is characterizing the backgrounds at its future experimental site in order to optimize the experiment's shielding design. The most threatening background component, which cannot be actively rejected by particle identification, consists of keV-scale neutron-induced nuclear recoils. These initial fast neutrons are generated by the reactor core and surrounding experiments (reactogenics), and by cosmic rays producing primary neutrons and muon-induced neutrons in the surrounding materials. In this paper, we present the Ricochet neutron background characterization using ³He proportional counters, which exhibit a high sensitivity to thermal, epithermal and fast neutrons. We compare these measurements to the Ricochet Geant4 simulations to validate our reactogenic and cosmogenic neutron background estimations. Eventually, we present our estimated neutron background for the future Ricochet experiment and the resulting CENNS detection significance. Our results show that, depending on the effectiveness of the muon veto, we expect a total nuclear recoil background rate between 44 ± 3 and 9 ± 2 events/day/kg in the CENNS region of interest, i.e. between 50 eV and 1 keV. We therefore found that the Ricochet experiment should reach a statistical significance of 4.6 to 13.6 σ for the detection of CENNS after one reactor cycle, when only the limiting neutron background is considered.
  3. The quantification of strain in three dimensions is a powerful tool for structural investigations, allowing for the direct consideration of the localization and delocalization of deformation in space, and potentially, in time. Furthermore, characterization of the distribution of strain in three dimensions may yield information concerning large-scale kinematics that may not be obtained through the traditional use of asymmetric fabrics. In this contribution, we present a streamlined methodology for the calculation of three-dimensional strain using objective approaches that allow for consideration of error assessment. This approach begins with the collection of suitable samples for strain analysis following either the Rf/ϕ or normalized Fry techniques. Samples are cut along three mutually perpendicular orientations using a set of jigs designed for use in a large oil saw. Cut faces are polished and scanned in high resolution. Scanned images are processed following a standard convention. The boundaries of objects are outlined as "Regions Of Interest" in the open-source program ImageJ and saved. A script reads the saved files of object outlines and statistically fits an ellipse to each digitized object. The parameters of fitted objects are then extracted and saved. Two-dimensional strain analyses are completed following the normalized Fry method or the Rf/ϕ technique following a bootstrap statistical approach. For the normalized Fry method, an objective fitting routine modified from Mulchrone (2013) is used to determine the parameters of the central void. For the Rf/ϕ method, an inverse straining routine is applied and tests the resulting object orientations against a random uniform distribution following a Kolmogorov-Smirnov test in order to obtain the sectional strain ratio and orientation.
Bootstrap sampling of Fry coordinates or objects results in a distribution of possible sectional strains that can be sampled for fitting of strain ellipsoids following the technique of Robin (2002). As such, the parameters of three-dimensional strain, including Lode parameter and octahedral shear strain, can be contoured based on confidence intervals for each sample processed. The application of the objective approach is presented in a corresponding poster.
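The bootstrap step of this workflow can be sketched in its simplest form: resample the fitted ellipse axial ratios with replacement and build a percentile confidence interval on their geometric mean. This is a simplified stand-in for the full Rf/ϕ inverse-straining routine, with hypothetical names throughout.

```cpp
// Percentile bootstrap of the geometric-mean axial ratio from a set of
// fitted ellipse ratios (Rf values).
#include <algorithm>
#include <cmath>
#include <random>
#include <utility>
#include <vector>

// Geometric mean is the natural central value for ratio data.
double geometric_mean(const std::vector<double>& rf) {
    double sum_log = 0.0;
    for (double r : rf) sum_log += std::log(r);
    return std::exp(sum_log / rf.size());
}

// Returns {lower, upper} bounds of the (1 - alpha) percentile-bootstrap
// confidence interval for the geometric-mean axial ratio.
std::pair<double, double> bootstrap_ci(const std::vector<double>& rf,
                                       int nboot = 2000, double alpha = 0.05,
                                       unsigned seed = 7) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> pick(0, static_cast<int>(rf.size()) - 1);
    std::vector<double> stats(nboot), sample(rf.size());
    for (int b = 0; b < nboot; ++b) {
        // Resample with replacement, then record the statistic.
        for (double& x : sample) x = rf[pick(rng)];
        stats[b] = geometric_mean(sample);
    }
    std::sort(stats.begin(), stats.end());
    int lo = static_cast<int>(alpha / 2 * nboot);
    int hi = static_cast<int>((1.0 - alpha / 2) * nboot) - 1;
    return {stats[lo], stats[hi]};
}
```

The same resampling pattern applies whether the bootstrapped objects are Fry coordinates or digitized ellipse outlines; only the statistic computed per resample changes.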
  4. Abstract: Our herein described combined analysis of the latest neutrino oscillation data presented at the Neutrino2020 conference shows that previous hints for the neutrino mass ordering have significantly decreased, and normal ordering (NO) is favored only at the 1.6σ level. Combined with the χ² map provided by Super-Kamiokande for their atmospheric neutrino data analysis, the hint for NO is at 2.7σ. The CP-conserving value δCP = 180° is within 0.6σ of the global best-fit point. Only if we restrict to inverted mass ordering is CP violation favored at the ∼3σ level. We discuss the origin of these results, which are driven by the new data from the T2K and NOvA long-baseline experiments, and the relevance of the LBL-reactor oscillation frequency complementarity. The previous 2.2σ tension in Δm²₂₁ preferred by KamLAND and solar experiments is also reduced to the 1.1σ level after the inclusion of the latest Super-Kamiokande solar neutrino results. Finally, we present updated allowed ranges for the oscillation parameters and for the leptonic Jarlskog determinant from the global analysis.
  5. The actual failure times of individual components are usually unavailable in many applications. Instead, only aggregate failure-time data are collected by actual users, due to technical and/or economic reasons. When dealing with such data for reliability estimation, practitioners often face the challenges of selecting the underlying failure-time distributions and the corresponding statistical inference methods. So far, only the exponential, normal, gamma and inverse Gaussian distributions have been used in analyzing aggregate failure-time data, due to these distributions having closed-form expressions for such data. However, the limited choices of probability distributions cannot satisfy extensive needs in a variety of engineering applications. PHase-type (PH) distributions are robust and flexible in modeling failure-time data, as they can mimic a large collection of probability distributions of non-negative random variables arbitrarily closely by adjusting the model structures. In this article, PH distributions are utilized, for the first time, in reliability estimation based on aggregate failure-time data. A Maximum Likelihood Estimation (MLE) method and a Bayesian alternative are developed. For the MLE method, an Expectation-Maximization algorithm is developed for parameter estimation, and the corresponding Fisher information is used to construct the confidence intervals for the quantities of interest. For the Bayesian method, a procedure for performingmore »point and interval estimation is also introduced. Numerical examples show that the proposed PH-based reliability estimation methods are quite flexible and alleviate the burden of selecting a probability distribution when the underlying failure-time distribution is general or even unknown.« less