Title: Novel approach for evaluating detector-related uncertainties in a LArTPC using MicroBooNE data
Abstract: Primary challenges for current and future precision neutrino experiments using liquid argon time projection chambers (LArTPCs) include understanding detector effects and quantifying the associated systematic uncertainties. This paper presents a novel technique for assessing and propagating LArTPC detector-related systematic uncertainties. The technique makes modifications to simulation waveforms based on a parameterization of observed differences in ionization signals from the TPC between data and simulation, while remaining insensitive to the details of the detector model. The modifications are then used to quantify the systematic differences in low- and high-level reconstructed quantities. This approach could be applied to future LArTPC detectors, such as those used in SBN and DUNE.
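To make this concrete, here is a minimal Python sketch of the waveform-modification idea: simulated pulses are rescaled and stretched according to a data/simulation ratio parameterized in a detector coordinate. The parameterization functions, the coordinate dependence, and the Gaussian toy pulse are all illustrative assumptions, not the parameterization measured in MicroBooNE.

```python
# Minimal sketch of the waveform-modification technique: rescale simulated
# ionization waveforms by a data/simulation ratio parameterized in a
# detector coordinate x. The ratio functions and the Gaussian pulse are
# invented stand-ins, not the MicroBooNE measurement.
import numpy as np

def amplitude_ratio(x):
    """Hypothetical data/MC pulse-amplitude ratio vs. coordinate x (cm)."""
    return 1.0 + 0.05 * np.tanh((x - 128.0) / 50.0)

def width_ratio(x):
    """Hypothetical data/MC pulse-width ratio vs. coordinate x (cm)."""
    return 1.0 + 0.02 * (x / 256.0)

def modify_waveform(ticks, waveform, x):
    """Scale the pulse amplitude and stretch its width in time so the
    simulated signal matches the parameterized data/MC differences."""
    t0 = ticks[np.argmax(waveform)]  # pulse peak time
    # Sample the original waveform on a time axis stretched about the peak.
    stretched = np.interp(ticks, t0 + (ticks - t0) * width_ratio(x), waveform)
    return amplitude_ratio(x) * stretched

# Toy simulated pulse: Gaussian signal on a wire at x = 100 cm.
ticks = np.arange(200, dtype=float)
sim = 40.0 * np.exp(-0.5 * ((ticks - 100.0) / 4.0) ** 2)
modified = modify_waveform(ticks, sim, x=100.0)
print(f"peak before: {sim.max():.1f} ADC, after: {modified.max():.1f} ADC")
```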
Authors:
Award ID(s):
1913983, 1801996
Publication Date:
NSF-PAR ID:
10336217
Journal Name:
The European Physical Journal C
Volume:
82
Issue:
5
ISSN:
1434-6052
Sponsoring Org:
National Science Foundation
More Like this
1. Abstract: Jet energy scale and resolution measurements with their associated uncertainties are reported for jets using 36–81 fb$^{-1}$ of proton–proton collision data with a centre-of-mass energy of $\sqrt{s} = 13$ TeV collected by the ATLAS detector at the LHC. Jets are reconstructed using two different input types: topo-clusters formed from energy deposits in calorimeter cells, as well as an algorithmic combination of charged-particle tracks with those topo-clusters, referred to as the ATLAS particle-flow reconstruction method. The anti-$k_t$ jet algorithm with radius parameter $R = 0.4$ is the primary jet definition used for both jet types. This result presents new jet energy scale and resolution measurements in the high pile-up conditions of late LHC Run 2, as well as a full calibration of particle-flow jets in ATLAS. Jets are initially calibrated using a sequence of simulation-based corrections. Next, several in situ techniques are employed to correct for differences between data and simulation and to measure the resolution of jets. The systematic uncertainties in the jet energy scale for central jets ($|\eta| < 1.2$) vary from 1% for a wide range of high-$p_{\mathrm{T}}$ jets ($250~\mathrm{GeV} < p_{\mathrm{T}} < 2.5~\mathrm{TeV}$), with larger uncertainties for jets above this range ($p_{\mathrm{T}} > 2.5~\mathrm{TeV}$). The relative jet energy resolution is measured and ranges from $(24 \pm 1.5)\%$ at 20 GeV to $(6 \pm 0.5)\%$ at 300 GeV.
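As a rough illustration of the two-stage calibration sequence this abstract describes, the Python sketch below applies a simulation-based response correction followed by an in situ residual correction. The response and residual parameterizations (mc_response, in_situ_residual) are invented toy functions, not the ATLAS calibration.

```python
# Sketch of a two-stage jet energy calibration: undo the average MC
# response, then apply an in situ data/MC residual correction. The
# parameterizations below are toys for illustration only.
import numpy as np

def mc_response(pt_reco):
    """Toy average reco/true jet response from simulation."""
    return 0.95 + 0.015 * np.log10(pt_reco / 20.0)

def in_situ_residual(pt):
    """Toy data/MC response ratio measured in situ (e.g. Z+jet balance)."""
    return 0.99 + 0.005 * np.log10(pt / 20.0)

def calibrate(pt_reco):
    """Apply simulation-based then in situ corrections to a reco jet pT."""
    pt = pt_reco / mc_response(pt_reco)   # simulation-based correction
    return pt / in_situ_residual(pt)      # in situ residual correction

for pt in (20.0, 100.0, 1000.0):
    print(f"reco pT {pt:7.1f} GeV -> calibrated {calibrate(pt):7.1f} GeV")
```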
2. Abstract: The MicroBooNE liquid argon time projection chamber located at Fermilab is a neutrino experiment dedicated to the study of short-baseline oscillations, to measurements of neutrino cross sections in liquid argon, and to the research and development of this novel detector technology. Accurate and precise calorimetric measurements are essential to event reconstruction and are achieved by leveraging the TPC to measure deposited energy per unit length along the particle trajectory, with mm resolution. We describe the non-uniform calorimetric reconstruction performance in the detector, showing its dependence on the angle of the particle trajectory. Such non-uniform reconstruction directly affects the performance of the particle identification algorithms, which infer particle type from calorimetric measurements. This work presents a new particle identification method which accounts for and effectively addresses such non-uniformity. The newly developed method shows improved performance compared to previous algorithms, illustrated by a 93.7% proton selection efficiency and a 10% muon mis-identification rate, with a fairly loose selection of tracks performed on beam data. The performance is further demonstrated by identifying exclusive final states in $\nu_\mu$ CC interactions. While developed using MicroBooNE data and simulation, this method is easily applicable to future LArTPC experiments, such as SBND, ICARUS, and DUNE.
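A minimal sketch of calorimetric particle identification in this spirit: compare measured dE/dx versus residual range against an expected proton Bragg curve using a per-hit chi-square. The power-law proton expectation and the flat 10% resolution are illustrative assumptions; the method described above additionally corrects for the track-angle dependence, which this sketch omits.

```python
# Sketch of chi-square-based PID against a proton dE/dx hypothesis.
# The A * R^b Bragg-curve parameterization and 10% resolution are
# illustrative stand-ins, not the MicroBooNE algorithm.
import numpy as np

def expected_proton_dedx(residual_range_cm):
    """Toy Bragg-curve parameterization: dE/dx (MeV/cm) vs residual range."""
    return 17.0 * residual_range_cm ** -0.42

def pid_chi2(dedx, rr, rel_resolution=0.10):
    """Chi-square per hit of measured dE/dx against the proton hypothesis."""
    expected = expected_proton_dedx(rr)
    sigma = rel_resolution * expected
    return np.sum(((dedx - expected) / sigma) ** 2) / len(dedx)

# Toy tracks: hits sampled along 5-30 cm of residual range.
rr = np.linspace(5.0, 30.0, 20)
proton_like = expected_proton_dedx(rr) * np.random.normal(1.0, 0.1, rr.size)
muon_like = np.full(rr.size, 2.1)  # roughly flat, MIP-like dE/dx

print("chi2/hit, proton hypothesis, proton track:", pid_chi2(proton_like, rr))
print("chi2/hit, proton hypothesis, muon track:  ", pid_chi2(muon_like, rr))
```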
3. Abstract: This paper describes a study of techniques for identifying Higgs bosons at high transverse momenta decaying into bottom-quark pairs, $H \rightarrow b\bar{b}$, for proton–proton collision data collected by the ATLAS detector at the Large Hadron Collider at a centre-of-mass energy of $\sqrt{s} = 13$ TeV. These decays are reconstructed from calorimeter jets found with the anti-$k_t$ $R = 1.0$ jet algorithm. To tag Higgs bosons, a combination of requirements is used: b-tagging of $R = 0.2$ track-jets matched to the large-$R$ calorimeter jet, and requirements on the jet mass and other jet substructure variables. The Higgs boson tagging efficiency and corresponding multijet and hadronic top-quark background rejections are evaluated using Monte Carlo simulation. Several benchmark tagging selections are defined for different signal efficiency targets. The modelling of the relevant input distributions used to tag Higgs bosons is studied in 36 fb$^{-1}$ of data collected in 2015 and 2016 using $g \rightarrow b\bar{b}$ and $Z(\rightarrow b\bar{b})\gamma$ event selections in data. Both processes are found to be well modelled within the statistical and systematic uncertainties.
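For intuition, here is a hedged Python sketch of a boosted Higgs tagging selection of the kind described: at least two b-tagged $R = 0.2$ track-jets matched to the large-$R$ jet, plus a jet-mass window. The data structures, b-tag threshold, mass window, and pT cut are placeholder values, not the ATLAS benchmark working points.

```python
# Sketch of a boosted H->bb tagging selection: require two b-tagged
# matched track-jets plus jet mass and pT requirements. All cut values
# are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrackJet:
    pt: float          # GeV
    btag_score: float  # b-tagging discriminant output in [0, 1]

@dataclass
class LargeRJet:
    pt: float    # GeV
    mass: float  # GeV
    track_jets: List[TrackJet] = field(default_factory=list)  # matched R=0.2 jets

def is_higgs_tagged(jet, btag_cut=0.8, mass_window=(75.0, 145.0), min_pt=250.0):
    """Tag if the large-R jet is energetic, falls in the Higgs mass window,
    and has at least two b-tagged matched track-jets."""
    n_btags = sum(tj.btag_score > btag_cut for tj in jet.track_jets)
    return (jet.pt > min_pt
            and mass_window[0] < jet.mass < mass_window[1]
            and n_btags >= 2)

candidate = LargeRJet(pt=450.0, mass=120.0,
                      track_jets=[TrackJet(80.0, 0.95), TrackJet(60.0, 0.90)])
print("Higgs-tagged:", is_higgs_tagged(candidate))
```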
4. Abstract: We perform the first simultaneous Bayesian parameter inference and optimal reconstruction of the gravitational lensing of the cosmic microwave background (CMB), using 100 deg$^2$ of polarization observations from the SPTpol receiver on the South Pole Telescope. These data reach noise levels as low as 5.8 $\mu$K-arcmin in polarization, which are low enough that the typically used quadratic estimator (QE) technique for analyzing CMB lensing is significantly suboptimal. Conversely, the Bayesian procedure extracts all lensing information from the data and is optimal at any noise level. We infer the amplitude of the gravitational lensing potential to be $A_\phi = 0.949 \pm 0.122$ using the Bayesian pipeline, consistent with our QE pipeline result, but with 17% smaller error bars. The Bayesian analysis also provides a simple way to account for systematic uncertainties, performing a similar job as frequentist "bias hardening" or linear bias correction, and reducing the systematic uncertainty on $A_\phi$ due to polarization calibration from almost half of the statistical error to effectively zero. Finally, we jointly constrain $A_\phi$ along with $A_L$, the amplitude of lensing-like effects on the CMB power spectra, demonstrating that the Bayesian method can be used to easily infer parameters both from an optimal lensing reconstruction and from the delensed CMB, while exactly accounting for the correlation between the two. These results demonstrate the feasibility of the Bayesian approach on real data, and pave the way for future analysis of deep CMB polarization measurements with SPT-3G, Simons Observatory, and CMB-S4, where improvements relative to the QE can reach 1.5 times tighter constraints on $A_\phi$ and seven times lower effective lensing reconstruction noise.
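The contrast between a point estimator and a full posterior can be illustrated with a deliberately minimal toy: a one-parameter Gaussian likelihood for a lensing amplitude evaluated on a grid. Everything below (the band powers, noise levels, and single-amplitude model) is a stand-in assumption, not the SPTpol Bayesian pipeline.

```python
# Toy grid posterior for an amplitude parameter A given noisy band powers
# that scale a fixed template. Illustrative only; the real analysis infers
# A_phi jointly with the lensing reconstruction.
import numpy as np

rng = np.random.default_rng(0)

template = np.array([1.0, 0.8, 0.6, 0.4, 0.3])  # fiducial band powers
sigma = 0.1 * np.ones_like(template)            # per-band noise
data = 0.95 * template + rng.normal(0.0, sigma) # toy "observed" band powers

# Grid posterior for the amplitude with a flat prior.
grid = np.linspace(0.5, 1.5, 2001)
dx = grid[1] - grid[0]
loglike = np.array([-0.5 * np.sum(((data - a * template) / sigma) ** 2)
                    for a in grid])
post = np.exp(loglike - loglike.max())
post /= post.sum() * dx  # normalize to a probability density

mean = np.sum(grid * post) * dx
std = np.sqrt(np.sum((grid - mean) ** 2 * post) * dx)
print(f"amplitude = {mean:.3f} +/- {std:.3f}")
```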
5. The Standards for educational and psychological assessment were developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (AERA et al., 2014). The Standards specify that assessment developers establish five types of validity evidence: test content, response processes, internal structure, relationship to other variables, and consequential/bias. Relevant to this proposal is consequential validity evidence that identifies the potential negative impact of testing or bias. Standard 3.1 of The Standards (2014) on fairness in testing states that "those responsible for test development, revision, and administration should design all steps of the testing process to promote valid score interpretations for intended score uses for the widest possible range of individuals and relevant sub-groups in the intended populations" (p. 63). Three types of bias include construct, method, and item bias (Boer et al., 2018). Testing for differential item functioning (DIF) is a standard analysis adopted to detect item bias against a subgroup (Boer et al., 2018). Example subgroups include gender, race/ethnic group, socioeconomic status, native language, or disability. DIF occurs when "equally able test takers differ in their probabilities answering a test item correctly as a function of group membership" (AERA et al., 2005, p. 51). DIF indicates systematic error, as compared to real mean group differences (Camilli & Shepard, 1994). Items exhibiting significant DIF are removed or reviewed for sources leading to bias to determine modifications to retain and further test an item. The Delphi technique is an emergent systematic research method whereby expert panel members review item content through an iterative process (Yildirim & Büyüköztürk, 2018). Experts independently evaluate each item for potential sources leading to DIF, researchers group their responses, and experts then independently complete a survey to rate their level of agreement with the anonymously grouped responses. This process continues until saturation and consensus are reached among experts, as established through some criterion (e.g., median agreement rating, item quartile range, and percent agreement). The technique allows researchers to "identify, learn, and share the ideas of experts by searching for agreement among experts" (Yildirim & Büyüköztürk, 2018, p. 451). Research has illustrated this technique applied after DIF is detected, but not before administering items in the field. The current research is a methodological illustration of the Delphi technique applied in the item construction phase of assessment development, as part of a five-year study to develop and test new problem-solving measures (PSM; Bostic et al., 2015, 2017) for U.S.A. grades 6–8 in a computer adaptive testing environment. As part of an iterative design-science-based methodology (Middleton et al., 2008), we illustrate the integration of the Delphi technique into the item writing process. Results from two three-person panels each reviewing a set of 45 PSM items are utilized to illustrate the technique. Advantages and limitations identified through a survey by participating experts and researchers are outlined to advance the method.
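As a small worked example of the consensus criteria mentioned above (median agreement rating, item quartile range, percent agreement), the Python sketch below flags whether a panel's ratings for an item reach consensus. The Likert scale and the threshold values are hypothetical, not those used in the study.

```python
# Sketch of a Delphi consensus check combining three criteria: a high
# median rating, a tight interquartile range, and a minimum fraction of
# experts agreeing. Thresholds are hypothetical illustrations.
import numpy as np

def delphi_consensus(ratings, agree_at=4, min_pct=0.75, max_iqr=1.0):
    """Return True if expert ratings (e.g. a 1-5 Likert scale) reach
    consensus under all three criteria."""
    ratings = np.asarray(ratings)
    median = np.median(ratings)
    q1, q3 = np.percentile(ratings, [25, 75])
    pct_agree = np.mean(ratings >= agree_at)
    return median >= agree_at and (q3 - q1) <= max_iqr and pct_agree >= min_pct

# Ratings from a three-person panel for two hypothetical PSM items.
print(delphi_consensus([5, 4, 4]))  # True: high median, tight spread
print(delphi_consensus([5, 2, 4]))  # False: interquartile range too wide
```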