
Title: Characterizing the dark count rate of a large-format MKID array

We present an empirical measurement of the dark count rate in a large-format MKID array identical to those currently in use at observatories such as Subaru on Maunakea. This work provides compelling evidence for their utility in future experiments that require low-count-rate, quiet environments, such as dark matter direct detection. Across the bandpass from 0.946-1.534 eV (1310-808 nm), an average count rate of (1.847 ± 0.003) × 10^-3 photons/pixel/s is measured. Breaking this bandpass into five equal-energy bins based on the resolving power of the detectors, we find the average dark count rate in an MKID is (6.26 ± 0.04) × 10^-4 photons/pixel/s from 0.946-1.063 eV and (2.73 ± 0.02) × 10^-4 photons/pixel/s from 1.416-1.534 eV. Using lower-noise readout electronics to read out a single MKID pixel, we demonstrate that the events measured while the detector is not illuminated largely appear to be a combination of real photons, possible fluorescence caused by cosmic rays, and phonon events in the array substrate. With this single-pixel readout we measure a dark count rate of (9.3 ± 0.9) × 10^-4 photons/pixel/s over the same bandpass (0.946-1.534 eV). We also characterize the events seen when the detectors are not illuminated and show that these MKID responses are distinct from photons from known light sources such as a laser, likely originating from cosmic-ray excitations.
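The quoted rates follow the usual counts/(pixels × time) estimate with a Poisson error bar. A minimal sketch, using hypothetical totals (not from the paper) chosen to land near the reported array-averaged rate:

```python
import math

def dark_count_rate(total_counts, n_pixels, exposure_s):
    """Per-pixel dark count rate with a Poisson (sqrt(N)) uncertainty.

    total_counts : total dark events summed over all pixels
    n_pixels     : number of live MKID pixels
    exposure_s   : integration time in seconds
    """
    rate = total_counts / (n_pixels * exposure_s)
    err = math.sqrt(total_counts) / (n_pixels * exposure_s)
    return rate, err

# Hypothetical numbers: 20,000 pixels observed for 3,600 s would need
# ~1.33e5 total events to give a rate near the reported ~1.847e-3
# photons/pixel/s.
rate, err = dark_count_rate(total_counts=133_000, n_pixels=20_000,
                            exposure_s=3_600)
```

The sqrt(N) error bar assumes the dark events are uncorrelated (Poisson), which is the standard assumption for count-rate measurements of this kind.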

Journal Name: Optics Express
Article No. 10775
ISSN: 1094-4087; CODEN: OPEXFF
Publisher: Optical Society of America
Sponsoring Org: National Science Foundation
More Like this
  1. Optical projection tomography (OPT) is a powerful imaging modality for attaining high-resolution absorption and fluorescence imaging in tissue samples and embryos with a diameter of roughly 1 mm. Moving past this 1 mm limit, scattered light becomes the dominant fraction detected, adding significant "blur" to OPT. Time-domain OPT has been used to select out early-arriving photons that have taken a more direct route through the tissue, reducing detection of the scattered photons that cause image-domain blur in these larger samples [1]. In addition, it was recently demonstrated by our group that detection of scattered photons could be further depressed by running in a "deadtime" regime, where laser repetition rates are selected such that the deadtime incurred by early-arriving photons acts as a shutter to later-arriving scattered photons [2]. By running in this deadtime regime, far greater early-photon count rates are achievable than with standard early-photon OPT. In this work, another advantage of this enhanced early-photon collection approach is demonstrated: specifically, a significant improvement in signal-to-noise ratio. In single-photon counting detectors, the main source of noise is "afterpulsing," which is essentially leftover charge from a detected photon that spuriously results in a second photon count. When the arrival of the photons is time-stamped by the time-correlated single photon counting (TCSPC) module, the rate constant governing afterpulsing is slow compared to the time scale of the detected light pulse, so it is observed as a background signal with very little time correlation. This signal is present in all time gates and so adds noise to the detection of early photons.
However, since the afterpulsing signal is proportional to the total rate of photon detection, our enhanced early-photon approach is uniquely able to increase early-photon counts with no appreciable increase in afterpulsing, because the overall count rate does not change: as the rate of early-photon detection goes up, the rate of late-photon detection decreases commensurately, yielding no net change in the overall rate of photons detected. This hypothesis was tested on a 4 mm diameter tissue-mimicking phantom (μa = 0.02 mm⁻¹, μs′ = 1 mm⁻¹) by varying the power of a 10 MHz pulsed 780 nm laser with pulse spread of < 100 fs (Calmar, USA), using an avalanche photodiode (MPD, Picoquant, Germany) and a TCSPC module (HydraHarp, Picoquant, Germany) for light detection. Details of the results are in Fig. 1a; of note, we observed more than a 60-times improvement in SNR compared to conventional early-photon detection, which would have taken 1000 times longer to achieve the same early-photon count. A demonstration of the type of resolution possible is in Fig. 1b, an image of a 4-mm-thick human breast cancer biopsy in which tumor spiculations of less than 100 μm diameter are observable. [1] Fieramonti, L. et al., PLoS ONE (2012). [2] Sinha, L. et al., Optics Letters (2016).
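The argument above can be sketched numerically: if the afterpulsing background scales with the total count rate, then boosting the early-photon fraction at a fixed total rate raises the signal without raising that background. A toy shot-noise model with illustrative numbers (none taken from the paper):

```python
import math

def snr(early_rate, total_rate, t_acq, afterpulse_prob=0.01):
    """Shot-noise SNR of early-photon counts over an afterpulsing
    background assumed proportional to the TOTAL detection rate."""
    signal = early_rate * t_acq                      # early-photon counts
    background = afterpulse_prob * total_rate * t_acq  # afterpulse counts
    return signal / math.sqrt(signal + background)

total = 1e6  # counts/s, fixed by the detector deadtime (illustrative)

# Conventional early-photon gating: only a small fraction arrives early.
conventional = snr(early_rate=1e3, total_rate=total, t_acq=1.0)

# Deadtime regime: essentially all detections are early photons,
# while the total rate (and hence afterpulsing) is unchanged.
deadtime_regime = snr(early_rate=total, total_rate=total, t_acq=1.0)

gain = deadtime_regime / conventional
```

With these made-up rates the SNR gain is roughly 100×, illustrating (not reproducing) the 60× improvement reported above; the exact factor depends on the early-photon fraction and afterpulse probability.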
  2. Zmuidzinas, Jonas; Gao, Jian-Rong (Eds.)
    The Cosmology Large Angular Scale Surveyor (CLASS) is a polarization-sensitive telescope array located at an altitude of 5,200 m in the Chilean Atacama Desert. CLASS is designed to measure "E-mode" (even parity) and "B-mode" (odd parity) polarization patterns in the Cosmic Microwave Background (CMB) over large angular scales with the aim of improving our understanding of inflation, reionization, and dark matter. CLASS is currently observing with three telescopes covering four frequency bands: one at 40 GHz (Q); one at 90 GHz (W1); and one dichroic system at 150/220 GHz (G). In these proceedings, we discuss the updated design and in-lab characterization of new 90 GHz detectors. The new detectors include design changes to the transition-edge sensor (TES) bolometer architecture, which aim to improve stability and optical efficiency. We assembled and tested four new detector wafers, to replace four modules of the W1 focal plane. These detectors were installed into the W1 telescope, and will achieve first light in the austral winter of 2022. We present electrothermal parameters and bandpass measurements from in-lab dark and optical testing. From in-lab dark tests, we also measure a median NEP of 12.3 aW√s across all four wafers over the CLASS signal band, which is below the expected photon NEP of 32 aW√s from the field. We therefore expect the new detectors to be photon noise limited.
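Assuming the detector and photon noise contributions add in quadrature (the standard treatment for independent noise sources), the measured dark NEP adds only a small penalty over the expected photon NEP. A quick check using the numbers quoted above:

```python
import math

# NEP values from the abstract above, in aW*sqrt(s).
nep_dark = 12.3    # median detector NEP from in-lab dark tests
nep_photon = 32.0  # expected photon NEP in the field

# Quadrature sum, assuming the two noise sources are independent.
nep_total = math.sqrt(nep_dark**2 + nep_photon**2)

# Fractional penalty of total NEP over the photon-noise floor.
excess = nep_total / nep_photon - 1.0
```

The excess comes out well under 10%, consistent with the claim that the detectors will be photon noise limited.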
  3. Ultra-high-energy (UHE) photons are an important tool for studying the high-energy Universe. A plausible source of photons with exa-eV (EeV) energy is provided by UHE cosmic rays (UHECRs) undergoing the Greisen–Zatsepin–Kuzmin process (Greisen 1966; Zatsepin & Kuzmin 1966) or pair production process (Blumenthal 1970) on a cosmic background radiation. In this context, the EeV photons can be a probe of both UHECR mass composition and the distribution of their sources (Gelmini, Kalashev & Semikoz 2008; Hooper, Taylor & Sarkar 2011). At the same time, the possible flux of photons produced by UHE protons in the vicinity of their sources by pion photoproduction or inelastic nuclear collisions would be noticeable only for relatively near sources, as the attenuation length of UHE photons is smaller than that of UHE protons; see, for example, Bhattacharjee & Sigl (2000) for a review. There also exists a class of so-called top-down models of UHECR generation that efficiently produce the UHE photons, for instance by the decay of heavy dark-matter particles (Berezinsky, Kachelriess & Vilenkin 1997; Kuzmin & Rubakov 1998) or by the radiation from cosmic strings (Berezinsky, Blasi & Vilenkin 1998). The search for the UHE photons was shown to be the most sensitive method of indirect detection of heavy dark matter (Kalashev & Kuznetsov 2016, 2017; Kuznetsov 2017; Kachelriess, Kalashev & Kuznetsov 2018; Alcantara, Anchordoqui & Soriano 2019). Another fundamental physics scenario that could be tested with UHE photons (Fairbairn, Rashba & Troitsky 2011) is the photon mixing with axion-like particles (Raffelt & Stodolsky 1988), which could be responsible for the correlation of UHECR events with BL Lac type objects observed by the High Resolution Fly's Eye (HiRes) experiment (Gorbunov et al. 2004; Abbasi et al. 2006).
In most of these scenarios, a clustering of photon arrival directions, rather than a diffuse distribution, is expected, so point-source searches can be a suitable test for photon/axion-like-particle mixing models. Finally, UHE photons could also be used as a probe for models of Lorentz-invariance violation (Coleman & Glashow 1999; Galaverni & Sigl 2008; Maccione, Liberati & Sigl 2010; Rubtsov, Satunin & Sibiryakov 2012, 2014). The Telescope Array (TA; Tokuno et al. 2012; Abu-Zayyad et al. 2013c) is the largest cosmic ray experiment in the Northern Hemisphere. It is located at 39.3° N, 112.9° W in Utah, USA. The observatory includes a surface detector array (SD) and 38 fluorescence telescopes grouped into three stations. The SD consists of 507 stations that contain plastic scintillators, each with an area of 3 m² (SD stations). The stations are placed in a square grid with 1.2 km spacing and cover an area of ∼700 km². The TA SD is capable of detecting extensive air showers (EASs) in the atmosphere caused by cosmic particles of EeV and higher energies. The TA SD has been operating since 2008 May. A hadron-induced EAS significantly differs from an EAS induced by a photon because the depth of the shower maximum X_max for a photon shower is larger, and a photon shower contains fewer muons and has a more curved front (see Risse & Homola 2007 for a review). The TA SD stations are sensitive to both muon and electromagnetic components of the shower and therefore can be triggered by both hadron-induced and photon-induced EAS events. In the present study, we use 9 yr of TA SD data for a blind search for point sources of UHE photons. We utilize the statistics of the SD data, which benefit from a high duty cycle. The full Monte Carlo (MC) simulation of proton-induced and photon-induced EAS events allows us to perform the photon search up to the highest accessible energies, E ≳ 10^20 eV.
As the main tool for the present photon search, we use a multivariate analysis based on a number of SD parameters that make it possible to distinguish between photon and hadron primaries. While searches for diffuse UHE photons were performed by several EAS experiments, including Haverah Park (Ave et al. 2000), AGASA (Shinozaki et al. 2002; Risse et al. 2005), Yakutsk (Rubtsov et al. 2006; Glushkov et al. 2007, 2010), Pierre Auger (Abraham et al. 2007, 2008a; Bleve 2016; Aab et al. 2017c) and TA (Abu-Zayyad et al. 2013b; Abbasi et al. 2019a), the search for point sources of UHE photons has been done only by the Pierre Auger Observatory (Aab et al. 2014, 2017a). The latter searches were based on hybrid data and were limited to the 10^17.3 < E < 10^18.5 eV energy range. In the present paper, we use the TA SD data alone. We perform the searches in five energy ranges: E > 10^18, E > 10^18.5, E > 10^19, E > 10^19.5 and E > 10^20 eV. We find no significant evidence of photon point sources in any of the energy ranges, and we set point-source flux upper limits from each direction in the TA field of view (FOV). A search for unspecified neutral particles was also previously performed by the TA (Abbasi et al. 2015). The limit on the point-source flux of neutral particles obtained in that work is close to the present photon point-source flux limits.
  4. We report on a measurement of the cosmic-ray composition by the Telescope Array Low-energy Extension (TALE) air fluorescence detector (FD). By making use of the Cherenkov light signal in addition to air fluorescence light from cosmic-ray (CR)-induced extensive air showers, the TALE FD can measure the properties of the cosmic rays with energies as low as ~2 PeV and exceeding 1 EeV. In this paper, we present results on the measurement of X_max distributions of showers observed over this energy range. Data collected over a period of ~4 yr were analyzed for this study. The resulting X_max distributions are compared to the Monte Carlo (MC) simulated data distributions for primary cosmic rays with varying composition and a four-component fit is performed. The comparison and fit are performed for energy bins, of width 0.1 or 0.2 in log10(E/eV), spanning the full range of the measured energies. We also examine the mean X_max value as a function of energy for cosmic rays with energies greater than 10^15.8 eV. Below 10^17.3 eV, the slope of the mean X_max as a function of energy (the elongation rate) for the data is significantly smaller than that of all elements in the models, indicating that the composition is becoming heavier with energy in this energy range. This is consistent with a rigidity-dependent cutoff of events from Galactic sources. Finally, an increase in the X_max elongation rate is observed at energies just above 10^17 eV, indicating another change in the cosmic-ray composition.
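The elongation rate quoted above is the slope of the mean X_max versus log10(E/eV), i.e. grams per square centimeter of additional shower depth per decade of energy. A minimal least-squares sketch with illustrative (non-TALE) points:

```python
# Fit mean X_max (g/cm^2) against log10(E/eV); the slope is the
# elongation rate. The data points below are illustrative only.
log10_e = [15.9, 16.3, 16.7, 17.1, 17.5]
mean_xmax = [560.0, 580.0, 600.0, 620.0, 640.0]

n = len(log10_e)
xbar = sum(log10_e) / n
ybar = sum(mean_xmax) / n

# Ordinary least-squares slope: cov(x, y) / var(x).
slope = sum((x - xbar) * (y - ybar)
            for x, y in zip(log10_e, mean_xmax)) \
        / sum((x - xbar) ** 2 for x in log10_e)

elongation_rate = slope  # g/cm^2 per decade of energy
```

A change in the fitted slope between energy bins, as reported above near 10^17 eV, is what signals a change in composition.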
  5. Abstract IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of data from IceCube. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a Graph Neural Network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN offers a reduction of the background (i.e. false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, is capable of processing IceCube events at nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low energy neutrinos in online searches for transient events.