


Search for: All records

Creators/Authors contains: "Hazen, E."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Abstract

    Microparticles, such as microplastics and microfibers, are ubiquitous in marine food webs. Filter-feeding megafauna may be at extreme risk of exposure to microplastics, but neither the amount nor the pathway of microplastic ingestion is well understood. Here, we combine depth-integrated microplastic data from the California Current Ecosystem with high-resolution foraging measurements from 191 tag deployments on blue, fin, and humpback whales to quantify plastic ingestion rates and routes of exposure. We find that baleen whales predominantly feed at depths of 50–250 m, coinciding with the highest measured microplastic concentrations in the pelagic ecosystem. Nearly all (99%) microplastic ingestion is predicted to occur via trophic transfer. We predict that fish-feeding whales are less exposed to microplastic ingestion than krill-feeding whales. Per day, a krill-obligate blue whale may ingest 10 million pieces of microplastic, while a fish-feeding humpback whale likely ingests 200,000 pieces of microplastic. For species struggling to recover from historical whaling alongside other anthropogenic pressures, our findings suggest that the cumulative impacts of multiple stressors require further attention.

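The 99% trophic-transfer figure and the per-day ingestion estimates above follow from a simple exposure model: pieces ingested directly from engulfed water plus pieces ingested inside prey. A back-of-envelope sketch of that structure, with all input numbers purely illustrative (not values from the study):

```python
# Structure of the exposure model: microplastic ingested directly from
# engulfed water vs. indirectly via prey (trophic transfer).
# All input numbers below are illustrative assumptions, not study values.

KRILL_PER_DAY = 1e7       # assumed krill eaten per day by a blue whale
PIECES_PER_KRILL = 1.0    # assumed microplastic pieces per krill
WATER_ENGULFED_M3 = 1e4   # assumed water engulfed per day (m^3)
PIECES_PER_M3 = 10.0      # assumed microplastic concentration at feeding depth

via_prey = KRILL_PER_DAY * PIECES_PER_KRILL
via_water = WATER_ENGULFED_M3 * PIECES_PER_M3
total = via_prey + via_water
trophic_fraction = via_prey / total

print(f"total pieces/day: {total:.2e}")
print(f"fraction via trophic transfer: {trophic_fraction:.1%}")
```

With these placeholder inputs the direct (filtration) route contributes only ~1% of the total, which is why prey contamination, not seawater concentration, dominates the predicted exposure.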
  2. The largest animals are marine filter feeders, but the underlying mechanism of their large size remains unexplained. We measured feeding performance and prey quality to demonstrate how whale gigantism is driven by the interplay of prey abundance and harvesting mechanisms that increase prey capture rates and energy intake. The foraging efficiency of toothed whales that feed on single prey is constrained by the abundance of large prey, whereas filter-feeding baleen whales seasonally exploit vast swarms of small prey at high efficiencies. Given temporally and spatially aggregated prey, filter feeding provides an evolutionary pathway to extremes in body size that are not available to lineages that must feed on one prey at a time. Maximum size in filter feeders is likely constrained by prey availability across space and time. 
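The efficiency argument above is a rate comparison: energy gained per feeding event divided by time per event. A hypothetical sketch (all numbers invented to show the structure, not measured values):

```python
# Illustrative comparison of energy intake rate for two feeding strategies.
# Every number here is a hypothetical placeholder, not a measured value.

def intake_rate(prey_per_event, energy_per_prey_kj, seconds_per_event):
    """Energy intake rate in kJ/s for one feeding strategy."""
    return prey_per_event * energy_per_prey_kj / seconds_per_event

# Toothed whale: one large prey item per chase.
raptorial = intake_rate(prey_per_event=1,
                        energy_per_prey_kj=5_000,
                        seconds_per_event=600)

# Baleen whale: an entire engulfed krill swarm per lunge.
filter_feeding = intake_rate(prey_per_event=500_000,
                             energy_per_prey_kj=4,
                             seconds_per_event=60)

print(raptorial, filter_feeding)
```

The point of the comparison: a filter feeder's intake per event scales with swarm density, so given dense, aggregated prey its rate can exceed the single-prey strategy by orders of magnitude, and it collapses when swarms are absent, matching the seasonal-exploitation picture in the abstract.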
  3. Abstract The Pandora Software Development Kit and algorithm libraries provide pattern-recognition logic essential to the reconstruction of particle interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at ProtoDUNE-SP, a prototype for the Deep Underground Neutrino Experiment far detector. ProtoDUNE-SP, located at CERN, is exposed to a charged-particle test beam. This paper gives an overview of the Pandora reconstruction algorithms and how they have been tailored for use at ProtoDUNE-SP. In complex events with numerous cosmic-ray and beam background particles, the simulated reconstruction and identification efficiency for triggered test-beam particles is above 80% for the majority of particle type and beam momentum combinations. Specifically, simulated 1 GeV/c charged pions and protons are correctly reconstructed and identified with efficiencies of (86.1 ± 0.6)% and (84.1 ± 0.6)%, respectively. The efficiencies measured for test-beam data are shown to be within 5% of those predicted by the simulation.
    Free, publicly-accessible full text available July 1, 2024
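An efficiency quoted as (86.1 ± 0.6)% is typically a count ratio k/n with a binomial uncertainty. A short sketch of that standard calculation (the sample size below is assumed for illustration, not taken from the paper):

```python
import math

# For an efficiency measured as k successes out of n trials, a simple
# binomial uncertainty is sqrt(eff * (1 - eff) / n). The sample size
# here is an assumption chosen to reproduce a ~0.6% error, not a number
# from the ProtoDUNE-SP analysis.

def efficiency(k, n):
    eff = k / n
    err = math.sqrt(eff * (1 - eff) / n)
    return eff, err

eff, err = efficiency(k=2841, n=3300)
print(f"{eff:.1%} +/- {err:.1%}")
```

This also shows why the error shrinks as 1/sqrt(n): quadrupling the simulated sample would roughly halve the quoted ±0.6%.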
  4. Abstract The Large Hadron Collider (LHC) at CERN will undergo major upgrades to increase the instantaneous luminosity up to 5–7.5×10^34 cm^-2 s^-1. This High Luminosity upgrade of the LHC (HL-LHC) will deliver a total of 3000–4000 fb^-1 of proton-proton collisions at a center-of-mass energy of 13–14 TeV. To cope with these challenging environmental conditions, the strip tracker of the CMS experiment will be upgraded using modules with two closely-spaced silicon sensors to provide information to include tracking in the Level-1 trigger selection. This paper describes the performance, in a test beam experiment, of the first prototype module based on the final version of the CMS Binary Chip front-end ASIC before and after the module was irradiated with neutrons. Results demonstrate that the prototype module satisfies the requirements, providing efficient tracking information, after being irradiated with a total fluence comparable to the one expected through the lifetime of the experiment.
    Free, publicly-accessible full text available April 1, 2024
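The two luminosity figures above are consistent: integrating the instantaneous luminosity over the running time gives the delivered dataset. A rough arithmetic check, using the common assumption of ~10^7 seconds of effective physics running per year (an estimate, not a figure from the paper):

```python
# Relating instantaneous luminosity to integrated luminosity.
# 1 fb^-1 corresponds to 1e39 cm^-2; the effective running time per year
# (~1e7 s) is a rough rule-of-thumb assumption.

INST_LUMI = 5e34          # cm^-2 s^-1 (lower end of the HL-LHC target)
SECONDS_PER_YEAR = 1.0e7  # assumed effective physics-running time per year
CM2_PER_INV_FB = 1e39

per_year_fb = INST_LUMI * SECONDS_PER_YEAR / CM2_PER_INV_FB
print(f"~{per_year_fb:.0f} fb^-1 per year")
print(f"~{per_year_fb * 8:.0f} fb^-1 over ~8 years of running")
```

At ~500 fb^-1 per year, roughly a decade of HL-LHC operation lands in the 3000–4000 fb^-1 range quoted in the abstract.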
  5. Free, publicly-accessible full text available June 1, 2024
  6. Free, publicly-accessible full text available May 1, 2024
  8. Abstract The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
    Free, publicly-accessible full text available April 1, 2024
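The speed-up described above comes from expressing the per-pixel current calculation as array operations over all channels and charge packets at once, which maps directly onto GPU kernels. A minimal NumPy-only sketch of that vectorization idea (the Gaussian response and all sizes here are stand-ins, not the detector model or API of the actual simulator):

```python
import numpy as np

# Sketch of the vectorization idea: compute the signal induced by many
# drifting charge packets on ~10^3 pixels in one broadcast operation,
# instead of looping per channel. The Gaussian response is a placeholder
# for the real induced-current model.

rng = np.random.default_rng(0)
n_pixels, n_charges = 1000, 5000

pixel_x = np.linspace(0.0, 1.0, n_pixels)    # pixel centre positions
charge_x = rng.uniform(0.0, 1.0, n_charges)  # charge packet positions
charge_q = rng.uniform(0.5, 1.5, n_charges)  # packet charge

# (n_pixels, n_charges) response matrix built in one broadcast step.
sigma = 0.01
response = np.exp(-0.5 * ((pixel_x[:, None] - charge_x[None, :]) / sigma) ** 2)
current = response @ charge_q                # induced signal per pixel

print(current.shape)
```

In the real chain this inner computation is compiled to CUDA kernels with Numba rather than executed through NumPy, but the key design choice is the same: one data-parallel pass over all pixel-charge pairs, with no Python-level loop per channel.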