

Search for: All records

Creators/Authors contains: "Schwarz, D."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. While microplastics (MPs) are globally prevalent in marine environments, extending to the Arctic and sub-Arctic regions, the extent and distribution of MPs in terrestrial waters, drinking water sources, and recreational water in these areas remain unknown. This field study establishes a baseline for MPs in surface water sources, including lakes, rivers, and creeks, as well as in snow, across three geo-locations (Far North, Interior, and Southcentral) in Alaska. Results (mean ± SE) show that the highest MP counts occur in snow (681 ± 45 L⁻¹), followed by lakes (361 ± 76 L⁻¹), creeks (377 ± 88 L⁻¹), and rivers (359 ± 106 L⁻¹). The smallest MPs also occurred in snow (90.6 ± 4 μm), with progressively larger sizes in lakes (203.9 ± 65 μm), creeks (382.8 ± 136.5 μm), and rivers (455.4 ± 212 μm). The physical morphology of MPs varies widely: MP fragments are predominant (roughly 62–74%) at these sites, while MP fibers (roughly 13–21%), pellets (roughly 13–18%), and films (<6%) also occur in appreciable quantities. By geolocation, the Far North, where MPs were collected from off-road locations, shows the highest MP counts (695 ± 58 L⁻¹), compared with Interior (473 ± 64 L⁻¹) and Southcentral (447 ± 62 L⁻¹) Alaska. Results also indicate that the occurrence of MPs in the source waters and snow decreases with increasing distance from the nearest coastlines and towns or communities. These baseline observations of MPs in terrestrial waters and precipitation across Alaska indicate MP pollution even in less-explored environments, a cause for concern regarding MP exposure and risks in the region and beyond.
    Free, publicly-accessible full text available May 14, 2025
  2. Context. High-precision pulsar timing depends on precise and accurate modelling of any effects that can potentially impact the data. In particular, effects that contain stochastic elements contribute some level of corruption and complexity to the analysis of pulsar-timing data. It has been shown that commonly used solar wind models do not accurately account for variability in the amplitude of the solar wind on either short or long timescales. Aims. In this study, we test and validate a new, cutting-edge solar wind modelling method included in the enterprise software suite (widely used for pulsar noise analysis) through extended simulations, and we use it to investigate temporal variability in LOFAR data. Our model-testing scheme is in itself an invaluable asset for pulsar timing array (PTA) experiments: since improperly accounting for the solar wind signature in pulsar data can induce false-positive signals, it is of fundamental importance to include it in any such investigations. Methods. We employed a Bayesian approach utilising a continuously varying Gaussian process to model the solar wind, based on a spherical approximation whose electron density amplitude it modulates. This method, which we refer to as a solar wind Gaussian process (SWGP), has been integrated into existing noise-analysis software, specifically enterprise. Validation of this model was performed through simulations. We then conducted noise analysis on eight pulsars from the LOFAR dataset, most with a time span of ∼11 years encompassing one full solar activity cycle. Furthermore, we derived electron densities from the dispersion measure values obtained with the SWGP model. Results. Our analysis reveals a strong correlation between the electron density at 1 AU and the ecliptic latitude (ELAT) of the pulsar: pulsars with |ELAT| < 3° exhibit significantly higher average electron densities. We also observed distinct temporal patterns in the electron densities of different pulsars. In particular, pulsars within |ELAT| < 3° exhibit similar temporal variations, while the electron densities of those outside this range correlate with the solar activity cycle. Notably, some pulsars are sensitive to the solar wind up to 45° away from the Sun in LOFAR data. Conclusions. The continuous variability in electron density offered by this model represents a substantial improvement over previous models, which assume a single value for piece-wise bins of time. This advancement holds promise for solar wind modelling in future International Pulsar Timing Array (IPTA) data combinations. (An illustrative code sketch follows this record.)
    Free, publicly-accessible full text available December 1, 2025
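    The following is a minimal, self-contained Python sketch of the spherically symmetric solar wind dispersion model whose amplitude the SWGP approach modulates. It is not the enterprise implementation; the 1 AU electron density, the elongation angle, and the observing frequency in the example are assumed values used purely for illustration.

      # Illustrative sketch (not the enterprise/SWGP code): dispersion measure and
      # timing delay from a spherically symmetric solar wind with
      # n_e(r) = n_earth * (1 AU / r)**2, integrated along the line of sight.
      import numpy as np

      AU_KM = 1.495978707e8          # 1 AU in km
      PC_KM = 3.0856775814913673e13  # 1 pc in km
      K_DM = 2.41e-4                 # dispersion constant, MHz^-2 pc cm^-3 s^-1 convention

      def solar_wind_dm(n_earth_cm3, elongation_rad):
          """DM (pc cm^-3) from a 1/r^2 solar wind for a pulsar at solar
          elongation `elongation_rad` (the Sun-Earth-pulsar angle)."""
          b_km = 1.0 * AU_KM  # Earth-Sun distance, taken as exactly 1 AU here
          theta = elongation_rad
          # Closed-form line-of-sight integral from Earth to infinity
          column_cm3_km = n_earth_cm3 * AU_KM**2 * (np.pi - theta) / (b_km * np.sin(theta))
          return column_cm3_km / PC_KM  # convert path length from km to pc

      def dm_delay_s(dm_pc_cm3, freq_mhz):
          """Cold-plasma dispersion delay (seconds) at observing frequency freq_mhz."""
          return dm_pc_cm3 / (K_DM * freq_mhz**2)

      # Example: an assumed 7.9 cm^-3 electron density at 1 AU, a pulsar 20 deg
      # from the Sun, observed at 150 MHz (LOFAR band).
      dm = solar_wind_dm(7.9, np.radians(20.0))
      print(f"DM_sw ~ {dm:.2e} pc cm^-3, delay ~ {dm_delay_s(dm, 150.0) * 1e6:.1f} us")

    At LOFAR frequencies even this small DM contribution corresponds to tens of microseconds of delay, which is why the variability of the 1 AU density matters for timing.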
  3. Aims. We combined the LOw-Frequency ARray (LOFAR) Two-metre Sky Survey (LoTSS) second data release (DR2) catalogue with gravitational lensing maps of the cosmic microwave background (CMB) to place constraints on the bias evolution of LoTSS-detected radio galaxies, and on the amplitude of matter perturbations. Methods. We constructed a flux-limited catalogue from LoTSS DR2 and analysed its harmonic-space cross-correlation with CMB lensing maps from Planck, C_ℓ^{gκ}, as well as its auto-correlation, C_ℓ^{gg}. We explored models describing the redshift evolution of the large-scale radio galaxy bias, discriminating between them through the combination of both C_ℓ^{gκ} and C_ℓ^{gg}. Fixing the bias evolution, we then used these data to place constraints on the amplitude of large-scale density fluctuations, parametrised by σ_8. Results. We report a detection of the C_ℓ^{gκ} signal at a significance of 26.6σ. We determined that a linear bias evolution of the form b_g(z) = b_{g,D}/D(z), where D(z) is the growth factor, provides a good description of the data, and we measured b_{g,D} = 1.41 ± 0.06 for a sample flux-limited at 1.5 mJy, using scales ℓ < 250 for C_ℓ^{gg} and ℓ < 500 for C_ℓ^{gκ}. At the sample's median redshift, we obtained b(z = 0.82) = 2.34 ± 0.10. Using σ_8 as a free parameter, while keeping other cosmological parameters fixed to the Planck values, we found σ_8 = 0.75^{+0.05}_{−0.04}. The result is in agreement with weak lensing surveys, and differs by 1σ from the Planck CMB constraints. We also attempted to detect the late-time integrated Sachs-Wolfe effect with LOFAR data; however, with the current sky coverage, the cross-correlation with CMB temperature maps is consistent with zero. Our results are an important step towards constraining cosmology with radio continuum surveys from LOFAR and other future large radio surveys. (A minimal cross-correlation sketch follows this record.)
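    Below is a minimal sketch of a harmonic-space cross-correlation between a galaxy overdensity map and a CMB lensing convergence map using healpy. It is not the paper's pipeline; the map resolution, the random stand-in maps, and the absence of any sky mask are illustrative assumptions.

      # Minimal sketch: estimate C_ell^{g kappa} and C_ell^{gg} with healpy,
      # using stand-in maps in place of the LoTSS and Planck products.
      import healpy as hp
      import numpy as np

      nside = 256
      npix = hp.nside2npix(nside)

      # Hypothetical inputs: per-pixel galaxy counts and a lensing kappa map.
      counts = np.random.poisson(5.0, npix).astype(float)     # stand-in for LoTSS counts
      kappa = hp.synfast(np.ones(3 * nside) * 1e-7, nside)     # stand-in for Planck kappa

      # Galaxy overdensity delta_g = n / n_bar - 1
      delta_g = counts / counts.mean() - 1.0

      # Cross-spectrum and auto-spectrum up to ell = 500
      cl_gk = hp.anafast(delta_g, kappa, lmax=500)
      cl_gg = hp.anafast(delta_g, lmax=500)

      ell = np.arange(cl_gk.size)
      print(cl_gk[:5], cl_gg[:5])

    In a real analysis the maps would be masked, and the measured spectra corrected for the mask and shot noise before being compared with a theory prediction.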
  4. ABSTRACT Covering ∼5600 deg² to rms sensitivities of ∼70–100 μJy beam⁻¹, the LOFAR Two-metre Sky Survey Data Release 2 (LoTSS-DR2) provides the largest low-frequency (∼150 MHz) radio catalogue to date, making it an excellent tool for large-area radio cosmology studies. In this work, we use LoTSS-DR2 sources to investigate the angular two-point correlation function of galaxies within the survey. We discuss systematics in the data and an improved methodology for generating random catalogues, compared to that used for LoTSS-DR1, before presenting the angular clustering for ∼900 000 sources ≥1.5 mJy with a peak signal-to-noise ≥7.5 across ∼80 per cent of the observed area. Using the clustering, we infer the bias assuming two evolutionary models. When fitting angular scales of 0.5° ≤ θ < 5° using a linear bias model, we find LoTSS-DR2 sources are biased tracers of the underlying matter, with a bias of b_C = 2.14^{+0.22}_{−0.20} (assuming constant bias) and b_E(z = 0) = 1.79^{+0.15}_{−0.14} (for an evolving model, inversely proportional to the growth factor), corresponding to b_E = 2.81^{+0.24}_{−0.22} at the median redshift of our sample, assuming the LoTSS Deep Fields redshift distribution is representative of our data. This reduces to b_C = 2.02^{+0.17}_{−0.16} and b_E(z = 0) = 1.67^{+0.12}_{−0.12} when allowing preferential redshift distributions from the Deep Fields to model our data. Whilst the clustering amplitude is slightly lower than LoTSS-DR1 (≥2 mJy), our study benefits from larger samples and improved redshift estimates. (A sketch of the clustering estimator follows this record.)
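    The sketch below illustrates the Landy-Szalay estimator commonly used for the angular two-point correlation function, w(θ) = (DD − 2DR + RR)/RR, from data and random catalogues. It is not the paper's code; the toy catalogues, flat-sky patch, and binning are assumptions chosen only to keep the example small.

      # Illustrative Landy-Szalay estimator for small catalogues (O(N^2) pair counting).
      import numpy as np

      def pair_counts(ra1, dec1, ra2, dec2, bins_deg, auto=False):
          """Histogram of angular separations (degrees) between two point sets."""
          ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
          cos_sep = (np.sin(dec1)[:, None] * np.sin(dec2)[None, :]
                     + np.cos(dec1)[:, None] * np.cos(dec2)[None, :]
                     * np.cos(ra1[:, None] - ra2[None, :]))
          sep = np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))
          if auto:  # avoid self-pairs and double counting in auto-correlations
              sep = sep[np.triu_indices_from(sep, k=1)]
          return np.histogram(sep.ravel(), bins=bins_deg)[0].astype(float)

      def landy_szalay(data, randoms, bins_deg):
          ra_d, dec_d = data
          ra_r, dec_r = randoms
          nd, nr = len(ra_d), len(ra_r)
          dd = pair_counts(ra_d, dec_d, ra_d, dec_d, bins_deg, auto=True) / (nd * (nd - 1) / 2)
          rr = pair_counts(ra_r, dec_r, ra_r, dec_r, bins_deg, auto=True) / (nr * (nr - 1) / 2)
          dr = pair_counts(ra_d, dec_d, ra_r, dec_r, bins_deg) / (nd * nr)
          return (dd - 2 * dr + rr) / rr

      # Toy usage on an unclustered sample (expect w(theta) ~ 0).
      rng = np.random.default_rng(0)
      data = (rng.uniform(0, 10, 500), rng.uniform(0, 10, 500))
      randoms = (rng.uniform(0, 10, 2000), rng.uniform(0, 10, 2000))
      bins = np.linspace(0.5, 5.0, 10)
      print(landy_szalay(data, randoms, bins))

    The quality of the random catalogue matters: it must reproduce the survey footprint and varying depth, which is exactly the methodology the paper improves for LoTSS-DR2.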
  5. Free, publicly-accessible full text available January 1, 2026
  6. Abstract A measurement is performed of Higgs bosons produced with high transverse momentum (p_T) via vector boson or gluon fusion in proton-proton collisions. The result is based on a data set with a center-of-mass energy of 13 TeV collected in 2016–2018 with the CMS detector at the LHC and corresponds to an integrated luminosity of 138 fb⁻¹. The decay of a high-p_T Higgs boson to a boosted bottom quark-antiquark pair is selected using large-radius jets and employing jet substructure and heavy-flavor taggers based on machine learning techniques. Independent regions targeting the vector boson and gluon fusion mechanisms are defined based on the topology of two quark-initiated jets with large pseudorapidity separation. The signal strengths for both processes are extracted simultaneously by performing a maximum likelihood fit to data in the large-radius jet mass distribution. The observed signal strengths relative to the standard model expectation are 4.9^{+1.9}_{−1.6} and 1.6^{+1.7}_{−1.5} for the vector boson and gluon fusion mechanisms, respectively. A differential cross section measurement is also reported in the simplified template cross section framework. (A toy maximum-likelihood-fit sketch follows this record.)
    Free, publicly-accessible full text available December 1, 2025
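    As a hedged illustration of the fitting technique named above (a binned maximum likelihood fit for a signal strength), the sketch below fits a toy signal-plus-background model to invented pseudo-data. The templates, observed counts, and resulting interval are not taken from the CMS analysis.

      # Toy binned maximum-likelihood extraction of a signal strength mu.
      import numpy as np
      from scipy.optimize import minimize_scalar

      # Hypothetical per-bin expectations in a jet-mass-like observable.
      signal_template = np.array([2.0, 8.0, 15.0, 8.0, 2.0])    # SM signal prediction
      background = np.array([120.0, 110.0, 100.0, 95.0, 90.0])  # background prediction
      observed = np.array([125, 121, 118, 101, 94])              # pseudo-data counts

      def nll(mu):
          """Poisson negative log-likelihood for expectations mu*s_i + b_i
          (constant terms dropped)."""
          expected = mu * signal_template + background
          return np.sum(expected - observed * np.log(expected))

      # Best-fit signal strength
      fit = minimize_scalar(nll, bounds=(-5.0, 20.0), method="bounded")
      mu_hat = fit.x

      # Rough 68% interval from the points where the likelihood rises by 0.5
      scan = np.linspace(-5, 20, 2001)
      delta = np.array([nll(m) for m in scan]) - nll(mu_hat)
      interval = scan[delta < 0.5]
      print(f"mu_hat = {mu_hat:.2f}, 68% interval ~ [{interval.min():.2f}, {interval.max():.2f}]")

    The real measurement fits both production mechanisms simultaneously and profiles many nuisance parameters; this toy keeps a single parameter to show the mechanics of the likelihood scan.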
  7. Abstract Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In these studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. The approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing the throughput performance. We emphasize that the SONIC approach enables both high coprocessor usage and the portability to run workflows on different types of coprocessors. (A generic inference-offload sketch follows this record.)
    Free, publicly-accessible full text available December 1, 2025
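    The sketch below illustrates the as-a-service pattern described above using the NVIDIA Triton Python client, on which SONIC builds. The server address, model name, and tensor names are assumptions, and the exact client API can differ between tritonclient versions.

      # Minimal sketch of offloading one ML inference call to a remote server,
      # so GPUs can live outside the CPU-bound event-processing workflow.
      import numpy as np
      import tritonclient.http as httpclient

      def remote_infer(batch: np.ndarray,
                       url: str = "localhost:8000",
                       model: str = "jet_tagger") -> np.ndarray:
          """Send one batch to the inference server and return the model output.
          The model and tensor names here are hypothetical."""
          client = httpclient.InferenceServerClient(url=url)

          inputs = httpclient.InferInput("input__0", batch.shape, "FP32")
          inputs.set_data_from_numpy(batch.astype(np.float32))

          response = client.infer(model_name=model, inputs=[inputs])
          return response.as_numpy("output__0")

      # Hypothetical usage inside a CPU event loop: only the ML inference is
      # offloaded; the rest of the reconstruction stays on the local CPU.
      if __name__ == "__main__":
          fake_jet_features = np.random.rand(64, 16).astype(np.float32)
          scores = remote_infer(fake_jet_features)
          print(scores.shape)

    The design point is that the client code is identical whether the server runs on a local GPU, a remote GPU farm, or a CPU fallback, which is what gives the approach its portability.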
  8. Abstract A search is reported for charge-parity (CP) violation in D⁰ → K_S⁰ K_S⁰ decays, using data collected in proton–proton collisions at √s = 13 TeV recorded by the CMS experiment in 2018. The analysis uses a dedicated data set that corresponds to an integrated luminosity of 41.6 fb⁻¹, which consists of about 10 billion events containing a pair of b hadrons, nearly all of which decay to charm hadrons. The flavor of the neutral D meson is determined by the pion charge in the reconstructed decays D*⁺ → D⁰π⁺ and D*⁻ → D̄⁰π⁻. The CP asymmetry in D⁰ → K_S⁰ K_S⁰ is measured to be A_CP(K_S⁰ K_S⁰) = (6.2 ± 3.0 ± 0.2 ± 0.8)%, where the three uncertainties represent the statistical uncertainty, the systematic uncertainty, and the uncertainty in the measurement of the CP asymmetry in the D⁰ → K_S⁰π⁺π⁻ decay. This is the first CP asymmetry measurement by CMS in the charm sector, as well as the first to utilize a fully hadronic final state. (A toy asymmetry-calculation sketch follows this record.)
    Free, publicly-accessible full text available December 1, 2025
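    As a small illustration of the quantity being measured, the sketch below computes a raw tagged asymmetry and its binomial statistical uncertainty from invented yields. It is not the CMS measurement and omits the corrections described in the abstract.

      # Toy raw asymmetry A_raw = (N(D0) - N(D0bar)) / (N(D0) + N(D0bar)).
      import math

      def raw_asymmetry(n_d0: int, n_d0bar: int):
          """Return (asymmetry, statistical uncertainty) for two tagged yields."""
          n_tot = n_d0 + n_d0bar
          a = (n_d0 - n_d0bar) / n_tot
          # Binomial error propagation: sigma_A = sqrt((1 - A^2) / N_tot)
          sigma = math.sqrt((1.0 - a * a) / n_tot)
          return a, sigma

      # Hypothetical signal yields from pion-charge tagging (D*+ vs D*- samples).
      a, sigma = raw_asymmetry(n_d0=1150, n_d0bar=1020)
      print(f"A_raw = {a:.3f} +/- {sigma:.3f}")
      # In the real analysis the raw asymmetry is corrected for production and
      # detection asymmetries using the D0 -> KS pi+ pi- control channel.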
  9. Abstract This paper describes the Combine software package used for statistical analyses by the CMS Collaboration. The package, originally designed to perform searches for a Higgs boson and the combined analysis of those searches, has evolved to become the statistical analysis tool presently used in the majority of measurements and searches performed by the CMS Collaboration. It is not specific to the CMS experiment, and this paper is intended to serve as a reference for users outside of the CMS Collaboration, providing an outline of the most salient features and capabilities. Readers are provided with the possibility to run Combine and reproduce examples provided in this paper using a publicly available container image. Since the package is constantly evolving to meet the demands of ever-increasing data sets and analysis sophistication, this paper cannot cover all details of Combine. However, the online documentation referenced within this paper provides an up-to-date and complete user guide. (A toy profile-likelihood sketch follows this record.)
    Free, publicly-accessible full text available December 1, 2025
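    To illustrate the kind of statistical model Combine constructs (this is not Combine itself), the sketch below profiles a single-bin counting experiment with a signal strength and one log-normal nuisance parameter. All yields and uncertainties are invented.

      # Toy profile likelihood: Poisson(n | mu*s + b*kappa^theta) x Gauss(theta).
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import poisson, norm

      n_obs = 15        # observed events (hypothetical)
      s_exp = 4.0       # expected signal yield for mu = 1
      b_exp = 10.0      # nominal expected background
      kappa = 1.10      # 10% log-normal uncertainty on the background

      def nll(params):
          """Negative log-likelihood of the counting model."""
          mu, theta = params
          expected = mu * s_exp + b_exp * kappa ** theta
          if expected <= 0:
              return np.inf
          return -(poisson.logpmf(n_obs, expected) + norm.logpdf(theta))

      # Global fit, then a profile scan over mu (theta re-minimised at each point).
      global_fit = minimize(nll, x0=[1.0, 0.0], method="Nelder-Mead")
      mu_hat = global_fit.x[0]

      def profile_nll(mu):
          return minimize(lambda t: nll([mu, t[0]]), x0=[0.0], method="Nelder-Mead").fun

      scan = np.linspace(0.0, 5.0, 51)
      q = 2 * (np.array([profile_nll(m) for m in scan]) - global_fit.fun)
      print(f"mu_hat = {mu_hat:.2f}; mu values with q(mu) < 1: "
            f"[{scan[q < 1].min():.2f}, {scan[q < 1].max():.2f}]")

    Combine builds far richer versions of this likelihood (many channels, shapes, and hundreds of nuisance parameters) from a declarative datacard, but the profiling logic is the same in spirit.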
  10. Abstract The CERN LHC provided proton and heavy ion collisions during its Run 2 operation period from 2015 to 2018. Proton-proton collisions reached a peak instantaneous luminosity of 2.1 × 10³⁴ cm⁻² s⁻¹, twice the initial design value, at √s = 13 TeV. The CMS experiment records a subset of the collisions for further processing as part of its online selection of data for physics analyses, using a two-level trigger system: the Level-1 trigger, implemented in custom-designed electronics, and the high-level trigger, a streamlined version of the offline reconstruction software running on a large computer farm. This paper presents the performance of the CMS high-level trigger system during LHC Run 2 for physics objects, such as leptons, jets, and missing transverse momentum, which meet the broad needs of the CMS physics program and the challenge of the evolving LHC and detector conditions. Sophisticated algorithms that were originally used in offline reconstruction were deployed online. Highlights include a machine-learning b tagging algorithm and a reconstruction algorithm for tau leptons that decay hadronically. (A schematic two-level trigger sketch follows this record.)
    Free, publicly-accessible full text available November 1, 2025
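    The sketch below is a schematic of the two-level trigger idea described above, not the CMS implementation. The event fields, thresholds, and rates are invented purely to show a fast first-level decision followed by a more refined high-level selection.

      # Schematic two-stage trigger: a cheap Level-1-style cut on coarse quantities,
      # then an HLT-style selection on refined quantities for events that pass L1.
      import random

      def level1_accept(event):
          """Fast hardware-style decision using coarse trigger primitives."""
          return event["coarse_muon_pt"] > 22.0 or event["coarse_jet_et"] > 180.0

      def hlt_accept(event):
          """Software decision using streamlined-reconstruction quantities."""
          return (event["muon_pt"] > 24.0 and event["muon_isolation"] < 0.15) \
              or (event["jet_pt"] > 200.0 and event["btag_score"] > 0.7)

      def make_fake_event():
          pt = random.expovariate(1 / 15.0)
          return {
              "coarse_muon_pt": pt * random.uniform(0.8, 1.2),
              "muon_pt": pt,
              "muon_isolation": random.uniform(0.0, 0.5),
              "coarse_jet_et": random.expovariate(1 / 60.0),
              "jet_pt": random.expovariate(1 / 60.0),
              "btag_score": random.random(),
          }

      events = [make_fake_event() for _ in range(100_000)]
      l1_pass = [e for e in events if level1_accept(e)]
      hlt_pass = [e for e in l1_pass if hlt_accept(e)]
      print(f"L1 rate: {len(l1_pass)/len(events):.3%}, HLT rate: {len(hlt_pass)/len(events):.3%}")

    The key constraint in the real system is budget: the Level-1 stage must decide in microseconds in custom electronics, while the high-level trigger can afford offline-quality algorithms on the small fraction of events that survive.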