
Search for: All records

Creators/Authors contains: "Qin, Y."


  1. Free, publicly-accessible full text available October 1, 2022
  2. This paper proposes a post-experimental field-data reuse method to test single carrier modulation (SCM) and orthogonal frequency division multiplexing (OFDM) signals interchangeably for multiple-access underwater acoustic (UWA) communications. We call this approach cross evaluation: it transforms a set of SCM or OFDM post-experimental field data to the corresponding OFDM or SCM scheme under test (SUT) via linear matrix operations such as the fast Fourier transform (FFT) and its inverse (IFFT). At the receiver side, we derive a general framework of turbo equalization (TEQ) that alters the two physical-layer schemes while keeping the passband transmitted and received data unchanged. Efficient techniques such as pre-cursor and post-cursor interference cancellation (IC) and overlap-add (OLA) operations inherently enhance the equivalence of the input/output (I/O) system models of the SCM and OFDM. The proposed approach bridges the gap between SCM and OFDM and evaluates the two physical-layer schemes under similar or tougher test conditions. Experimental results from the undersea 2008 Surface Processes and Acoustic Communications Experiment (SPACE08) verify the feasibility of the cross-evaluation approach in terms of the bit error rate (BER) benchmark. (An illustrative sketch of the FFT/overlap-add equivalence appears after this result list.)
  3. Large scientific facilities are unique and complex infrastructures that have become fundamental instruments for enabling high-quality, world-leading research on scientific problems at unprecedented scales. Cyberinfrastructure (CI) is an essential component of these facilities, providing the user community with access to data, data products, and services with the potential to transform data into knowledge. However, the timely evolution of the CI available at large facilities is challenging and can leave science communities' requirements unsatisfied. Furthermore, integrating CI across multiple facilities as part of a scientific workflow is hard, resulting in data silos. In this paper, we explore how science gateways can provide improved user experiences and services that may not be offered at large-facility datacenters. Using a science gateway supported by the Science Gateway Community Institute, which provides subscription-based delivery of streamed data and data products from the NSF Ocean Observatories Initiative (OOI), we propose a system that enables streaming-based capabilities and workflows using data from large facilities, such as the OOI, in a scalable manner. We leverage data-infrastructure building blocks, such as the Virtual Data Collaboratory, which provides data and computing capabilities in the continuum, to efficiently and collaboratively integrate multiple data-centric CIs, build data-driven workflows, and connect large-facility data sources with NSF-funded CI, such as XSEDE. We also introduce architectural solutions for running these workflows using dynamically provisioned federated CI. (A minimal sketch of the subscription-based streaming pattern appears after this result list.)
  4. Abstract The accurate simulation of additional interactions at the ATLAS experiment for the analysis of proton–proton collisions delivered by the Large Hadron Collider presents a significant challenge to the computing resources. During LHC Run 2 (2015–2018), there were up to 70 inelastic interactions per bunch crossing, which need to be accounted for in Monte Carlo (MC) production. In this document, a new method to account for these additional interactions in the simulation chain is described. Instead of sampling the inelastic interactions and adding their energy deposits to a hard-scatter interaction one by one, the inelastic interactions are presampled, independently of the hard scatter, and stored as combined events. Consequently, for each hard-scatter interaction, only one such presampled event needs to be added as part of the simulation chain. For the Run 2 simulation chain, with an average of 35 interactions per bunch crossing, this new method provides a substantial reduction in MC-production CPU needs of around 20%, while reproducing the properties of the reconstructed quantities relevant for physics analyses with good accuracy. (A toy sketch of the presampling idea appears after this result list.)
    Free, publicly-accessible full text available December 1, 2023
  5. Abstract The ATLAS experiment at the Large Hadron Collider has a broad physics programme, ranging from precision measurements to direct searches for new particles and new interactions, requiring ever larger and ever more accurate datasets of simulated Monte Carlo events. Detector simulation with Geant4 is accurate but requires significant CPU resources. Over the past decade, ATLAS has developed and utilized tools that replace the most CPU-intensive component of the simulation, the calorimeter shower simulation, with faster simulation methods. Here, AtlFast3, the next generation of high-accuracy fast simulation in ATLAS, is introduced. AtlFast3 combines parameterized approaches with machine-learning techniques and is deployed to meet the current and future computing challenges and simulation needs of the ATLAS experiment. With highly accurate performance and significantly improved modelling of substructure within jets, AtlFast3 can simulate large numbers of events for a wide range of physics processes. (A toy sketch of the hybrid fast-simulation dispatch appears after this result list.)
    Free, publicly-accessible full text available December 1, 2023
  6. Free, publicly-accessible full text available May 1, 2023
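
Illustrative sketches for the results above. Each is a hedged, minimal illustration of the technique the abstract describes, not the authors' implementation; all function names, parameter values, and data below are invented for illustration.

For result 2, a minimal NumPy sketch of the FFT/overlap-add (OLA) relationship that cross evaluation exploits: folding the channel tail of a single-carrier block back onto its head restores circular convolution, so the same recorded data can be equalized in the frequency domain as if it were one OFDM symbol. The block length, channel, and QPSK symbols are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 64, 8  # block length and channel length (illustrative values)

# A QPSK block standing in for recorded single-carrier field data.
x = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N) / np.sqrt(2)
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)

# Linear convolution with the channel, as for an SCM block sent
# without a cyclic prefix.
y_lin = np.convolve(x, h)  # length N + L - 1

# Overlap-add: fold the channel tail back onto the head of the block.
# This turns linear convolution into circular convolution, i.e. the
# block now looks like one OFDM symbol with a cyclic prefix.
y_ola = y_lin[:N].copy()
y_ola[:L - 1] += y_lin[N:]

# With circular convolution, one-tap frequency-domain equalization applies.
x_hat = np.fft.ifft(np.fft.fft(y_ola) / np.fft.fft(h, N))
assert np.allclose(x_hat, x)
```

The assertion passing shows the I/O equivalence the abstract relies on: the passband data are untouched, only the scheme under test changes.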
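For result 3, a minimal sketch of the subscription-based streaming pattern the gateway provides, using only the Python standard library as a stand-in for a real broker; the topic name, payload fields, and workflow step are hypothetical, not the actual OOI or gateway API.

```python
import queue
import threading
import time

# Stand-in for a broker delivering streamed data products to a subscriber.
stream = queue.Queue()

def producer():
    """Stand-in for the gateway pushing OOI data products to subscribers."""
    for i in range(5):
        stream.put({"topic": "ooi/ctd/temperature", "seq": i, "value": 10.0 + i})
        time.sleep(0.01)
    stream.put(None)  # end-of-stream sentinel

def workflow_step(record):
    """Stand-in for a data-driven workflow stage running on federated CI."""
    return record["value"] * 1.8 + 32  # e.g., convert Celsius to Fahrenheit

threading.Thread(target=producer, daemon=True).start()
while (record := stream.get()) is not None:
    print(record["seq"], workflow_step(record))
```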
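For result 4, a toy Python sketch contrasting the old per-hard-scatter pileup overlay with the presampled approach the abstract describes; the event representation, library size, and deposit model are invented stand-ins.

```python
import random

random.seed(0)

# Toy stand-in: an "event" is just a list of energy deposits (floats).
def sample_minbias_event():
    return [random.expovariate(1.0) for _ in range(3)]

MU = 35  # average interactions per bunch crossing (Run 2 figure from the text)

# Old approach: for EACH hard scatter, draw MU inelastic events and sum them.
def overlay_on_the_fly(hard_scatter):
    pileup = [d for _ in range(MU) for d in sample_minbias_event()]
    return hard_scatter + pileup

# New approach: presample combined pileup events ONCE, independently of any
# hard scatter, then attach one presampled event per hard scatter.
presampled = [
    [d for _ in range(MU) for d in sample_minbias_event()]
    for _ in range(10)  # small toy library; the real one is much larger
]

def overlay_presampled(hard_scatter, library=presampled):
    return hard_scatter + random.choice(library)

hs = [50.0, 20.0]  # toy hard-scatter deposits
print(len(overlay_on_the_fly(hs)), len(overlay_presampled(hs)))
```

The CPU saving comes from amortization: the expensive pileup sampling is done once per library entry rather than once per hard-scatter event.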
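For result 5, a toy sketch of the hybrid fast-simulation idea behind AtlFast3: dispatch each particle to a parameterized or an ML-based shower model. The dispatch rule, thresholds, and both model stubs are hypothetical illustrations, not the actual AtlFast3 logic.

```python
def parameterized_shower(particle):
    # Stand-in: deposit fixed fractions of the energy in a few layers.
    return [particle["energy"] * f for f in (0.7, 0.2, 0.1)]

def ml_shower(particle):
    # Stand-in for a trained generative shower model.
    return [particle["energy"] * f for f in (0.5, 0.3, 0.15, 0.05)]

def simulate_shower(particle):
    # Hypothetical rule: use the ML model where jet-substructure modelling
    # matters most, the parameterized model elsewhere.
    if particle["type"] == "pion" and particle["energy"] > 16.0:
        return ml_shower(particle)
    return parameterized_shower(particle)

for p in ({"type": "photon", "energy": 50.0}, {"type": "pion", "energy": 200.0}):
    print(p["type"], simulate_shower(p))
```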