Search for: All records

Creators/Authors contains: "Wang, P."


  1. Free, publicly-accessible full text available August 1, 2022
  2. Free, publicly-accessible full text available June 1, 2023
  3. Prediction of ice formation in clouds presents one of the grand challenges in the atmospheric sciences. Immersion freezing initiated by ice-nucleating particles (INPs) is the dominant pathway of primary ice crystal formation in mixed-phase clouds, where supercooled water droplets and ice crystals coexist, with important implications for the hydrological cycle and climate. However, derivation of INP number concentrations from an ambient aerosol population in cloud-resolving and climate models remains highly uncertain. We conducted an aerosol–ice formation closure pilot study using a field-observational approach to evaluate the predictive capability of immersion freezing INPs. The closure study relies on collocated measurements of the ambient size-resolved and single-particle composition and INP number concentrations. The acquired particle data serve as input in several immersion freezing parameterizations, which are employed in cloud-resolving and climate models, for prediction of INP number concentrations. We discuss in detail one closure case study in which a front passed through the measurement site, resulting in a change of ambient particle and INP populations. We achieved closure in some circumstances within uncertainties, but we emphasize the need for freezing parameterization of potentially missing INP types and evaluation of the choice of parameterization to be employed. Overall, this closure pilot study aims to assess the level of parameter details and measurement strategies needed to achieve aerosol–ice formation closure. The closure approach is designed to accurately guide immersion freezing schemes in models, and ultimately identify the leading causes for climate model bias in INP predictions.
    Free, publicly-accessible full text available October 1, 2022
  4. This paper proposes a computer vision framework aimed at segmenting hot steel sections and contributing to rolling precision. The steel section dimensions are calculated for the purpose of automating a high-temperature rolling process. A structured forest algorithm, along with the developed steel bar edge detection and regression algorithms, extracts the edges of the high-temperature bars in optical videos captured by a GoPro camera. To quantify the impact of the noise that affects the segmentation process and the final diameter measurements, a weighted variance is calculated, providing a level of trust in the measurements (see the weighted-variance sketch after this list). The results show an accuracy in line with the rolling standards, i.e., a root mean square error of less than 2.5 mm.
  5. Bayesian neural networks are powerful inference methods that account for randomness in both the data and the network model. Uncertainty quantification at the output of neural networks is critical, especially for applications such as autonomous driving and hazardous weather forecasting. However, approaches for the theoretical analysis of Bayesian neural networks remain limited. This paper takes a step toward mathematical quantification of uncertainty in neural network models and proposes a cubature-rule-based, computationally efficient uncertainty quantification approach that captures layer-wise uncertainties of Bayesian neural networks. The proposed approach approximates the first two moments of the posterior distribution of the parameters by propagating cubature points across the network nonlinearities (a minimal sketch of this cubature transform appears after this list). Simulation results show that the proposed approach can achieve more diverse layer-wise uncertainty quantification results for neural networks with a fast convergence rate.
  6. We present a database and analyze ground motions recorded during three events that occurred as part of the July 2019 Ridgecrest earthquake sequence: a moment magnitude (M) 6.5 foreshock on a left‐lateral cross fault in the Salt Wells Valley fault zone, an M 5.5 foreshock in the Paxton Ranch fault zone, and the M 7.1 mainshock, also occurring in the Paxton Ranch fault zone. We collected and uniformly processed 1483 three‐component recordings from an array of 824 sensors spanning 10 seismographic networks. We developed site metadata using available data and multiple models for the time‐averaged shear‐wave velocity in the upper 30 m (VS30) and for basin depth terms. We processed ground motions using Next Generation Attenuation (NGA) procedures and computed intensity measures including spectral acceleration at a number of oscillator periods and inelastic response spectra. We compared elastic and inelastic response spectra to seismic design spectra in building codes to evaluate the damage potential of the ground motions at spatially distributed sites. Residuals of the observed spectral accelerations relative to the NGA‐West2 ground‐motion models (GMMs) show good average agreement between observations and model predictions (event terms between about −0.3 and 0.5 for peak ground acceleration to 5 s). The average attenuation with distance is also well captured by the empirical NGA‐West2 GMMs, although azimuthal variations in attenuation were observed that are not captured by the GMMs. An analysis considering directivity and fault‐slip heterogeneity for the M 7.1 event demonstrates that the dispersion in the near‐source ground‐motion residuals can be reduced.
  7. The accurate simulation of additional interactions at the ATLAS experiment for the analysis of proton–proton collisions delivered by the Large Hadron Collider presents a significant challenge to the computing resources. During the LHC Run 2 (2015–2018), there were up to 70 inelastic interactions per bunch crossing, which need to be accounted for in Monte Carlo (MC) production. In this document, a new method to account for these additional interactions in the simulation chain is described. Instead of sampling the inelastic interactions and adding their energy deposits to a hard-scatter interaction one-by-one, the inelastic interactions are presampled, independent of the hard scatter, and stored as combined events. Consequently, for each hard-scatter interaction, only one such presampled event needs to be added as part of the simulation chain (see the presampling sketch after this list). For the Run 2 simulation chain, with an average of 35 interactions per bunch crossing, this new method provides a substantial reduction in MC production CPU needs of around 20%, while reproducing the properties of the reconstructed quantities relevant for physics analyses with good accuracy.
    Free, publicly-accessible full text available December 1, 2023
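Entry 4 reports a confidence-weighted variance as the level of trust in the per-frame diameter measurements. The sketch below is only an illustration of that statistic under assumed inputs; the per-frame diameters and the edge-detection confidences used as weights are hypothetical placeholders, not the paper's pipeline.

```python
# Minimal sketch of a confidence-weighted variance as a "trust" metric
# for per-frame diameter estimates (assumed, illustrative data).
import numpy as np

def weighted_diameter_stats(diameters_mm, edge_confidences):
    """diameters_mm: per-frame bar diameters derived from detected edges.
    edge_confidences: hypothetical per-frame edge-detection confidences in [0, 1].
    Returns the weighted mean diameter and the weighted variance (trust level)."""
    d = np.asarray(diameters_mm, dtype=float)
    w = np.asarray(edge_confidences, dtype=float)
    w = w / w.sum()                      # normalise the weights
    mean = np.sum(w * d)                 # confidence-weighted mean diameter
    var = np.sum(w * (d - mean) ** 2)    # weighted variance: larger = noisier frames
    return mean, var

# Example with made-up measurements around a nominal 50 mm section.
mean, var = weighted_diameter_stats([49.8, 50.4, 50.1, 52.0], [0.9, 0.8, 0.95, 0.3])
print(f"diameter ~ {mean:.2f} mm, weighted variance = {var:.3f} mm^2")
```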
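Entry 5 approximates the first two moments of a Gaussian quantity pushed through a network nonlinearity by propagating cubature points. The sketch below shows the standard third-degree spherical-radial cubature transform for a single nonlinearity; the Gaussian input (mu, P) and the tanh activation are assumptions chosen for illustration, not the authors' network or code.

```python
# Minimal sketch of the spherical-radial cubature transform:
# approximate the mean and covariance of y = g(x), x ~ N(mu, P),
# using 2n equally weighted cubature points.
import numpy as np

def cubature_moments(g, mu, P):
    """g: callable mapping an n-vector to an m-vector (e.g. a layer activation).
    mu: (n,) input mean; P: (n, n) input covariance.
    Returns the approximated mean and covariance of g(x)."""
    n = mu.shape[0]
    S = np.linalg.cholesky(P)                                 # square root of P
    units = np.concatenate([np.eye(n), -np.eye(n)], axis=0)   # +/- unit vectors
    points = mu + np.sqrt(n) * units @ S.T                    # 2n cubature points
    ys = np.array([g(p) for p in points])                     # propagate through g
    y_mean = ys.mean(axis=0)                                  # equal weights 1/(2n)
    y_cov = (ys - y_mean).T @ (ys - y_mean) / (2 * n)
    return y_mean, y_cov

# Example: moments of a tanh "layer" with a hypothetical 2-D Gaussian input.
mu = np.array([0.5, -1.0])
P = np.array([[0.2, 0.05], [0.05, 0.1]])
mean, cov = cubature_moments(np.tanh, mu, P)
print(mean, np.diag(cov))
```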
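Entry 7 replaces the per-event overlay of freshly sampled inelastic interactions with a pool of presampled combined events that is produced once and reused. The toy sketch below only illustrates that bookkeeping change; the "energy deposits", pool size, and function names are placeholders with no relation to the ATLAS software.

```python
# Toy sketch of pile-up presampling: build a pool of combined pile-up events
# once, then overlay a single presampled event on each hard scatter.
import random

def simulate_pileup(rng, n_interactions=35):
    # Stand-in for simulating one bunch crossing of inelastic interactions;
    # each "interaction" is just a placeholder energy deposit.
    return sum(rng.random() for _ in range(n_interactions))

def overlay_per_event(hard_scatters, rng):
    # Old chain: re-sample and overlay the pile-up for every hard scatter (costly).
    return [hs + simulate_pileup(rng) for hs in hard_scatters]

def overlay_presampled(hard_scatters, presampled_pool, rng):
    # New chain: reuse one presampled combined event per hard scatter.
    return [hs + rng.choice(presampled_pool) for hs in hard_scatters]

rng = random.Random(0)
pool = [simulate_pileup(rng) for _ in range(100)]   # produced once, stored
hard = [10.0, 12.5, 9.7]                            # placeholder hard-scatter deposits
print(overlay_presampled(hard, pool, rng))
```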