

Title: Emergence of pseudo-time during optimal Monte Carlo sampling and temporal aspects of symmetry breaking and restoration

We argue that one can associate a pseudo-time with sequences of configurations generated in the course of classical Monte Carlo simulations for a single-minimum bound state if the sampling is optimal. In this regime, the sampling rates can, under special circumstances, be calibrated against the relaxation rate and frequency of motion of an actual physical system. The latter possibility is linked to the optimal sampling regime being a universal crossover separating two distinct suboptimal sampling regimes analogous to the physical phenomena of diffusion and effusion, respectively. Bound states break symmetry; one may thus regard the pseudo-time as a quantity emerging together with the bound state. Conversely, when transport among distinct bound states takes place, thus restoring symmetry, a pseudo-time can no longer be defined. One can still quantify activation barriers if those barriers are smooth, but the simulation becomes impractically slow and pertains to overdamped transport only. Specially designed Monte Carlo moves that bypass activation barriers, so as to accelerate sampling of the thermodynamics, amount to effusive transport; they lead to severe under-sampling of the transition-state configurations that separate distinct bound states while destroying this universality. Implications of the present findings for simulations of glassy liquids are discussed.
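The diffusive/effusive crossover can be illustrated with a generic Metropolis sampler of a single-minimum (harmonic) bound state. This is a minimal sketch under assumed parameters, not the paper's calibration procedure; the step sizes and temperature are arbitrary choices:

```python
import math
import random

def metropolis_chain(step, n_steps, beta=1.0, seed=0):
    """Sample x ~ exp(-beta * x^2 / 2) (a single-minimum bound state)
    with symmetric uniform proposals of half-width `step`.
    Returns the acceptance fraction of the chain."""
    rng = random.Random(seed)
    x, accepted = 0.0, 0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis criterion for the harmonic energy E(x) = x^2 / 2
        if rng.random() < math.exp(-beta * (x_new**2 - x**2) / 2):
            x, accepted = x_new, accepted + 1
    return accepted / n_steps

# Tiny steps: nearly every move is accepted, but the walk decorrelates
# slowly (diffusion-like). Very large steps: most moves are rejected
# (effusion-like). The optimal regime lies at the crossover in between.
for step in (0.1, 1.0, 10.0):
    print(step, metropolis_chain(step, 20000))
```

Sweeping the step size makes the two suboptimal regimes visible directly in the acceptance fraction, which decreases monotonically as the proposal width grows past the width of the bound state.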

Award ID(s): 1956389
NSF-PAR ID: 10403693
Publisher / Repository: American Institute of Physics
Journal Name: The Journal of Chemical Physics
Volume: 158
Issue: 12
ISSN: 0021-9606
Sponsoring Org: National Science Foundation
More Like this
  1. Sequential decision-making under uncertainty is present in many important problems. Two popular approaches for tackling such problems are reinforcement learning and online search (e.g., Monte Carlo tree search). While the former learns a policy by interacting with the environment (typically done before execution), the latter uses a generative model of the environment to sample promising action trajectories at decision time. Decision-making is particularly challenging in non-stationary environments, where the environment in which an agent operates can change over time. Both approaches have shortcomings in such settings: on the one hand, policies learned before execution become stale when the environment changes, and relearning takes both time and computational effort. Online search, on the other hand, can return sub-optimal actions when there are limitations on allowed runtime. In this paper, we introduce Policy-Augmented Monte Carlo tree search (PA-MCTS), which combines action-value estimates from an out-of-date policy with an online search using an up-to-date model of the environment. We prove theoretical results showing conditions under which PA-MCTS selects the one-step optimal action and also bound the error accrued while following PA-MCTS as a policy. We compare and contrast our approach with AlphaZero, another hybrid planning approach, and Deep Q Learning on several OpenAI Gym environments. Through extensive experiments, we show that under non-stationary settings with limited time constraints, PA-MCTS outperforms these baselines.
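The core idea, blending stale action-value estimates with fresh search returns, can be sketched as a convex combination at action-selection time. The weighting parameter `alpha` and the dictionary interface are illustrative assumptions, not the paper's exact formulation:

```python
def pa_mcts_action(q_policy, q_search, alpha):
    """Blend stale policy values with fresh search returns.
    q_policy: action -> value from the pre-trained (possibly stale) policy
    q_search: action -> value estimated by online search with the current model
    alpha: trust placed in the stale policy (0 = pure search, 1 = pure policy)
    The convex combination is a sketch of the idea, not the paper's
    exact rule."""
    actions = q_policy.keys() & q_search.keys()
    return max(actions,
               key=lambda a: alpha * q_policy[a] + (1 - alpha) * q_search[a])

# The stale policy prefers 'left'; the up-to-date search prefers 'right'.
stale = {"left": 1.0, "right": 0.2}
search = {"left": 0.1, "right": 0.9}
print(pa_mcts_action(stale, search, alpha=0.2))  # low alpha: search dominates
```

Raising `alpha` shifts trust back toward the pre-trained policy, which is the sensible direction when the environment has drifted little since training.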
  2. Abstract

    Accurate delineation of compound flood hazard requires joint simulation of rainfall‐runoff and storm surges within high‐resolution flood models, which may be computationally expensive. There is a need for supplementing physical models with efficient, probabilistic methodologies for compound flood hazard assessment that can be applied under a range of climate and environment conditions. Here we propose an extension to the joint probability optimal sampling method (JPM‐OS), which has been widely used for storm surge assessment, and apply it for rainfall‐surge compound hazard assessment under climate change at the catchment‐scale. We utilize thousands of synthetic tropical cyclones (TCs) and physics‐based models to characterize storm surge and rainfall hazards at the coast. Then we implement a Bayesian quadrature optimization approach (JPM‐OS‐BQ) to select a small number (∼100) of storms, which are simulated within a high‐resolution flood model to characterize the compound flood hazard. We show that the limited JPM‐OS‐BQ simulations can capture historical flood return levels within 0.25 m compared to a high‐fidelity Monte Carlo approach. We find that the combined impact of 2100 sea‐level rise (SLR) and TC climatology changes on flood hazard in the Cape Fear Estuary, NC will increase the 100‐year flood extent by 27% and increase inundation volume by 62%. Moreover, we show that probabilistic incorporation of SLR in the JPM‐OS‐BQ framework leads to different 100‐year flood maps compared to using a single mean SLR projection. Our framework can be applied to catchments across the United States Atlantic and Gulf coasts under a variety of climate and environment scenarios.
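As a point of reference, the high-fidelity Monte Carlo baseline amounts to estimating return levels empirically from a large storm ensemble. The sketch below uses synthetic annual maxima and an empirical quantile; the distribution, sample size, and every number in it are illustrative assumptions, not the study's data:

```python
import random

def return_level(annual_maxima, return_period):
    """Empirical return level: the flood stage exceeded on average once
    every `return_period` years, i.e. the (1 - 1/T) quantile of the
    annual-maximum sample. A brute-force Monte Carlo baseline of the
    kind the limited JPM-OS-BQ storm selection is benchmarked against."""
    xs = sorted(annual_maxima)
    rank = int(len(xs) * (1 - 1 / return_period))
    return xs[min(rank, len(xs) - 1)]

# Synthetic annual maxima (metres); purely illustrative numbers.
rng = random.Random(1)
maxima = [rng.expovariate(1.0) for _ in range(10000)]
print(round(return_level(maxima, 100), 2))  # theory: ~ln(100) ≈ 4.6 for Exp(1)
```

The contrast the abstract draws is between running thousands of such storms through an expensive flood model versus selecting roughly a hundred informative ones that reproduce these quantiles to within 0.25 m.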

  3. Faceted nanoparticles can be used as building blocks to assemble nanomaterials with exceptional optical and catalytic properties. Recent studies have shown that surface functionalization of such nanoparticles with organic molecules, polymer chains, or DNA can be used to control the separation distance and orientation of particles within their assemblies. In this study, we computationally investigate the mechanism of assembly of nanocubes grafted with short-chain molecules. Our approach involves computing the interaction free energy landscape of a pair of such nanocubes via Monte Carlo simulations and using the Dijkstra algorithm to determine the minimum free energy pathway connecting key states in the landscape. We find that the assembly pathway of nanocubes is very rugged involving multiple energy barriers and metastable states. Analysis of nanocube configurations along the pathway reveals that the assembly mechanism is dominated by sliding motion of nanocubes relative to each other punctuated by their local dissociation at grafting points involving lineal separation and rolling motions. The height of energy barriers between metastable states depends on factors such as the interaction strength and surface roughness of the nanocubes and the steric repulsion from the grafts. These results imply that the observed assembly configuration of nanocubes depends not only on their globally stable minimum free energy state but also on the assembly pathway leading to this state. The free energy landscapes and assembly pathways presented in this study along with the proposed guidelines for engineering such pathways should be useful to researchers aiming to achieve uniform nanostructures from self-assembly of faceted nanoparticles. 
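Finding a minimum free energy pathway with Dijkstra's algorithm can be sketched on a discretised landscape. Treating the free energy of the cell being entered as the edge weight is one common choice and an assumption here, not necessarily the study's cost function:

```python
import heapq

def min_energy_path(F, start, goal):
    """Dijkstra on a 2-D free energy landscape F (list of lists of
    cell energies). Edge weight = free energy of the cell being
    entered, so the cheapest path crosses barriers at their lowest
    point (the saddle) where it cannot avoid them."""
    rows, cols = len(F), len(F[0])
    dist = {start: F[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + F[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Two low-energy basins separated by a ridge with a saddle at (1, 1).
F = [[0, 9, 0],
     [0, 1, 0],
     [0, 9, 0]]
print(min_energy_path(F, (0, 0), (0, 2)))
```

On this toy landscape the cheapest route detours through the saddle rather than climbing the ridge directly, which is the qualitative behaviour the free energy pathway analysis exploits.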
  4. Abstract Background

    No versatile web app exists that allows epidemiologists and managers around the world to comprehensively analyze the impacts of COVID-19 mitigation. The http://covid-webapp.numerusinc.com/ web app presented here fills this gap.

    Methods

    Our web app uses a model that explicitly identifies susceptible, contact, latent, asymptomatic, symptomatic and recovered classes of individuals, and a parallel set of response classes, subject to lower pathogen-contact rates. The user inputs a CSV file of incidence and, if of interest, mortality rate data. A default set of parameters is available that can be overwritten through input or online entry, and a user-selected subset of these can be fitted to the model using maximum-likelihood estimation (MLE). Model fitting and forecasting intervals are specifiable and changes to parameters allow counterfactual and forecasting scenarios. Confidence or credible intervals can be generated using stochastic simulations, based on MLE values, or on an inputted CSV file containing Markov chain Monte Carlo (MCMC) estimates of one or more parameters.
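The maximum-likelihood fitting step can be illustrated with a toy Poisson model of daily incidence. The exponential-growth model, grid-search optimiser, and all numbers below are stand-ins for the web app's actual compartmental model and fitting machinery:

```python
import math

def poisson_loglik(rate_fn, counts):
    """Poisson log-likelihood of observed daily counts under a model
    rate_fn(t) giving the expected incidence on day t."""
    ll = 0.0
    for t, k in enumerate(counts):
        lam = rate_fn(t)
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

def fit_growth_rate(counts, i0, grid):
    """Toy MLE of an exponential growth rate r for incidence
    i(t) = i0 * exp(r * t), by grid search over candidate rates."""
    return max(grid, key=lambda r: poisson_loglik(
        lambda t, r=r: i0 * math.exp(r * t), counts))

# Synthetic incidence generated with r = 0.2, i0 = 10 (rounded means).
counts = [round(10 * math.exp(0.2 * t)) for t in range(15)]
grid = [i / 100 for i in range(1, 41)]
print(fit_growth_rate(counts, 10, grid))  # recovers a rate near 0.2
```

The same likelihood machinery extends to fitting several parameters of a compartmental model at once, which is where a proper optimiser (or the MCMC input the app accepts) replaces the grid search.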

    Results

    We illustrate the use of our web app in extracting social distancing, social relaxation, surveillance or virulence switching functions (i.e., time varying drivers) from the incidence and mortality rates of COVID-19 epidemics in Israel, South Africa, and England. The Israeli outbreak exhibits four distinct phases: initial outbreak, social distancing, social relaxation, and a second wave mitigation phase. An MCMC projection of this latter phase suggests the Israeli epidemic will continue to produce into late November an average of around 1500 new cases per day, unless the population practices social-relaxation measures at least 5-fold below the level in August, which itself is 4-fold below the level at the start of July. Our analysis of the relatively late South African outbreak that became the world’s fifth largest COVID-19 epidemic in July revealed that the decline through late July and early August was characterised by a social distancing driver operating at more than twice the per-capita applicable-disease-class (pc-adc) rate of the social relaxation driver. Our analysis of the relatively early English outbreak identified a more than 2-fold improvement in surveillance over the course of the epidemic. It also identified a pc-adc social distancing rate in early August that, though nearly four times the pc-adc social relaxation rate, appeared to barely contain a second wave that would break out if social distancing was further relaxed.

    Conclusion

    Our web app provides policy makers and health officers who have no epidemiological modelling or computer coding expertise with an invaluable tool for assessing the impacts of different outbreak mitigation policies and measures. This includes an ability to generate an epidemic-suppression or curve-flattening index that measures the intensity with which behavioural responses suppress or flatten the epidemic curve in the region under consideration.

  5. Abstract

    Numerical simulations of neutron star–neutron star and neutron star–black hole binaries play an important role in our ability to model gravitational-wave and electromagnetic signals powered by these systems. These simulations have to take into account a wide range of physical processes including general relativity, magnetohydrodynamics, and neutrino radiation transport. The latter is particularly important in order to understand the properties of the matter ejected by many mergers, the optical/infrared signals powered by nuclear reactions in the ejecta, and the contribution of that ejecta to astrophysical nucleosynthesis. However, accurate evolutions of the neutrino transport equations that include all relevant physical processes remain beyond our current reach. In this review, I will discuss the current state of neutrino modeling in general relativistic simulations of neutron star mergers and of their post-merger remnants. I will focus on the three main types of algorithms used in simulations so far: leakage, moments, and Monte-Carlo schemes. I will review the advantages and limitations of each scheme, as well as the various neutrino–matter interactions that should be included in simulations. We will see that the quality of the treatment of neutrinos in merger simulations has greatly increased over the last decade, but also that many potentially important interactions remain difficult to take into account in simulations (pair annihilation, oscillations, inelastic scattering).
