We present a unified framework for the data-driven construction of stochastic reduced models with state-dependent memory for high-dimensional Hamiltonian systems. The method addresses two key challenges: (i) accurately modeling heterogeneous non-Markovian effects where the memory function depends on the coarse-grained (CG) variables beyond the standard homogeneous kernel, and (ii) efficiently exploring the phase space to sample both equilibrium and dynamical observables for reduced model construction. Specifically, we employ a consensus-based sampling method to establish a shared sampling strategy that enables simultaneous construction of the free energy function and collection of conditional two-point correlation functions used to learn the state-dependent memory. The reduced dynamics is formulated as an extended Markovian system, where a set of auxiliary variables, interpreted as non-Markovian features, is jointly learned to systematically approximate the memory function using only two-point statistics. The constructed model yields a generalized Langevin-type formulation with an invariant distribution consistent with the full dynamics. We demonstrate the effectiveness of the proposed framework on a two-dimensional CG model of an alanine dipeptide molecule. Numerical results on the transition dynamics between metastable states show that accurately capturing state-dependent memory is essential for predicting non-equilibrium kinetic properties, whereas the standard generalized Langevin model with a homogeneous kernel exhibits significant discrepancies.
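As a point of reference, the construction described above can be pictured with the following schematic equations; the notation is illustrative only (the paper's exact parameterization of the state-dependent coupling and the learned auxiliary dynamics is not reproduced here). The CG variables (q, p) obey a generalized Langevin equation with a state-dependent kernel, and the memory is approximated by coupling p to auxiliary variables z (the "non-Markovian features") so that the joint dynamics becomes Markovian:

```latex
% Schematic state-dependent generalized Langevin equation for the CG
% variables (q, p); F is the free energy (potential of mean force).
\dot{q} = M^{-1} p, \qquad
\dot{p} = -\nabla F(q) - \int_0^{t} K\big(q(t), t-s\big)\, M^{-1} p(s)\,\mathrm{d}s + R(t)

% Extended Markovian approximation with auxiliary variables z; eliminating
% z yields a memory kernel of the schematic form
% \Sigma\big(q(t)\big)\, e^{-A (t-s)}\, \Sigma\big(q(s)\big)^{\mathsf T}.
\dot{q} = M^{-1} p, \qquad
\dot{p} = -\nabla F(q) + \Sigma(q)\, z, \qquad
\dot{z} = -\Sigma(q)^{\mathsf T} M^{-1} p - A z + B\, \dot{W}_t

% In units with k_B T = 1, choosing B B^{\mathsf T} = A + A^{\mathsf T} makes the
% extended system preserve the Gibbs-type invariant density
% \rho(q,p,z) \propto \exp\!\big(-F(q) - \tfrac{1}{2} p^{\mathsf T} M^{-1} p - \tfrac{1}{2} z^{\mathsf T} z\big),
% consistent with the statement that the invariant distribution matches the full dynamics.
```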
On the Generalization Ability of Coarse-Grained Molecular Dynamics Models for Nonequilibrium Processes
One essential goal of constructing coarse-grained molecular dynamics (CGMD) models is to accurately predict nonequilibrium processes beyond the atomistic scale. While a CG model can be constructed by projecting the full dynamics onto a set of resolved variables, the dynamics of the CG variables can recover the full dynamics only when the conditional distribution of the unresolved variables is close to the one associated with the particular projection operator. In particular, the model's applicability to various nonequilibrium processes is generally unwarranted due to the inconsistency in this conditional distribution. Here, we present a data-driven approach for constructing CGMD models that retain a certain generalization ability for nonequilibrium processes. Unlike conventional CG models based on preselected CG variables (e.g., the center of mass), the present CG model seeks a set of auxiliary CG variables, in a manner similar to time-lagged independent component analysis, that maximize the velocity correlation. This effectively minimizes the entropy contribution of the unresolved variables and ensures that their distribution under a broad range of nonequilibrium conditions remains close to the equilibrium one. Numerical results for a polymer melt system demonstrate the significance of this broadly overlooked metric for the model's generalization ability, and the effectiveness of the present CG model for predicting complex viscoelastic responses under various nonequilibrium flows.
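To make the "auxiliary CG variables that maximize the velocity correlation" idea concrete, here is a minimal TICA-style sketch that poses the construction as a generalized eigenvalue problem on the instantaneous and time-lagged velocity covariances. The function name, the choice of lag, and the finite-difference velocities in the usage example are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical TICA-like construction of CG projection vectors that maximize
# the time-lagged velocity correlation (illustrative sketch only).
import numpy as np
from scipy.linalg import eigh

def velocity_tica(velocities: np.ndarray, lag: int, n_cg: int) -> np.ndarray:
    """Return n_cg projection vectors maximizing the lag-time velocity correlation.

    velocities: array of shape (T, d), fine-grained velocity trajectory.
    """
    v = velocities - velocities.mean(axis=0)            # remove mean drift
    v0, vtau = v[:-lag], v[lag:]
    c0 = v0.T @ v0 / len(v0)                             # instantaneous covariance C(0)
    ctau = 0.5 * (v0.T @ vtau + vtau.T @ v0) / len(v0)   # symmetrized lagged covariance C(tau)
    # Generalized eigenproblem C(tau) w = lambda C(0) w; large lambda means
    # slowly decorrelating, strongly velocity-correlated collective modes.
    eigvals, eigvecs = eigh(ctau, c0)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_cg]]                      # columns are CG projection vectors

# Example usage with synthetic data:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    traj = np.cumsum(rng.standard_normal((5000, 12)), axis=0)  # fake trajectory
    vels = np.diff(traj, axis=0)                                # finite-difference velocities
    W = velocity_tica(vels, lag=10, n_cg=2)
    print(W.shape)   # (12, 2)
```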
- PAR ID: 10610691
- Publisher / Repository: SIAM
- Date Published:
- Journal Name: Multiscale Modeling & Simulation
- Volume: 23
- Issue: 2
- ISSN: 1540-3459
- Page Range / eLocation ID: 816 to 837
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
One important problem in constructing the reduced dynamics of molecular systems is the accurate modeling of the non-Markovian behavior arising from the dynamics of unresolved variables. The main complication emerges from the lack of scale separation, where the reduced dynamics generally exhibits pronounced memory and non-white-noise terms. We propose a data-driven approach to learn the reduced model of multi-dimensional resolved variables that faithfully retains the non-Markovian dynamics. Unlike common approaches based on the direct construction of the memory function, the present approach seeks a set of non-Markovian features that encode the history of the resolved variables and establishes a joint learning of the extended Markovian dynamics in terms of both the resolved variables and these features. The training is based on matching the evolution of the correlation functions of the extended variables, which can be obtained directly from those of the resolved variables. The constructed model essentially approximates the multi-dimensional generalized Langevin equation and ensures numerical stability without empirical treatment. We demonstrate the effectiveness of the method by constructing the reduced models of molecular systems in terms of both one-dimensional and four-dimensional resolved variables.
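A minimal sketch of the correlation-matching idea, under the simplifying assumption that the extended variables follow a linear Markovian model dX/dt = -A X + noise, so that the stationary correlations satisfy C(t) = exp(-A t) C(0). The fitting routine below is illustrative only; the joint learning of the non-Markovian features in the work above is more general.

```python
# Schematic fit of an extended Markovian model by matching two-point
# correlation functions (not the paper's code).
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def fit_drift_matrix(lags, corrs):
    """Fit A in C(t) = expm(-A t) @ C(0) to empirical correlation functions.

    lags:  array of lag times t_k
    corrs: array of shape (len(lags), n, n), empirical C(t_k); corrs[0] = C(0).
    """
    n = corrs.shape[1]
    c0 = corrs[0]

    def loss(theta):
        A = theta.reshape(n, n)
        # Frobenius mismatch between model and empirical correlations.
        return sum(np.sum((expm(-A * t) @ c0 - c) ** 2)
                   for t, c in zip(lags, corrs))

    res = minimize(loss, x0=np.eye(n).ravel(), method="L-BFGS-B")
    return res.x.reshape(n, n)
```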
-
We introduce a machine-learning-based coarse-grained molecular dynamics (CGMD) model that faithfully retains the many-body nature of the inter-molecular dissipative interactions. Unlike common empirical CG models, the present model is constructed based on the Mori-Zwanzig formalism and naturally inherits the heterogeneous, state-dependent memory term rather than matching mean-field metrics such as the velocity auto-correlation function. Numerical results show that preserving the many-body nature of the memory term is crucial for predicting the collective transport and diffusion processes, where empirical forms generally show limitations.
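Schematically, the many-body, state-dependent memory term discussed above enters the CG equations of motion in a Mori-Zwanzig-type form like the following; the notation is illustrative and the learned parameterization of the kernel is not shown:

```latex
% Generalized Langevin-type equation of motion for CG particle i, with a
% friction kernel K_ij that depends on the full CG configuration Q
% (many-body and state-dependent), rather than a homogeneous kernel fitted
% to the mean velocity auto-correlation function.
M_i \dot{\mathbf{V}}_i(t) = \mathbf{F}^{C}_i\big(\mathbf{Q}(t)\big)
  - \sum_{j} \int_0^{t} \mathbf{K}_{ij}\big(\mathbf{Q}(s),\, t-s\big)\, \mathbf{V}_j(s)\,\mathrm{d}s
  + \mathbf{R}_i(t)
```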
-
Modern generative models exhibit unprecedented capabilities to generate extremely realistic data. However, given the inherent compositionality of the real world, reliable use of these models in practical applications requires that they exhibit the ability to compose their capabilities, generating and reasoning over entirely novel samples never seen in the training distribution. Prior work demonstrates that recent vision diffusion models exhibit intriguing compositional generalization abilities, but also that they fail rather unpredictably. What are the reasons underlying this behavior? Which concepts does the model generally find difficult to compose to form novel data? To address these questions, we perform a controlled study of compositional generalization in conditional diffusion models in a synthetic setting, varying different attributes of the training data and measuring the model's ability to generate samples out-of-distribution. Our results show that: (i) the compositional structure of the data-generating process governs the order in which capabilities, and the ability to compose them, emerge; (ii) learning individual concepts impacts performance on compositional tasks multiplicatively, explaining their sudden emergence; and (iii) learning and composing capabilities is difficult under correlations. We hope our study inspires further grounded research on understanding capabilities and compositionality in generative models from a data-centric perspective.
-
Coarse-grained molecular dynamics (CGMD) simulations address lengthscales and timescales that are critical to many chemical and material applications. Nevertheless, contemporary CGMD modeling is relatively bespoke, and no black-box CGMD methodology is available that could play a role in discovery applications comparable to the one density functional theory plays for electronic structure. This gap might be filled by machine learning (ML)-based CGMD potentials that simplify model development, but these methods are still in their early stages and have yet to demonstrate a significant advantage over existing physics-based CGMD methods. Here, we explore the potential of Δ-learning models to leverage the advantages of these two approaches. This is implemented by using ML-based potentials to learn the difference between the target CGMD variable and the predictions of physics-based potentials, as sketched below. The Δ-models are benchmarked against the baseline models in reproducing on-target and off-target atomistic properties as a function of CG resolution, mapping operator, and system topology. The Δ-models outperform the reference ML-only CGMD models in nearly all scenarios. In several cases, the ML-only models manage to minimize training errors while still producing qualitatively incorrect dynamics, which is corrected by the Δ-models. Given their negligible added cost, Δ-models provide essentially free gains over their ML-only counterparts. Nevertheless, an unexpected finding is that neither the Δ-learning models nor the ML-only models significantly outperform the elementary pairwise models in reproducing atomistic properties. This fundamental failure is attributed to the relatively large irreducible force errors associated with coarse-graining, which leave little benefit to be gained from more complex potentials.
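A minimal sketch of the Δ-learning setup referenced above: an ML regressor is trained on the residual between the target CG quantities and a physics-based baseline, and predictions add the learned correction back onto the baseline. The feature representation, the regressor choice, and the assumption that forces are the learned target are placeholders, not the authors' implementation.

```python
# Hypothetical Delta-learning sketch: learn only the residual between the
# target CG forces and the forces from a physics-based baseline potential.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class DeltaForceModel:
    """Physics-based baseline plus an ML correction trained on the residual."""

    def __init__(self):
        # RandomForestRegressor handles multi-output targets (force components).
        self.residual_model = RandomForestRegressor(n_estimators=200)

    def fit(self, features, target_forces, baseline_forces):
        # The ML model never sees the absolute forces, only the difference
        # between the target and the physics-based prediction.
        self.residual_model.fit(features, target_forces - baseline_forces)
        return self

    def predict(self, features, baseline_forces):
        # Prediction = physics-based baseline + learned correction.
        return baseline_forces + self.residual_model.predict(features)
```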