Small integration time steps limit molecular dynamics (MD) simulations to millisecond time scales. Markov state models (MSMs) and equation-free approaches learn low-dimensional kinetic models from MD simulation data by performing configurational or dynamical coarse-graining of the state space. The learned kinetic models enable the efficient generation of dynamical trajectories over vastly longer time scales than are accessible by MD, but the discretization of configurational space and/or absence of a means to reconstruct molecular configurations precludes the generation of continuous all-atom molecular trajectories. We propose latent space simulators (LSS) to learn kinetic models for continuous all-atom simulation trajectories by training three deep learning networks to (i) learn the slow collective variables of the molecular system, (ii) propagate the system dynamics within this slow latent space, and (iii) generatively reconstruct molecular configurations. We demonstrate the approach in an application to Trp-cage miniprotein to produce novel ultra-long synthetic folding trajectories that accurately reproduce all-atom molecular structure, thermodynamics, and kinetics at six orders of magnitude lower cost than MD. The dramatically lower cost of trajectory generation enables greatly improved sampling and greatly reduced statistical uncertainties in estimated thermodynamic averages and kinetic rates.
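To make the three-network decomposition concrete, here is a minimal, hedged sketch of the LSS idea in PyTorch. The module names, layer sizes, and the simple Gaussian-noise propagator are illustrative placeholders rather than the architectures used in the paper; the point is only the pipeline of encode, propagate in the slow latent space, and decode back to all-atom coordinates.

```python
# Minimal sketch of the three-network latent space simulator (LSS) idea.
# Module names, layer sizes, and the Gaussian-noise propagator are illustrative
# placeholders, not the architectures used in the paper.
import torch
import torch.nn as nn

class Encoder(nn.Module):          # (i) map configurations to slow collective variables
    def __init__(self, n_coords, n_cv):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_coords, 128), nn.ReLU(), nn.Linear(128, n_cv))
    def forward(self, x):
        return self.net(x)

class Propagator(nn.Module):       # (ii) advance the latent state by one lag time
    def __init__(self, n_cv):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_cv, 64), nn.ReLU(), nn.Linear(64, n_cv))
    def forward(self, z, noise_scale=0.1):
        # stochastic step: deterministic drift plus Gaussian noise (illustrative)
        return self.net(z) + noise_scale * torch.randn_like(z)

class Decoder(nn.Module):          # (iii) generatively reconstruct configurations
    def __init__(self, n_cv, n_coords):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_cv, 128), nn.ReLU(), nn.Linear(128, n_coords))
    def forward(self, z):
        return self.net(z)

def synthetic_trajectory(enc, prop, dec, x0, n_steps):
    """Roll out a long synthetic trajectory from a single seed configuration."""
    frames = []
    with torch.no_grad():
        z = enc(x0)
        for _ in range(n_steps):
            z = prop(z)
            frames.append(dec(z))
    return torch.stack(frames)

# usage: 10 atoms (30 Cartesian coordinates), 2 slow CVs, 1000 synthetic steps
enc, prop, dec = Encoder(30, 2), Propagator(2), Decoder(2, 30)
traj = synthetic_trajectory(enc, prop, dec, torch.randn(1, 30), 1000)
print(traj.shape)  # torch.Size([1000, 1, 30])
```

Because each synthetic step costs only a few small matrix multiplications, rolling out very long trajectories in this way is cheap relative to integrating the underlying MD equations of motion, which is the source of the cost savings claimed above.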
Building insightful, memory-enriched models to capture long-time biochemical processes from short-time simulations
The ability to predict and understand complex molecular motions occurring over diverse timescales ranging from picoseconds to seconds and even hours in biological systems remains one of the largest challenges to chemical theory. Markov state models (MSMs), which offer a memoryless description of the transitions between different states of a biochemical system, have provided numerous important physically transparent insights into biological function. However, constructing these models often necessitates performing extremely long molecular simulations to converge the rates. Here, we show that by incorporating memory via the time-convolutionless generalized master equation (TCL-GME) one can build a theoretically transparent and physically intuitive memory-enriched model of biochemical processes with up to a three-order-of-magnitude reduction in the required simulation data while also providing higher temporal resolution. We derive the conditions under which the TCL-GME provides a more efficient means to capture slow dynamics than MSMs and rigorously prove when the two provide equally valid and efficient descriptions of the slow configurational dynamics. We further introduce a simple averaging procedure that enables our TCL-GME approach to quickly converge and accurately predict long-time dynamics even when parameterized with noisy reference data arising from short trajectories. We illustrate the advantages of the TCL-GME using alanine dipeptide, the human argonaute complex, and the FiP35 WW domain.
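As a rough illustration of how a time-local (time-convolutionless) generator can be extracted from short reference data and then used to reach long times, the toy sketch below estimates R(τ) from state correlation matrices of a synthetic three-state trajectory, averages it over an assumed plateau window, and propagates the dynamics far beyond the fitted lag times. The three-state model, the finite-difference estimator, and the averaging window are illustrative assumptions, not the construction or the systems used in the paper.

```python
# Toy numerical sketch of the TCL-GME idea: estimate a time-local generator R(tau)
# from short-time state correlation matrices and use its plateau average to
# propagate the dynamics to long times.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# toy "reference dynamics": a 3-state Markov chain (stands in for short MD data)
T = np.array([[0.980, 0.015, 0.005],
              [0.010, 0.985, 0.005],
              [0.005, 0.010, 0.985]])
n_states, n_steps = 3, 100_000
traj = np.empty(n_steps, dtype=int)
traj[0] = 0
for t in range(1, n_steps):
    traj[t] = rng.choice(n_states, p=T[traj[t - 1]])

# short-time correlation matrices C(tau)_ij = <1_i(0) 1_j(tau)>, row-normalized
ind = np.eye(n_states)[traj]                      # one-hot state indicators
max_lag = 50
C = np.array([ind[:n_steps - max_lag].T @ ind[tau:n_steps - max_lag + tau]
              for tau in range(max_lag + 1)])
C /= C[0].sum(axis=1, keepdims=True)

# time-local generator from dC/dtau = C(tau) R(tau)  ->  R(tau) = C(tau)^{-1} dC/dtau
dC = np.gradient(C, axis=0)
R = np.array([np.linalg.solve(C[tau], dC[tau]) for tau in range(1, max_lag)])
R_avg = R[20:40].mean(axis=0)                     # average over an assumed plateau window

# propagate to times far beyond the short reference data
C_long = C[1] @ expm(R_avg * (5000 - 1))          # predicted C(5000) from ~50-lag data
print(np.round(C_long, 3))
```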
- Award ID(s): 2154291
- NSF-PAR ID: 10409438
- Date Published:
- Journal Name: Proceedings of the National Academy of Sciences
- Volume: 120
- Issue: 12
- ISSN: 0027-8424
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Lee, Jonghyun; Darve, Eric F.; Kitanidis, Peter K.; Mahoney, Michael W.; Karpatne, Anuj; Farthing, Matthew W.; Hesser, Tyler (Eds.) Modern design, control, and optimization often require multiple expensive simulations of highly nonlinear stiff models. These costs can be amortized by training a cheap surrogate of the full model, which can then be used repeatedly. Here we present a general data-driven method, the continuous time echo state network (CTESN), for generating surrogates of nonlinear ordinary differential equations with dynamics at widely separated timescales. We empirically demonstrate the ability to accelerate a physically motivated scalable model of a heating system by 98x while maintaining a relative error within 0.2%. We showcase the ability of this surrogate to accurately handle highly stiff systems that have been shown to cause training failures with common surrogate methods such as Physics-Informed Neural Networks (PINNs), Long Short Term Memory (LSTM) networks, and discrete echo state networks (ESNs). We show that our model captures fast transients as well as slow dynamics, and demonstrate that fixed-time-step machine learning techniques are unable to adequately capture the multi-rate behavior. Together this provides compelling evidence for the ability of CTESN surrogates to predict and accelerate highly stiff dynamical systems that cannot be directly handled by previous scientific machine learning techniques.
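A minimal sketch of the continuous-time echo state idea described above: a fixed random reservoir ODE is driven by the reference trajectory of a toy two-timescale system, and only a linear readout is fit by ridge regression. The reservoir size, spectral scaling, toy model, and regularization are illustrative assumptions, not the CTESN configuration used in the work.

```python
# Sketch of a continuous-time echo state network (CTESN) surrogate on a toy
# two-timescale system.  All numerical choices here are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

rng = np.random.default_rng(1)

# toy full model with fast and slow components (stands in for the stiff system)
def full_model(t, y):
    return [-1000.0 * (y[0] - np.cos(t)), -0.5 * y[1]]

t_span, t_eval = (0.0, 10.0), np.linspace(0.0, 10.0, 2000)
ref = solve_ivp(full_model, t_span, [0.0, 1.0], t_eval=t_eval, method="LSODA", rtol=1e-8)
x = interp1d(ref.t, ref.y, kind="cubic")          # reference trajectory as a function of t

# fixed random reservoir driven by the reference signal: r' = tanh(A r + W_in x(t))
N, n_in = 200, 2
A = rng.normal(size=(N, N)) / np.sqrt(N) * 0.9    # scaled random connectivity
W_in = rng.normal(size=(N, n_in))

def reservoir(t, r):
    return np.tanh(A @ r + W_in @ x(t))

res = solve_ivp(reservoir, t_span, np.zeros(N), t_eval=t_eval, rtol=1e-6)
R = res.y.T                                        # reservoir states, shape (n_times, N)

# only the linear readout is trained, here by ridge regression
lam = 1e-6
W_out = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ ref.y.T)
pred = R @ W_out
print("max relative error:", np.max(np.abs(pred - ref.y.T)) / np.max(np.abs(ref.y)))
```

The adaptive ODE solver handles the widely separated rates when driving the reservoir, which is the qualitative distinction from fixed-time-step surrogates noted in the abstract.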
Neural learning rules for generating flexible predictions and computing the successor representation
Memories are an important part of how we think, understand the world around us, and plan out future actions. In the brain, memories are thought to be stored in a region called the hippocampus. When memories are formed, neurons store events that occur around the same time together. This might explain why, in the brains of animals, the activity associated with retrieving memories is often not just a snapshot of what happened at a specific moment: it can also include information about what the animal might experience next. This can have a clear utility if animals use memories to predict what they might experience next and plan out future actions. Mathematically, this notion of predictiveness can be summarized by an algorithm known as the successor representation. This algorithm describes what the activity of neurons in the hippocampus looks like when retrieving memories and making predictions based on them. However, even though the successor representation can computationally reproduce the activity seen in the hippocampus when it is making predictions, it is unclear what biological mechanisms underpin this computation in the brain. Fang et al. approached this problem by trying to build a model that could generate the same activity patterns computed by the successor representation using only biological mechanisms known to exist in the hippocampus. First, they used computational methods to design a network of neurons that had the biological properties of neural networks in the hippocampus. They then used the network to simulate neural activity. The results show that the activity of the network they designed was able to exactly match the successor representation. Additionally, the data resulting from the simulated activity in the network fitted experimental observations of hippocampal activity in tufted titmice. One advantage of the network designed by Fang et al. is that it can generate predictions in flexible ways. That is, it can make both short- and long-term predictions from what an individual is experiencing at the moment. This flexibility means that the network can be used to simulate how the hippocampus learns in a variety of cognitive tasks. Additionally, the network is robust to different conditions. Given that the brain has to be able to store memories in many different situations, this is a promising indication that this network may be a reasonable model of how the brain learns. The results of Fang et al. lay the groundwork for connecting biological mechanisms in the hippocampus at the cellular level to cognitive effects, an essential step to understanding the hippocampus, as well as its role in health and disease. For instance, their network may provide a concrete approach to studying how disruptions to the ways neurons make and break connections can impair memory formation. More generally, better models of the biological mechanisms involved in making computations in the hippocampus can help scientists better understand and test out theories about how memories are formed and stored in the brain.
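For readers unfamiliar with the successor representation mentioned above, the short sketch below compares its closed-form definition, M = (I - γT)^{-1}, with the incremental TD-style update that a learning system could apply online. The four-state random-walk environment, discount factor, and learning rate are illustrative choices, not the model of Fang et al.

```python
# Successor representation (SR): closed-form solution versus a TD-style update.
import numpy as np

rng = np.random.default_rng(2)

# random-walk transition matrix over 4 states
T = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.50, 0.50]])
gamma = 0.9

# closed form: M = sum_k gamma^k T^k = (I - gamma T)^{-1}
M_exact = np.linalg.inv(np.eye(4) - gamma * T)

# TD(0)-style learning of M from sampled transitions
M = np.zeros((4, 4))
alpha, s = 0.02, 0
for _ in range(300_000):
    s_next = rng.choice(4, p=T[s])
    onehot = np.eye(4)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])   # SR TD update
    s = s_next

# discrepancy shrinks with smaller alpha and more samples
print(np.max(np.abs(M - M_exact)))
```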
Optimization of mixing in microfluidic devices is a popular application of computational fluid dynamics software packages, such as COMSOL Multiphysics, with an increasing number of studies being published on the topic. On one hand, the laminar nature of the flow and lack of turbulence in this type of device can enable very accurate numerical modeling of the fluid motion and reactant/particle distribution, even in complex channel geometries. On the other hand, the same laminar nature of the flow makes mixing, which is fundamental to the functionality of any microfluidic reactor or assay system, hard to achieve, as it forces reliance on slow molecular diffusion rather than on turbulence. This in turn forces designers of microfluidic systems to develop a broad set of strategies to enable mixing on the microscale, targeted to the specific applications of interest. In this context, numerical modeling can enable efficient exploration of a large set of parameters affecting mixing, such as geometrical characteristics and flow rates, to identify optimal designs. However, it has to be noted that even very performant mixing topologies, such as the use of groove-ridge surface features, require multiple mixing units. This in turn requires very high-resolution meshing, in particular when solving the convection-diffusion equation governing the reactant or chemical species distribution. For the typical length of microfluidic mixing channels analyzed using finite element analysis, this becomes computationally challenging due to the large number of elements that need to be handled. In this work we describe a methodology using the COMSOL Computational Fluid Dynamics and Chemical Reaction Engineering modules in which large geometries are split into subunits. The Navier-Stokes and convection-diffusion equations are then solved in each subunit separately, with the solutions obtained being transferred between them to map the flow field and concentration through the entire geometry of the channel. As validation, the model is tested against data from mixers using periodic systems of groove-ridge features to engineer transversal mixing flows, showing a high degree of correlation with the experimental results. It is also shown that the methodology can be extended to long mixing channels that lack periodicity and in which each geometrical mixing subunit is distinct.
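The subunit-splitting strategy can be illustrated with a much simpler stand-in for the COMSOL workflow: a plug-flow convection-diffusion model in which each channel subunit is solved by marching downstream and its outlet concentration profile is handed to the next subunit as an inlet condition. The geometry, velocity, diffusivity, and the mixing metric below are nondimensional illustrative choices, not the paper's models or meshes, which solve the full Navier-Stokes and convection-diffusion equations.

```python
# Subunit-by-subunit solution of a simple plug-flow convection-diffusion channel:
# the outlet profile of one subunit becomes the inlet of the next.
import numpy as np

ny, width = 51, 1.0                 # transverse grid points, channel width
dy = width / (ny - 1)
u, D = 1.0, 0.05                    # plug-flow velocity, diffusivity (nondimensional)
dx = 0.4 * u * dy**2 / (2 * D)      # explicit-march step within the stability limit
subunit_length, n_subunits = 1.0, 3
steps_per_subunit = int(subunit_length / dx)

def solve_subunit(c_inlet):
    """March dc/dx = (D/u) d2c/dy2 through one subunit; return the outlet profile."""
    c = c_inlet.copy()
    for _ in range(steps_per_subunit):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dy**2
        lap[0] = 2 * (c[1] - c[0]) / dy**2        # no-flux walls (mirror ghost node)
        lap[-1] = 2 * (c[-2] - c[-1]) / dy**2
        c = c + dx * (D / u) * lap
    return c

# inlet: two unmixed streams side by side (c = 1 on one half, 0 on the other)
c = np.where(np.linspace(0, width, ny) < width / 2, 1.0, 0.0)
for k in range(n_subunits):
    c = solve_subunit(c)            # outlet of subunit k is the inlet of subunit k+1
    # mixing indicator: standard deviation of the profile (0 means fully mixed)
    print(f"after subunit {k + 1}: std = {c.std():.3f}")
```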
Atmospheric regime transitions are highly impactful as drivers of extreme weather events, but pose two formidable modeling challenges: predicting the next event (weather forecasting) and characterizing the statistics of events of a given severity (the risk climatology). Each event has a different duration and spatial structure, making it hard to define an objective “average event.” We argue here that transition path theory (TPT), a stochastic process framework, is an appropriate tool for the task. We demonstrate TPT’s capacities on a wave–mean flow model of sudden stratospheric warmings (SSWs) developed by Holton and Mass, which is idealized enough for transparent TPT analysis but complex enough to demonstrate computational scalability. Whereas a recent article (Finkel et al. 2021) studied near-term SSW predictability, the present article uses TPT to link predictability to long-term SSW frequency. This requires not only forecasting forward in time from an initial condition, but also backward in time to assess the probability of the initial conditions themselves. TPT enables one to condition the dynamics on the regime transition occurring, and thus to visualize its physical drivers with a vector field called the reactive current. The reactive current shows that before an SSW, dissipation and stochastic forcing drive a slow decay of vortex strength at lower altitudes. The response of upper-level winds is late and sudden, occurring only after the transition is almost complete from a probabilistic point of view. This case study demonstrates that TPT quantities, visualized in a space of physically meaningful variables, can help one understand the dynamics of regime transitions.
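The two TPT quantities named above, the committor and the reactive current, are easy to illustrate on a small discrete Markov chain: the sketch below solves the committor equation as a linear system and assembles the reactive current between states. The five-state chain and the choice of sets A and B are illustrative, not the Holton and Mass model analyzed in the article.

```python
# Core TPT quantities on a small discrete Markov chain: forward committor q+
# (probability of reaching B before A) and the reactive current between states.
import numpy as np

# nearest-neighbour chain over states 0..4 with A = {0}, B = {4}
P = np.array([[0.80, 0.20, 0.00, 0.00, 0.00],
              [0.30, 0.50, 0.20, 0.00, 0.00],
              [0.00, 0.30, 0.50, 0.20, 0.00],
              [0.00, 0.00, 0.30, 0.50, 0.20],
              [0.00, 0.00, 0.00, 0.20, 0.80]])
A, B = [0], [4]
interior = [s for s in range(5) if s not in A + B]

# forward committor: q = 0 on A, 1 on B, and (I - P_II) q_I = P_IB on the interior
q = np.zeros(5)
q[B] = 1.0
M = np.eye(len(interior)) - P[np.ix_(interior, interior)]
b = P[np.ix_(interior, B)].sum(axis=1)
q[interior] = np.linalg.solve(M, b)

# stationary distribution: left eigenvector of P with eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# reactive current f_ij = pi_i q-_i P_ij q+_j; this nearest-neighbour chain
# satisfies detailed balance, so the backward committor is q- = 1 - q+
F = pi[:, None] * (1 - q)[:, None] * P * q[None, :]
np.fill_diagonal(F, 0.0)
print("committor:", np.round(q, 3))
print("total reactive A->B flux:", F[np.ix_(A, interior + B)].sum().round(4))
```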