Title: Learning molecular dynamics: predicting the dynamics of glasses by a machine learning simulator
A graph-based machine learning model is built to predict atom dynamics from static structure, which in turn reveals the predictive power of static structure in the dynamical evolution of disordered phases (a loose illustrative sketch follows the record details below).
Award ID(s):
1928538 1922167 1944510
PAR ID:
10477479
Author(s) / Creator(s):
Publisher / Repository:
Royal Society of Chemistry
Date Published:
Journal Name:
Materials Horizons
Volume:
10
Issue:
9
ISSN:
2051-6347
Page Range / eLocation ID:
3416 to 3428
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
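As a loose illustration of the paper's theme, and not its actual architecture, the following minimal sketch maps a static atomic configuration to a per-atom dynamical propensity with one hand-rolled message-passing step; the positions, cutoff, features, and readout weights are all hypothetical placeholders.

```python
# A minimal sketch, NOT the paper's architecture: map a static atomic
# configuration to a per-atom dynamical propensity via one hand-rolled
# message-passing step. Positions, cutoff, features, and weights are
# hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_atoms, cutoff = 32, 1.5

positions = rng.uniform(0.0, 4.0, size=(n_atoms, 3))  # static structure

# Build the graph: connect atoms closer than the cutoff distance.
diff = positions[:, None, :] - positions[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
adj = (dist < cutoff) & ~np.eye(n_atoms, dtype=bool)

# One message-passing step: each atom aggregates simple neighbor statistics.
degree = adj.sum(axis=1)
mean_nbr_dist = np.where(adj, dist, 0.0).sum(axis=1) / np.maximum(degree, 1)
features = np.stack([degree, mean_nbr_dist], axis=1)

# Linear readout standing in for a trained network head.
w = rng.normal(size=features.shape[1])
propensity = features @ w  # predicted per-atom mobility score
print(propensity[:5])
```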
More Like this
  1. Abstract Population structure is known to substantially affect evolutionary dynamics. Networks that promote the spreading of fitter mutants are called amplifiers of selection, and those that suppress it are called suppressors of selection. Research over the past two decades has identified various families of amplifiers, while suppressors remain comparatively elusive. It has also been discovered that most networks are amplifiers of selection under birth-death updating combined with uniform initialization, a standard condition widely assumed in the literature. In the present study, we extend birth-death processes to temporal (i.e., time-varying) networks. For tractability, we restrict ourselves to switching temporal networks, in which the network structure alternates between two static networks, either deterministically at constant time intervals or stochastically in a Markovian manner. We show that, in a majority of cases, switching networks are less amplifying than both of the two static networks that constitute them. Furthermore, most small switching networks, i.e., networks on six nodes or fewer, are suppressors, in contrast to the case of static networks. (A toy simulation of this process appears after this list.)
  2. Self-assembly of amphiphilic small molecules in water leads to nanostructures with customizable structure–property relationships arising from their tunable chemistries. Characterization of these assemblies is generally limited to their static structures (e.g., their geometries and dimensions), but tools that provide a deeper understanding of molecular motions have recently emerged. Here, we summarize recent reports showcasing dynamics characterization tools and their application to small-molecule assemblies, and we highlight supramolecular systems whose properties are substantially affected by their conformational, exchange, and water dynamics. This review illustrates the importance of considering dynamics in rational amphiphile design.
  3. Most of the literature on learning in games has focused on the restrictive setting in which the underlying repeated game does not change over time. Much less is known about the convergence of no-regret learning algorithms in dynamic multiagent settings. In this paper, we characterize the convergence of optimistic gradient descent (OGD) in time-varying games. Our framework yields sharp convergence bounds for the equilibrium gap of OGD in zero-sum games, parameterized by natural variation measures of the sequence of games, subsuming known results for static games. Furthermore, we establish improved second-order variation bounds under strong convexity-concavity, as long as each game is repeated multiple times. Our results also extend to time-varying general-sum multi-player games via a bilinear formulation of correlated equilibria, which has novel implications for meta-learning and for obtaining refined variation-dependent regret bounds, addressing questions left open in prior papers. Finally, we leverage our framework to provide new insights on dynamic regret guarantees in static games. (A toy OGD update appears after this list.)
  4. Granular-microstructured rods show a strong dependence of their mechanical behavior on grain-scale interactions, and their proper description therefore requires theories beyond classical continuum mechanics. Recently, the authors derived a micromorphic continuum theory of degree n based upon the granular micromechanics approach (GMA). Here, the GMA is further specialized to a one-dimensional material with granular microstructure that can be described as a micromorphic medium of degree 1. To this end, the constitutive relationships, governing equations of motion, and variationally consistent boundary conditions are derived. Furthermore, the static and dynamic length scales are linked to the second-gradient stiffness and the micro-scale mass density distribution, respectively. The behavior of a one-dimensional granular structure is studied for different boundary conditions in both static and dynamic problems. The effects of material constants and of size on the response are also investigated through parametric studies. In the static problem, the size dependency of the system appears in the width of the emergent boundary layers for certain imposed boundary conditions. In the dynamic problem, microstructural effects are always present and manifest as deviations of the system's natural frequencies from their classical counterparts. (A toy dispersion comparison appears after this list.)
  5. Abstract Representations of the world environment play a crucial role in artificial intelligence. It is often inefficient to conduct reasoning and inference directly in the space of raw sensory representations, such as the pixel values of images. Representation learning allows us to automatically discover suitable representations from raw sensory data. For example, given raw sensory data, a deep neural network learns nonlinear representations at its hidden layers, which are subsequently used for classification (or regression) at its output layer. This happens implicitly during training, through minimizing a supervised or unsupervised loss. In this letter, we study the dynamics of such implicit nonlinear representation learning. We identify a new assumption and a novel condition, called the on-model structure assumption and the data-architecture alignment condition. Under the on-model structure assumption, the data-architecture alignment condition is shown to be sufficient for global convergence and necessary for global optimality. Moreover, our theory explains how and when increasing network size does and does not improve training behavior in the practical regime. Our results provide practical guidance for designing a model structure; for example, the on-model structure assumption can be used to justify choosing a particular model structure over others. As an application, we derive a new training framework that satisfies the data-architecture alignment condition without assuming it, by automatically modifying any given training algorithm dependent on the data and architecture. Given a standard training algorithm, the framework running its modified version is empirically shown to maintain competitive test performance while providing global convergence guarantees for deep residual neural networks with convolutions, skip connections, and batch normalization on standard benchmark data sets, including MNIST, CIFAR-10, CIFAR-100, Semeion, KMNIST, and SVHN. (A toy illustration of hidden-layer representation learning appears after this list.)
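For item 1, here is a minimal sketch, under assumed toy parameters rather than the paper's setup, of birth-death updating on a switching network whose structure alternates deterministically between two six-node graphs every tau steps; the fixation probability of a single mutant with relative fitness r is estimated by Monte Carlo. The graphs, fitness value, and switching period are illustrative placeholders.

```python
# A minimal sketch of birth-death updating on a switching network:
# the structure alternates deterministically between two static graphs
# every `tau` steps. Graphs, fitness r, and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def fixation_probability(adjs, r=1.1, tau=5, trials=1000):
    n = adjs[0].shape[0]
    fixed = 0
    for _ in range(trials):
        state = np.zeros(n, dtype=int)
        state[rng.integers(n)] = 1                  # uniform initialization
        t = 0
        while 0 < state.sum() < n:
            adj = adjs[(t // tau) % 2]              # switching structure
            fitness = np.where(state == 1, r, 1.0)
            i = rng.choice(n, p=fitness / fitness.sum())  # birth ~ fitness
            neighbors = np.flatnonzero(adj[i])
            state[rng.choice(neighbors)] = state[i]  # death: uniform neighbor
            t += 1
        fixed += int(state.sum() == n)
    return fixed / trials

# Two toy six-node graphs: a cycle and a star (both connected).
n = 6
cycle = np.zeros((n, n), dtype=int)
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[(i + 1) % n, i] = 1
star = np.zeros((n, n), dtype=int)
star[0, 1:] = star[1:, 0] = 1

print(fixation_probability([cycle, star]))
```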
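For item 3, the following sketch runs optimistic gradient descent/ascent on an unconstrained, slowly drifting bilinear zero-sum game; the dimension, drift, and step size are assumptions for illustration, and the paper's setting and variation measures are considerably richer.

```python
# A minimal sketch of optimistic gradient descent/ascent on a
# time-varying unconstrained bilinear zero-sum game x^T A_t y.
# The payoff drift, dimension, and step size are illustrative.
import numpy as np

rng = np.random.default_rng(2)
d, T, eta = 4, 500, 0.05

A = rng.normal(size=(d, d))
x, y = rng.normal(size=d), rng.normal(size=d)
gx_prev, gy_prev = np.zeros(d), np.zeros(d)

for t in range(T):
    A += 0.01 * rng.normal(size=(d, d))   # slowly varying game
    gx, gy = A @ y, A.T @ x               # gradients of x^T A y
    x -= eta * (2 * gx - gx_prev)         # optimistic step, minimizing player
    y += eta * (2 * gy - gy_prev)         # optimistic step, maximizing player
    gx_prev, gy_prev = gx, gy

# Gradient norms at the final game serve as a crude equilibrium-gap proxy.
print(np.linalg.norm(A @ y), np.linalg.norm(A.T @ x))
```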
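For item 4, the following sketch contrasts the classical dispersion relation of a 1D rod with a generic gradient-elasticity form carrying a static and a dynamic length scale; this generic form is an assumption chosen for illustration and is not the paper's derived relation.

```python
# A minimal sketch, assuming a generic gradient-elasticity dispersion
# (NOT the paper's derived relation): contrast a classical 1D rod,
# omega = c k, with a form carrying a static length scale l_s
# (second-gradient stiffness) and a dynamic one l_d (micro-inertia).
import numpy as np

c, l_s, l_d = 1.0, 0.3, 0.5          # illustrative wave speed and lengths
k = np.linspace(0.01, 10.0, 200)     # wavenumbers

omega_classical = c * k
omega_micro = c * k * np.sqrt((1 + (l_s * k) ** 2) / (1 + (l_d * k) ** 2))

# The ratio drifts from 1 as k grows: microstructural effects shift the
# natural frequencies away from their classical counterparts at short
# wavelengths.
print((omega_micro / omega_classical)[[0, 99, 199]])
```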
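For item 5, here is a minimal sketch of the implicit representation learning the abstract opens with, not the letter's framework or alignment condition: a two-layer network trained on XOR, whose hidden layer learns a nonlinear representation that a linear output layer then uses. The architecture and hyperparameters are illustrative only.

```python
# A minimal sketch of implicit representation learning (illustrative
# hyperparameters, not the letter's framework): a two-layer network
# trained on XOR, whose hidden layer learns a nonlinear representation
# that a linear output layer then uses.
import numpy as np

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)              # learned hidden representation
    pred = h @ w2 + b2                    # linear readout
    err = pred - y                        # gradient of 0.5 * squared loss
    grad_h = np.outer(err, w2) * (1 - h ** 2)
    W1 -= 0.1 * X.T @ grad_h
    b1 -= 0.1 * grad_h.sum(axis=0)
    w2 -= 0.1 * h.T @ err
    b2 -= 0.1 * err.sum()

print(np.round(pred, 2))  # should approximate [0, 1, 1, 0] after training
```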