Title: Exploring the necessary complexity of interatomic potentials.
The application of machine learning models and algorithms to describing atomic interactions has been a major area of interest in materials simulations in recent years, as machine learning interatomic potentials (MLIPs) are seen as more flexible and accurate than their classical counterparts. This increase in accuracy over classical potentials has come at the cost of significantly greater complexity, leading to higher computational costs and lower physical interpretability and spurring research into improving the speed and interpretability of MLIPs. As an alternative, in this work we leverage “machine learning” fitting databases and advanced optimization algorithms to fit a class of spline-based classical potentials, showing that they can be systematically improved to achieve accuracies comparable to those of low-complexity MLIPs. These results demonstrate that high model complexity may not be strictly necessary to achieve near-DFT accuracy in interatomic potentials, and they suggest an alternative route toward sampling the high-accuracy, low-complexity region of model space: starting with forms that promote simpler and more interpretable interatomic potentials.
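As a hedged illustration of the functional form behind such potentials, the sketch below builds a pair potential from a natural cubic spline through a handful of knots. The knot positions and values are hypothetical (chosen only to give a roughly Lennard-Jones-like shape), not the fitted parameters from the paper; in the actual workflow the knot values are the quantities tuned by the optimization algorithm against the fitting database.

```python
# Minimal sketch (assumption: illustrative knots, not the paper's fitted values):
# a spline-based classical pair potential, with the total energy of a
# configuration given by a sum of the spline over atom pairs.

def natural_cubic_spline(xs, ys):
    """Return a callable evaluating the natural cubic spline through (xs, ys)."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the second derivatives m[i]
    # (natural boundary conditions: m[0] = m[n] = 0).
    a = [0.0] * (n + 1)
    b = [1.0] * (n + 1)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def spline(x):
        # Locate the interval containing x (clamped to the knot range).
        i = max(0, min(n - 1, next((j for j in range(n) if x < xs[j + 1]), n - 1)))
        t = x - xs[i]
        return (ys[i]
                + t * ((ys[i + 1] - ys[i]) / h[i] - h[i] * (2 * m[i] + m[i + 1]) / 6.0)
                + t * t * m[i] / 2.0
                + t * t * t * (m[i + 1] - m[i]) / (6.0 * h[i]))
    return spline

# Hypothetical knots with a Lennard-Jones-like shape (illustrative only).
r_knots = [1.8, 2.2, 2.6, 3.0, 3.4, 3.8]
e_knots = [2.0, -0.9, -1.0, -0.5, -0.2, 0.0]
phi = natural_cubic_spline(r_knots, e_knots)

def pair_energy(positions):
    """Total energy of a configuration: sum of phi over unique atom pairs."""
    total = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = sum((positions[i][k] - positions[j][k]) ** 2 for k in range(3)) ** 0.5
            total += phi(r)
    return total

e_dimer = pair_energy([(0.0, 0.0, 0.0), (2.6, 0.0, 0.0)])  # equals phi(2.6)
```

With the knot values as free parameters, systematically improving the potential amounts to adding knots and re-optimizing, which is the low-complexity route the abstract describes.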
Award ID(s): 1922758
NSF-PAR ID: 10291010
Author(s) / Creator(s):
Date Published:
Journal Name: Computational materials science
Volume: 200
Issue: 2021
ISSN: 1879-0801
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

Interatomic potentials derived with machine learning algorithms such as deep neural networks (DNNs) achieve the accuracy of high-fidelity quantum mechanical (QM) methods in areas traditionally dominated by empirical force fields and enable massive simulations. Most DNN potentials were parametrized for neutral molecules or closed-shell ions due to architectural limitations. In this work, we propose an improved machine learning framework for simulating open-shell anions and cations. We introduce the AIMNet-NSE (Neural Spin Equilibration) architecture, which can predict molecular energies for an arbitrary combination of molecular charge and spin multiplicity with errors of about 2–3 kcal/mol and spin charges with errors of ~0.01e for small and medium-sized organic molecules, relative to the reference QM simulations. The AIMNet-NSE model makes it possible to fully bypass QM calculations and derive the ionization potential, electron affinity, and conceptual Density Functional Theory quantities such as electronegativity, hardness, and condensed Fukui functions. We show that these descriptors, along with learned atomic representations, can be used to model chemical reactivity, as illustrated by regioselectivity in electrophilic aromatic substitution reactions.
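The conceptual-DFT quantities mentioned above follow from finite differences of predicted energies and atomic charges at N−1, N, and N+1 electrons. A minimal sketch, with illustrative numbers rather than AIMNet-NSE outputs:

```python
# Minimal sketch (assumption: toy energies and charges, not model predictions):
# conceptual-DFT descriptors from finite differences in electron number.

def conceptual_dft(E_cation, E_neutral, E_anion):
    ip = E_cation - E_neutral     # ionization potential
    ea = E_neutral - E_anion      # electron affinity
    chi = 0.5 * (ip + ea)         # Mulliken electronegativity
    eta = 0.5 * (ip - ea)         # chemical hardness (Parr-Pearson convention)
    return ip, ea, chi, eta

def condensed_fukui(q_cation, q_neutral, q_anion):
    # Per-atom condensed Fukui functions from atomic charges q at
    # N-1, N, and N+1 electrons (more positive charge = fewer electrons).
    f_minus = [qc - qn for qc, qn in zip(q_cation, q_neutral)]  # electrophilic attack
    f_plus = [qn - qa for qn, qa in zip(q_neutral, q_anion)]    # nucleophilic attack
    return f_minus, f_plus

# Illustrative energies (eV) and two-atom charges (e).
ip, ea, chi, eta = conceptual_dft(-90.0, -100.0, -102.0)
f_minus, f_plus = condensed_fukui([0.5, 0.5], [0.0, 0.0], [-0.6, -0.4])
```

Because a model like the one described predicts energies and charges for any charge/multiplicity combination, all three calculations per descriptor come from a single trained network rather than three separate QM runs.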

     
  2. Catalyzed by enormous success in the industrial sector, many research programs have been exploring data-driven, machine learning approaches. Performance can be poor when the model is extrapolated to new regions of chemical space, e.g., new bonding types or new many-body interactions. Another important limitation is the spatial-locality assumption in the model architecture, which cannot be overcome with larger or more diverse datasets. These challenges are primarily associated with the lack of electronic-structure information in surrogate models such as interatomic potentials. Given the fast development of machine learning and computational chemistry methods, we expect some limitations of surrogate models to be addressed in the near future; nevertheless, the spatial-locality assumption will likely remain a limiting factor for their transferability. Here, we suggest focusing on an equally important effort: the design of physics-informed models that leverage domain knowledge and employ machine learning only as a corrective tool. In the context of materials science, we focus on semi-empirical quantum mechanics, using machine learning to predict corrections to the reduced-order Hamiltonian model parameters. The resulting models are broadly applicable, retain the speed of semi-empirical chemistry, and frequently achieve accuracy on par with much more expensive ab initio calculations. These early results indicate that future work, in which machine learning and quantum chemistry methods are developed jointly, may provide the best of all worlds for chemistry applications that demand both high accuracy and high numerical efficiency.
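The corrective role described here can be sketched as a toy delta-learning loop: a cheap physics-based model supplies the trend, and a regressor is fit only to the residual against reference energies. The one-parameter Morse-like "cheap model" and the linear regressor below are hypothetical stand-ins, not the actual semi-empirical framework:

```python
import math

# Minimal sketch (assumption: toy one-dimensional model, not the real framework):
# delta-learning on top of a fast physics-based model -- the regressor is
# trained on the residual between reference energies and the cheap model, so
# the physics carries the trend and ML supplies only a small correction.

def cheap_model(r):
    """Stand-in for a semi-empirical energy: a Morse-like form, fixed parameters."""
    return (1.0 - math.exp(-1.5 * (r - 2.0))) ** 2 - 1.0

def fit_linear_correction(rs, e_ref):
    """Least-squares line through (descriptor, residual): residual ~ a*r + b."""
    res = [e - cheap_model(r) for r, e in zip(rs, e_ref)]
    n = len(rs)
    mr = sum(rs) / n
    me = sum(res) / n
    a = (sum((r - mr) * (e - me) for r, e in zip(rs, res))
         / sum((r - mr) ** 2 for r in rs))
    b = me - a * mr
    return lambda r: cheap_model(r) + a * r + b

# "Reference" data: the cheap model plus a smooth systematic error.
rs = [1.6, 1.8, 2.0, 2.2, 2.4, 2.6]
e_ref = [cheap_model(r) + 0.05 * r - 0.02 for r in rs]
corrected = fit_linear_correction(rs, e_ref)
```

The design point is that the ML piece only has to learn a small, smooth correction, which is why such hybrids can retain the speed of the underlying semi-empirical method while improving its accuracy.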

     
  3. Abstract

    Developing an accurate interatomic potential model is a prerequisite for obtaining reliable results from classical molecular dynamics (CMD) simulations; however, most potentials are biased because specific simulation purposes or conditions are considered during parameterization. To develop an unbiased potential, a finite-temperature dynamics machine learning (FTD-ML) approach is proposed, and its processes and feasibility are demonstrated using the Buckingham potential model and aluminum (Al) as an example. Compared with conventional machine learning approaches, FTD-ML exhibits three distinguishing features: 1) FTD-ML intrinsically incorporates a more extensive configurational and conditional space, enhancing the transferability of the developed potentials; 2) FTD-ML employs various properties calculated directly from CMD for ML model training and validates predictions against experimental data instead of first-principles data; 3) FTD-ML is much more computationally cost-effective than first-principles simulations, especially when the system size grows beyond 10³ atoms, as employed in this research to ensure reliable training data. The Al Buckingham potential developed by the FTD-ML approach exhibits good performance for general simulation purposes. Thus, the FTD-ML approach is expected to contribute to the fast development of interatomic potential models suitable for various simulation purposes and conditions, without limitation on model type, while maintaining experimental-level accuracy.
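For reference, the Buckingham pair potential used as the functional form in this workflow is V(r) = A·exp(−r/ρ) − C/r⁶. A small sketch with illustrative parameters (not the fitted Al values):

```python
import math

# Minimal sketch (assumption: illustrative parameters, not the fitted Al values):
# the Buckingham pair potential and the corresponding pair force magnitude.

def buckingham(r, A, rho, C):
    """Pair energy V(r) = A * exp(-r / rho) - C / r**6."""
    return A * math.exp(-r / rho) - C / r ** 6

def buckingham_force(r, A, rho, C):
    """Magnitude of -dV/dr along the bond (repulsive when positive)."""
    return (A / rho) * math.exp(-r / rho) - 6.0 * C / r ** 7

# Hypothetical parameters for demonstration only.
A, rho, C = 1000.0, 0.3, 30.0
f_analytic = buckingham_force(2.5, A, rho, C)
```

In the FTD-ML workflow, A, ρ, and C are the quantities the ML model tunes so that CMD-computed properties match experiment.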

     
  4. Abstract

    This work presents Neural Equivariant Interatomic Potentials (NequIP), an E(3)-equivariant neural-network approach for learning interatomic potentials from ab initio calculations for molecular dynamics simulations. While most contemporary symmetry-aware models use invariant convolutions that act only on scalars, NequIP employs E(3)-equivariant convolutions for interactions of geometric tensors, yielding a more information-rich and faithful representation of atomic environments. The method achieves state-of-the-art accuracy on a challenging and diverse set of molecules and materials while exhibiting remarkable data efficiency. NequIP outperforms existing models with up to three orders of magnitude less training data, challenging the widely held belief that deep neural networks require massive training sets. The high data efficiency of the method allows accurate potentials to be constructed using a high-order quantum-chemical level of theory as the reference and enables high-fidelity molecular dynamics simulations over long time scales.
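The equivariance property can be checked numerically on a hand-built feature (not the NequIP architecture itself): rotating the atomic positions should leave scalar outputs unchanged and rotate vector outputs by the same rotation.

```python
import math

# Minimal sketch (assumption: a hand-built feature, not the NequIP network):
# the E(3)-equivariance property such architectures enforce by construction.

def radial(r):
    return math.exp(-r)

def scalar_and_vector(positions):
    """Scalar: invariant sum over pair distances around atom 0.
    Vector: equivariant weighted sum of relative position vectors."""
    s = 0.0
    v = [0.0, 0.0, 0.0]
    x0 = positions[0]
    for x in positions[1:]:
        d = [x[k] - x0[k] for k in range(3)]
        r = math.sqrt(sum(c * c for c in d))
        s += radial(r)                  # depends only on distances -> invariant
        for k in range(3):
            v[k] += radial(r) * d[k]    # rotates with the input -> equivariant
    return s, v

def rotate_z(p, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]]

pos = [[0.0, 0.0, 0.0], [1.0, 0.2, -0.3], [-0.5, 0.8, 0.1]]
s1, v1 = scalar_and_vector(pos)
s2, v2 = scalar_and_vector([rotate_z(p, 0.7) for p in pos])
# s1 == s2 up to rounding; v2 equals rotate_z(v1, 0.7).
```

Invariant models keep only quantities like s; the point of the abstract is that carrying equivariant quantities like v through the network preserves strictly more geometric information per training example.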

     
  5. Machine learning potentials (MLPs) are poised to combine the accuracy of ab initio predictions with the computational efficiency of classical molecular dynamics (MD) simulation. While great progress has been made over the last two decades in developing MLPs, there is still much to be done to evaluate their model transferability and facilitate their development. In this work, we construct two deep potential (DP) models for liquid water near graphene surfaces, Model S and Model F, with the latter having more training data. A concurrent learning algorithm (DP-GEN) is adopted to explore the configurational space beyond the scope of conventional ab initio MD simulation. By examining the performance of Model S, we find that an accurate prediction of atomic force does not imply an accurate prediction of system energy. The deviation from the relative atomic force alone is insufficient to assess the accuracy of the DP models. Based on the performance of Model F, we propose that the relative magnitude of the model deviation and the corresponding root-mean-square error of the original test dataset, including energy and atomic force, can serve as an indicator for evaluating the accuracy of the model prediction for a given structure, which is particularly applicable for large systems where density functional theory calculations are infeasible. In addition to the prediction accuracy of the model described above, we also briefly discuss simulation stability and its relationship to the former. Both are important aspects in assessing the transferability of the MLP model.
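The ensemble-deviation indicator described above can be sketched as follows; the force values and the test-set RMSE below are toy numbers, not DP-GEN output. For each atom the standard deviation of the predicted force across ensemble members is computed, reduced with a max over atoms, and compared against the force RMSE of the original test set:

```python
# Minimal sketch (assumption: toy numbers, not DP-GEN output): the ensemble
# "model deviation" used to flag structures whose predictions may be untrusted.

def model_deviation(forces_per_model):
    """forces_per_model[m][i] is the 3-vector force on atom i from model m.
    Returns the max over atoms of the per-atom force standard deviation."""
    n_models = len(forces_per_model)
    n_atoms = len(forces_per_model[0])
    devs = []
    for i in range(n_atoms):
        var = 0.0
        for k in range(3):
            vals = [forces_per_model[m][i][k] for m in range(n_models)]
            mean = sum(vals) / n_models
            var += sum((v - mean) ** 2 for v in vals) / n_models
        devs.append(var ** 0.5)
    return max(devs)

# Three ensemble members, two atoms (illustrative values, eV/Angstrom).
forces = [
    [[0.10, 0.0, 0.0], [-0.10, 0.0, 0.0]],  # model 1
    [[0.12, 0.0, 0.0], [-0.12, 0.0, 0.0]],  # model 2
    [[0.08, 0.0, 0.0], [-0.08, 0.0, 0.0]],  # model 3
]
rmse_test = 0.05                  # hypothetical force RMSE on the test set
flagged = model_deviation(forces) > rmse_test
```

The comparison against the test-set RMSE is what makes the indicator usable for large systems: it needs only ensemble predictions, never a new DFT calculation on the structure being screened.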

     