Title: Machine Learning for Thermal Transport Prediction in Nanoporous Materials: Progress, Challenges, and Opportunities
Predicting the thermal properties of nanoporous materials is a major challenge that limits their use in efficient thermal insulation and energy storage. This narrative review surveys machine learning models applied to nanoporous materials, including covalent organic frameworks, metal–organic frameworks, aerogels, and zeolites. It examines model advancements such as convolutional neural networks, graph neural networks, and physics-informed neural networks, with a focus on predictive accuracy and computational efficiency. It also addresses the limitations of these data-driven models, including limited data availability, difficulty in maintaining physical consistency, and poor generalization across material families. Emerging approaches such as multimodal and transfer learning are explored for their potential to reduce computational costs, and interpretable machine learning methods are highlighted for the insight they provide into underlying physical mechanisms. The review offers comprehensive, practical guidelines for researchers applying machine learning to the study and design of nanoporous materials.
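Graph neural networks, one of the model families covered in the review, operate by passing messages along a material's bond or pore network. A minimal, purely illustrative sketch of one message-passing step follows; the toy graph, feature values, and mixing weights are invented for illustration and are not taken from the review:

```python
# Toy pore/bond graph: adjacency list and scalar node features (all
# values below are hypothetical, chosen only to make the update visible).
adjacency = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
features = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}

def message_passing_step(adj, feats, w_self=0.5, w_neigh=0.5):
    """Update each node by mixing its own feature with the mean of its
    neighbours' features (a simplified GCN-style update)."""
    new = {}
    for node, neighbours in adj.items():
        neigh_mean = sum(feats[n] for n in neighbours) / len(neighbours)
        new[node] = w_self * feats[node] + w_neigh * neigh_mean
    return new

updated = message_passing_step(adjacency, features)
# Node 0 mixes in the mean of nodes 1 and 2:
# 0.5*1.0 + 0.5*((2.0 + 3.0)/2) = 1.75
```

Stacking several such steps lets structural information propagate across the pore network, which is why these models suit framework materials whose properties depend on connectivity rather than composition alone.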
Award ID(s): 2442297
PAR ID: 10663168
Publisher / Repository: MDPI
Journal Name: Nanomaterials
Volume: 15
Issue: 21
ISSN: 2079-4991
Page Range / eLocation ID: 1660
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    In this review, we examine how machine learning (ML) can build on molecular simulation (MS) algorithms to tremendously advance our ability to predict the thermodynamic properties of a wide range of systems. The key thermodynamic properties that govern the evolution of a system and the outcome of a process include the entropy and the Helmholtz and Gibbs free energies. However, their determination through advanced molecular simulation algorithms has remained challenging, since such methods are extremely computationally intensive. Combining MS with ML overcomes these challenges and, in turn, accelerates discovery through the rapid prediction of free energies. After presenting a brief overview of combined MS–ML protocols, we review how these approaches allow for the accurate prediction of these thermodynamic functions and, more broadly, of free energy landscapes for molecular and biological systems. We then discuss extensions of this approach to systems relevant to energy and environmental applications, i.e., gas storage and separation in nanoporous materials such as metal–organic frameworks and covalent organic frameworks. In the last part of the review, we show how ML models can suggest new ways to explore free energy landscapes, identify novel pathways, and provide new insight into assembly processes.
  2. Chu, Wilson (Ed.)
    Two-dimensional materials (e.g., graphene and transition metal dichalcogenides) and their heterostructures have enormous potential in electrochemical energy storage systems such as batteries. A comprehensive, solid understanding of thermal transport in these materials and its mechanisms is essential for practical device design. Several advanced experimental techniques have been developed to measure the intrinsic thermal conductivity of materials, but experiments struggle to provide fine control and characterization of complex structures, especially for low-dimensional materials. Theoretical and simulation tools, such as first-principles calculations, Boltzmann transport equations, molecular dynamics simulations, lattice dynamics simulations, and the nonequilibrium Green's function method, provide reliable predictions of thermal conductivity and physical insight into the underlying thermal transport mechanisms, but these calculations demand substantial computational resources. The development of new materials synthesis technology and the fast-growing demand for rapid, accurate prediction of physical properties call for novel computational approaches, and machine learning offers a promising solution. This review details recent developments in atomistic/molecular studies and machine learning of thermal transport in two-dimensional materials, and it also addresses the latest significant experimental advances. Designing the best two-dimensional-material heterostructure is a multivariate optimization problem: a particular heterostructure may be well suited for thermal transport yet have lower mechanical strength or stability, and for bilayer and multilayer structures the interlayer distance may influence both the thermal transport properties and the interlayer strength. Therefore, the last part of this review addresses future research directions in two-dimensional-material heterostructure design for thermal transport in energy storage systems.
  3. Understanding thermal stress evolution in metal additive manufacturing (AM) is crucial for producing high-quality components. Recent advancements in machine learning (ML) have shown great potential for modeling complex multiphysics problems in metal AM. While physics-based simulations face high computational costs, conventional data-driven ML models require large, labeled training datasets to achieve accurate predictions, and generating such datasets through time-consuming experiments or high-fidelity simulations is highly expensive in metal AM. To address these challenges, this study introduces a physics-informed neural network (PINN) framework that incorporates governing physical laws into deep neural networks (NNs) to predict temperature and thermal stress evolution during the laser metal deposition (LMD) process. The study also discusses the enhanced accuracy and efficiency of the PINN model when supplemented with a small amount of simulation data. Furthermore, it highlights the transferability of the PINN: a pre-trained model can serve as an online soft sensor, enabling fast predictions for new sets of process parameters while significantly reducing computation time compared with physics-based numerical models and maintaining accuracy.
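    The physics-informed idea described here can be sketched in miniature: instead of fitting labelled data, the model is trained against the residual of a governing equation plus boundary conditions. The toy example below uses a quadratic trial function and the steady 1-D heat equation; the setup and names are illustrative assumptions, not the paper's LMD model, which uses deep networks and automatic differentiation:

```python
# Fit T(x) = a*x^2 + b*x + c to the steady 1-D heat equation T''(x) = 0
# with T(0) = 0 and T(1) = 1, using only a physics residual and boundary
# losses -- no labelled training data at all.

def loss(a, b, c):
    physics = (2.0 * a) ** 2         # residual of T'' = 0 (T'' = 2a here)
    bc0 = c ** 2                     # boundary condition T(0) = 0
    bc1 = (a + b + c - 1.0) ** 2     # boundary condition T(1) = 1
    return physics + bc0 + bc1

def grad(a, b, c):
    # Analytic gradients of the loss above (a real PINN uses autodiff).
    r = a + b + c - 1.0
    return 8.0 * a + 2.0 * r, 2.0 * r, 2.0 * c + 2.0 * r

a = b = c = 0.0
lr = 0.1
for _ in range(2000):                # plain gradient descent
    da, db, dc = grad(a, b, c)
    a, b, c = a - lr * da, b - lr * db, c - lr * dc

# Converges toward the exact solution T(x) = x, i.e. a ~ 0, b ~ 1, c ~ 0.
```

The same structure carries over to the transient, multiphysics LMD setting: the physics residual term is what lets a small amount of simulation data go a long way.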
  4. Advancements in computing power have recently made it possible to use machine learning and deep learning to push scientific computing forward in disciplines such as fluid mechanics, solid mechanics, and materials science. The incorporation of neural networks is particularly crucial in this hybridization process. Due to their intrinsic architecture, conventional neural networks cannot be successfully trained and scoped when data are sparse, as is the case in many scientific and engineering domains. Nonetheless, neural networks provide a solid foundation for respecting physics-driven or knowledge-based constraints during training. Generally speaking, there are three distinct neural network frameworks for enforcing the underlying physics: (i) physics-guided neural networks (PgNNs), (ii) physics-informed neural networks (PiNNs), and (iii) physics-encoded neural networks (PeNNs). These methods provide distinct advantages for accelerating the numerical modeling of complex multiscale, multiphysics phenomena. In addition, recent developments in neural operators (NOs) add another dimension to these new simulation paradigms, especially when real-time prediction of complex multiphysics systems is required. All these models also come with their own drawbacks and limitations that call for further fundamental research. This study presents a review of the four neural network frameworks (i.e., PgNNs, PiNNs, PeNNs, and NOs) used in scientific computing research. The state-of-the-art architectures and their applications are reviewed, limitations are discussed, and future research opportunities are presented in terms of improving algorithms, considering causality, expanding applications, and coupling scientific and deep learning solvers.
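    One way to see the difference between penalty-based (PiNN-style) and architecture-based (PeNN-style) physics enforcement is a hard-constrained ansatz that satisfies the boundary conditions by construction rather than by adding a loss term. The sketch below is a toy illustration under assumed boundary conditions T(0) = 0 and T(1) = 1; the "network" is a stand-in polynomial, not any architecture from the reviewed papers:

```python
def surrogate(x, params):
    """Unconstrained trial function (stands in for a neural network)."""
    a, b = params
    return a * x + b * x * x

def T_hard(x, params):
    # Hard-constrained ansatz: T(0) = 0 and T(1) = 1 hold exactly for
    # ANY parameter values, because the constraint is built into the
    # form T(x) = x + x*(1 - x)*N(x). A PiNN would instead penalize
    # boundary violations in the loss, satisfying them only approximately.
    return x + x * (1.0 - x) * surrogate(x, params)

params = (0.7, -1.3)  # arbitrary, untrained parameters
print(T_hard(0.0, params))  # boundary satisfied by construction: 0.0
print(T_hard(1.0, params))  # boundary satisfied by construction: 1.0
```

Encoding constraints into the architecture removes a penalty-weight hyperparameter and guarantees exact satisfaction, at the cost of having to design a constraint-respecting form for each problem.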
  5. Operational ocean forecasting systems (OOFSs) are complex engines that must run ocean models with high performance to deliver timely products and datasets. Significant computational resources are needed to run high-fidelity models, and, historically, the technological evolution of microprocessors has constrained data-parallel scientific computation. Today, graphics processing units (GPUs) offer a rapidly growing and valuable source of computing power rivaling traditional CPU-based machines: the exploitation of thousands of threads can significantly accelerate the execution of many models, from traditional HPC workloads of finite-difference, finite-volume, and finite-element modelling through to the training of the deep neural networks used in machine learning (ML) and artificial intelligence. Despite these advantages, GPU usage in ocean forecasting is still limited by the legacy of CPU-based model implementations and the intrinsic complexity of porting core models to GPU architectures. This review explores the potential use of GPUs in ocean forecasting and how the computational characteristics of ocean models influence the suitability of GPU architectures for executing the overall value chain. It discusses current approaches to code (and performance) portability from CPU to GPU, including tools that perform code transformation to ease the adaptation of Fortran code for GPU execution (like PSyclone), the direct use of OpenACC directives (like ICON-O), frameworks that manage parallel execution across different architectures, and new programming languages and paradigms.