

This content will become publicly available on September 8, 2026

Title: Fast Chip Transient Temperature Simulation via Machine Learning
With growing transistor densities, analyzing temperature in 2D and 3D integrated circuits (ICs) is becoming more complicated and critical. Finite-element solvers give accurate results, but a single transient run can take hours or even days. Compact thermal models (CTMs) shorten simulation time using a numerical solver based on the duality between thermal and electrical properties. However, CTM solvers often still take hours even for small-scale chips because of their iterative numerical solvers. Recent work using machine learning (ML) models provides a fast and reliable framework for predicting temperature, but current ML models demand large input samples and hours of GPU training to reach acceptable accuracy. To overcome these challenges, we design an ML framework that couples with CTMs to accelerate steady-state and transient thermal analysis without large data inputs. Our framework combines principal-component analysis (PCA) with closed-form linear regression to predict the on-chip temperature directly. The linear regression weights are solved analytically, so training for a grid size of 512 × 512 finishes in under a minute with only 15–20 CTM samples. Experimental results show that our framework achieves more than 33× and 49.6× speedups for steady-state and transient simulation, respectively, of a chip with a 245.95 mm² footprint, while keeping the mean squared error below 0.1 °C².
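The PCA-plus-closed-form-regression pipeline described in the abstract can be sketched as follows. This is an illustrative reading, not the authors' code: the grid size, component count, ridge term, and the synthetic linear thermal response `A` are all assumptions for the example; the key point is that the weights come from a linear solve rather than gradient-based training.

```python
import numpy as np

# Hedged sketch: project a handful of CTM-generated power/temperature
# samples onto principal components, then solve for regression weights
# analytically (normal equations), so "training" is a single linear solve.
rng = np.random.default_rng(0)
n_samples, grid = 20, 32 * 32           # small grid for illustration
P = rng.random((n_samples, grid))       # power maps (inputs)
A = rng.random((grid, grid)) * 1e-3     # stand-in linear thermal response
T = P @ A.T + 300.0                     # temperature maps ("CTM samples")

# PCA on inputs: keep the k leading principal directions
k = 10
P_mean, T_mean = P.mean(axis=0), T.mean(axis=0)
_, _, Vt = np.linalg.svd(P - P_mean, full_matrices=False)
Z = (P - P_mean) @ Vt[:k].T             # reduced features, (n_samples, k)

# Closed-form ridge regression: W = (Z^T Z + lam I)^{-1} Z^T (T - T_mean)
lam = 1e-6
W = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ (T - T_mean))

# Predict the temperature map for a new power map
p_new = rng.random(grid)
t_pred = (p_new - P_mean) @ Vt[:k].T @ W + T_mean
```

Because the weights are obtained from a single `k × k` linear solve, the cost is dominated by one SVD over a few tens of samples, which is consistent with sub-minute training on a 512 × 512 grid.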
Award ID(s):
2131127
PAR ID:
10659113
Author(s) / Creator(s):
 ;  ;  
Publisher / Repository:
IEEE
Date Published:
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. With the scaling up of transistor densities, the thermal management of integrated circuits (ICs) in 3D designs is becoming challenging. Conventional simulation methods, such as finite element methods, are accurate but computationally expensive. Compact thermal models (CTMs) provide an effective alternative and produce accurate thermal simulations using numerical solvers. Recent work has also designed machine learning (ML) models for predicting thermal maps. However, most of these ML models are limited by the need for a large training dataset and a long training time for large chip designs. To overcome these challenges, we present a novel ML framework that integrates with CTMs to accelerate thermal simulations without the need for large datasets. We introduce a methodology that effectively combines the accuracy of CTMs with the efficiency of ML using a physically informed linear regression model based on the thermal conduction equation. We further introduce a window-based model reduction technique for scalability across a range of grid sizes and system architectures by reducing computational overhead without sacrificing accuracy. Unlike most existing ML methods for temperature prediction, our model adapts to changes in floorplans and architectures with minimal retraining. Experimental results show that our method achieves up to 70× speedup over state-of-the-art thermal simulators and enables real-time, high-resolution thermal simulations on different IC designs from 2D to 3D.
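The window-based reduction mentioned above can be illustrated with a small sketch. This is our reading, not the paper's implementation: each cell's temperature is regressed on the power inside a local window, reflecting the locality of heat conduction, so the model has `(2r+1)²` parameters instead of one weight per grid cell. The grid size, window radius, and random weight vector are assumptions for the example.

```python
import numpy as np

# Hypothetical window-based feature reduction: one feature row per cell,
# containing the power values in a (2r+1) x (2r+1) neighborhood.
rng = np.random.default_rng(1)
n, r = 16, 1                                # 16x16 grid, 3x3 windows
power = rng.random((n, n))
pad = np.pad(power, r, mode="edge")         # replicate border values

feats = np.array([
    pad[i:i + 2 * r + 1, j:j + 2 * r + 1].ravel()
    for i in range(n) for j in range(n)
])                                          # shape (n*n, (2r+1)**2)

# A single weight vector shared by all cells keeps the model tiny and
# makes it reusable across grid sizes (the window size is fixed).
w = rng.random((2 * r + 1) ** 2)
temp = (feats @ w).reshape(n, n)
```

Because the feature dimension depends only on the window radius, the same weights can in principle be applied to a different grid size or floorplan, which matches the abstract's claim of minimal retraining.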
  2. As transistor densities increase, managing thermal challenges in 3D IC designs becomes more complex. Traditional methods like finite element methods and compact thermal models (CTMs) are computationally expensive, while existing machine learning (ML) models require large datasets and long training times. To address these challenges, we introduce a novel ML framework that integrates with CTMs to accelerate steady-state thermal simulations without needing large datasets. Our approach achieves up to 70× speedup over state-of-the-art simulators, enabling real-time, high-resolution thermal simulations for 2D and 3D IC designs.
  3. Throughout the past decades, many different versions of the widely used first-order Cell-Transmission Model (CTM) have been proposed for optimal traffic control. Highway traffic management techniques such as Ramp Metering (RM) are typically designed based on an optimization problem with nonlinear constraints originating in the flow-density relation of the Fundamental Diagram (FD). Most of the extended CTM versions are based on a trapezoidal approximation of this relation in an attempt to simplify the optimization problem. However, the relation is naturally nonlinear, and crude approximations can greatly impact the efficiency of the optimization solution. In this study, we propose a class of extended CTMs based on piecewise affine approximations of the flow-density relation such that (a) the integrated squared error with respect to the true relation is greatly reduced in comparison to the trapezoidal approximation, and (b) the optimization problem remains tractable for real-time application of ramp metering optimal controllers. A two-step identification method is used to approximate the FD with piecewise affine functions, resulting in what we refer to as PWA-CTMs. The proposed models are evaluated by the performance of the optimal ramp metering controllers, e.g., using the widely used PI-ALINEA approach, in complex highway traffic networks. Simulation results show that the optimization problems based on the PWA-CTMs require less computation time compared to other CTM extensions while achieving higher accuracy of the flow and density evolution. Hence, the proposed PWA-CTMs constitute one of the best approximation approaches for first-order traffic flow models that can be used in more general and challenging modeling and control applications.
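The trade-off between affine pieces and integrated squared error can be seen in a small numerical sketch. The Greenshields-type fundamental diagram and its parameter values below are assumptions for illustration (the paper uses an identified FD, not this one); the point is simply that finer piecewise affine interpolants shrink the integrated squared error relative to coarse ones.

```python
import numpy as np

# Illustrative flow-density relation q(rho) (Greenshields form, made-up
# parameters): free-flow speed v_f and jam density rho_max.
v_f, rho_max = 100.0, 120.0                     # km/h, veh/km (assumed)
rho = np.linspace(0.0, rho_max, 1001)
q = v_f * rho * (1.0 - rho / rho_max)           # true (nonlinear) FD

def pwa_fit(n_pieces):
    """Piecewise affine interpolant of the FD on n_pieces equal segments."""
    bp = np.linspace(0.0, rho_max, n_pieces + 1)
    return np.interp(rho, bp, v_f * bp * (1.0 - bp / rho_max))

# Integrated squared error of each PWA approximation (rectangle rule)
d_rho = rho[1] - rho[0]
ise = {n: float(((q - pwa_fit(n)) ** 2).sum() * d_rho) for n in (2, 4, 8)}
```

Doubling the number of affine pieces strictly reduces the error here, which mirrors the paper's motivation for going beyond the 2-to-3-piece trapezoidal shape while keeping the constraints affine.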
  4. Abstract Microvascular materials containing internal microchannels are able to achieve multi-functionality by flowing different fluids through vasculature. Active cooling is one application to protect structural components and devices from thermal overload, which is critical to modern technology including electric vehicle battery packaging and solar panels on space probes. Creating thermally efficient vascular network designs requires state-of-the-art computational tools. Prior optimization schemes have only considered steady-state cooling, rendering a knowledge gap for time-varying heat transfer behavior. In this study, a transient topology optimization framework is presented to maximize the active-cooling performance and mitigate computational cost. Here, we optimize the channel layout so that coolant flowing within the vascular network can remove heat quickly and also provide a lower steady-state temperature. An objective function for this new transient formulation is proposed that minimizes the area beneath the average temperature versus time curve to simultaneously reduce the temperature and cooling time. The thermal response of the system is obtained through a transient Geometric Reduced Order Finite Element Model (GRO-FEM). The model is verified via a conjugate heat transfer simulation in commercial software and validated by an active-cooling experiment conducted on a 3D-printed microvascular metal. A transient sensitivity analysis is derived to provide the optimizer with analytical gradients of the objective function for further computational efficiency. Example problems are solved demonstrating the method's ability to enhance cooling performance along with a comparison of transient versus steady-state optimization results. In this comparison, both the steady-state and transient frameworks delivered different designs with similar performance characteristics for the problems considered in this study. This latest computational framework provides a new thermal regulation toolbox for microvascular material designers.
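The transient objective described above, the area beneath the average-temperature-versus-time curve, is straightforward to evaluate numerically. The sketch below is our reading of that objective, not the authors' code; the time horizon and the two exponential cooling curves are made up to show why the objective rewards both fast cooling and a low steady-state temperature.

```python
import numpy as np

def transient_objective(t, T_avg):
    """Trapezoidal-rule area under the average temperature vs. time curve."""
    return float(np.sum(0.5 * (T_avg[1:] + T_avg[:-1]) * np.diff(t)))

# Two hypothetical designs with the same start and steady-state temperature
t = np.linspace(0.0, 10.0, 101)                 # seconds (illustrative)
fast = 300.0 + 50.0 * np.exp(-t / 1.0)          # cools quickly
slow = 300.0 + 50.0 * np.exp(-t / 5.0)          # cools slowly
```

The faster-cooling design accumulates less area even though both curves approach the same steady state, so minimizing this single scalar simultaneously penalizes slow transients and high temperatures.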
  5. Computational fluid dynamics (CFD) simulations are broadly used in many engineering and physics fields. CFD requires the solution of the Navier–Stokes (N-S) equations under complex flow and boundary conditions. However, applications of CFD simulations are computationally limited by the availability, speed, and parallelism of high-performance computing. To address this, machine learning techniques have been employed to create data-driven approximations for CFD to accelerate computational efficiency. Unfortunately, these methods predominantly depend on large labeled CFD datasets, which are costly to procure at the scale required for robust model development. In response, we introduce a weakly supervised approach that, through a multichannel input capturing boundary and geometric conditions, solves steady-state N-S equations. Our method achieves state-of-the-art results without relying on labeled simulation data, instead using a custom data-driven and physics-informed loss function and small-scale solutions to prime the model for solving the N-S equations. By training stacked models, we enhance resolution and predictability, yielding high-quality numerical solutions to N-S equations without hefty computational demands. Remarkably, our model, being highly adaptable, produces solutions on a 512 × 512 domain in a swift 7 ms, outpacing traditional CFD solvers by a factor of 1,000. This paves the way for real-time predictions on consumer hardware and Internet of Things devices, thereby boosting the scope, speed, and cost-efficiency of solving boundary-value fluid problems. 
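One ingredient of a physics-informed loss of the kind described above can be sketched without any labeled CFD data: a term penalizing violation of incompressibility (div u = 0) on a grid via finite differences. This is a generic illustration of a physics-informed loss term, not the paper's loss function; the grid size and test velocity field are assumptions.

```python
import numpy as np

def divergence_loss(u, v, h=1.0):
    """Mean squared divergence of a 2D velocity field (central differences,
    interior points only). Zero iff the discrete field is divergence-free."""
    du_dx = (u[1:-1, 2:] - u[1:-1, :-2]) / (2.0 * h)
    dv_dy = (v[2:, 1:-1] - v[:-2, 1:-1]) / (2.0 * h)
    return float(np.mean((du_dx + dv_dy) ** 2))

# A rigid-rotation-like field u = (-(y - c), x - c) is divergence-free,
# so this loss term vanishes on it; a random field is penalized.
n = 32
y, x = np.mgrid[0:n, 0:n].astype(float)
u, v = -(y - n / 2.0), (x - n / 2.0)

rng = np.random.default_rng(2)
u_rand, v_rand = rng.random((n, n)), rng.random((n, n))
```

Summing such residual terms over the discretized Navier–Stokes equations is what lets a model be trained against the physics itself rather than against expensive solver outputs.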