AWP-ODC is a 4th-order finite difference code used for linear wave propagation, Iwan-type nonlinear dynamic rupture and wave propagation, and Strain Green Tensor simulation. We have ported and verified the linear and topography versions of AWP-ODC, including discontinuous-mesh support, to HIP so that they can also run on AMD GPUs. The topography code achieved 99.6% parallel efficiency on 4,096 nodes of Frontier, the exascale system at the Oak Ridge Leadership Computing Facility. We have also implemented CUDA-aware communication and on-the-fly GDR compression in the linear version of the ported HIP code. These enhancements significantly improve data-transfer efficiency between GPUs, reducing communication overhead and boosting overall performance. We have also extended the CUDA-aware features to the topography version and are actively working on incorporating GDR compression into that version as well. We see a 154% performance benefit over Intel MPI (IMPI) with MVAPICH2-GDR's CUDA-aware support and on-the-fly compression for linear AWP-ODC on Lonestar6 A100 nodes. Furthermore, we have successfully integrated a checkpointing feature into the nonlinear Iwan version of AWP-ODC, in preparation for future extreme-scale simulations during Texascale Days on Frontera at TACC.
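As a hedged illustration of what the CUDA-aware path buys, the minimal sketch below shows a halo exchange in which GPU device buffers are handed directly to MPI. All names, sizes, and neighbor ranks are hypothetical, not taken from AWP-ODC, and MVAPICH2-GDR's on-the-fly compression is enabled through its runtime configuration rather than by any change to code like this.

```c
#include <mpi.h>

/* Hypothetical halo exchange between grid neighbors. d_send and d_recv are
 * GPU device pointers: a CUDA-aware MPI such as MVAPICH2-GDR accepts them
 * directly and moves the data GPU-to-GPU (GPUDirect RDMA, optionally with
 * on-the-fly compression) instead of staging through host memory. */
static void exchange_halo(float *d_send, float *d_recv, int count,
                          int left, int right, MPI_Comm comm)
{
    MPI_Request reqs[2];
    MPI_Irecv(d_recv, count, MPI_FLOAT, left,  0, comm, &reqs[0]);
    MPI_Isend(d_send, count, MPI_FLOAT, right, 0, comm, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```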
Progress of porting AWP-ODC to next generation HPC architectures and a 4-Hz Iwan-type nonlinear dynamic simulation of the ShakeOut scenario on TACC Frontera
AWP-ODC is a 4th-order finite difference code used by the SCEC community for linear wave propagation, Iwan-type nonlinear dynamic rupture and wave propagation, and Strain Green Tensor simulation. We have ported and verified the CUDA version of AWP-ODC-SGT, a reciprocal version used in the SCEC CyberShake project, to HIP so that it can also run on AMD GPUs. This code achieved a sustained 32.6 Petaflop/s and 95.6% parallel efficiency at full scale on Frontier, the exascale system at the Oak Ridge Leadership Computing Facility. The readiness of this community software on AMD Instinct GPUs and EPYC CPUs allows SCEC to take advantage of exascale systems to produce more realistic ground motions and more accurate seismic hazard products.
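For flavor, here is a minimal HIP sketch of the kind of 4th-order finite-difference kernel such a port contains. It is a collocated-grid toy with the standard 4th-order central-difference coefficients, not the staggered-grid scheme AWP-ODC actually implements, and all names are illustrative.

```cpp
#include <hip/hip_runtime.h>

/* 4th-order central difference d/dx along a 1D strip:
 * f'(x) ~ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 dx). */
__global__ void dx4(const float *f, float *df, float inv_dx, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= 2 && i < n - 2)
        df[i] = (-f[i + 2] + 8.0f * f[i + 1]
                 - 8.0f * f[i - 1] + f[i - 2]) * inv_dx * (1.0f / 12.0f);
}

/* Launch, e.g.:
 * hipLaunchKernelGGL(dx4, dim3((n + 255) / 256), dim3(256), 0, 0,
 *                    d_f, d_df, 1.0f / dx, n); */
```

Much of such a port is mechanical (hipify maps cudaMalloc to hipMalloc, CUDA kernel launches to their HIP equivalents); verifying the HIP build against the CUDA version is the substantive step.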
We have also deployed AWP-ODC on Azure to leverage the array of tools and services that Azure provides for tightly coupled HPC simulation on a commercial cloud. We collaborated with the Internet2/Azure Accelerator support team as part of the Microsoft Internet2/Azure Accelerator for Research Fall 2022 Program, with Azure credits awarded through CloudBank, an NSF-funded initiative. We demonstrate AWP-ODC performance with ground-motion simulation benchmarks on various GPU-based cloud instances and compare the cloud solution with on-premises bare-metal systems.
AWP-ODC currently achieves excellent speedup and efficiency on both CPU and GPU architectures. The Iwan-type dynamic rupture and wave propagation solver, however, faces significant challenges because the computational workload grows with the number of yield surfaces chosen. Compared to the linear solution, the Iwan model adds 10x-30x more computation time and 5x-13x more memory consumption, which requires substantial code changes to obtain excellent performance. Supported by NSF's Characteristic Science Applications (CSA) program for the Leadership-Class Computing Facility (LCCF) at the Texas Advanced Computing Center (TACC), we are porting and improving the performance of this nonlinear AWP-ODC software in preparation for Horizon, the next-generation NSF LCCF system to be installed at TACC. During Texascale Days on TACC's current Frontera system, we carried out an Iwan-type nonlinear dynamic rupture and wave propagation simulation of a Mw 7.8 scenario earthquake on the southern San Andreas fault. This simulation modeled 83 seconds of rupture with a grid spacing of 25 m to resolve frequencies up to 4 Hz with a minimum shear-wave velocity of 500 m/s.
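Those resolution targets are mutually consistent: with a minimum shear-wave velocity of 500 m/s and a maximum frequency of 4 Hz, the minimum wavelength is 125 m, so a 25 m grid samples it at

\[
\frac{\lambda_{\min}}{\Delta x} \;=\; \frac{v_{s,\min}}{f_{\max}\,\Delta x} \;=\; \frac{500\ \text{m/s}}{4\ \text{Hz} \times 25\ \text{m}} \;=\; 5
\]

grid points per minimum wavelength, a common sampling rule of thumb for 4th-order staggered-grid finite-difference schemes.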
- Award ID(s):
- 2311833
- PAR ID:
- 10538005
- Publisher / Repository:
- SCEC Publications
- Date Published:
- Format(s):
- Medium: X
- Location:
- Palm Springs
- Sponsoring Org:
- National Science Foundation
More Like this
-
The Gordon Bell-winning AWP-ODC application has a long history of performance boosts from MVAPICH on both CPU- and GPU-based architectures. This talk will highlight the compression support recently implemented by the MVAPICH team and its benefits for large-scale earthquake simulation on leadership-class computing systems. The presentation will conclude with a discussion of the opportunities and technical challenges associated with developing earthquake simulation software for exascale computing.
-
The Gordon Bell-winning AWP-ODC application continues to push the boundaries of earthquake simulation by leveraging the enhanced performance of MVAPICH on both CPU- and GPU-based architectures. This presentation highlights recent improvements to the code and its application to broadband deterministic 3D wave propagation simulations of earthquake ground motions, incorporating high-resolution surface topography and detailed underground structures. The results of these simulations provide critical insights into the potential impacts of major earthquakes, contributing to more effective disaster preparedness and mitigation strategies. Additionally, the presentation will address the scientific and technical challenges encountered along the way and discuss the implications for future large-scale seismic studies on exascale computing systems.
-
Sadayappan, Ponnuswamy; Chamberlain, Bradford L.; Juckeland, Guido; Ltaief, Hatem (Ed.) As we approach the exascale era, it is important to verify that existing frameworks and tools will still work at that scale. Moreover, public cloud computing has been emerging as a viable solution for both prototyping and urgent computing. Using the elasticity of the cloud, we have put in place a pre-exascale HTCondor setup for running a scientific simulation in the cloud, with the chosen application being IceCube's photon propagation simulation. That is, this was not purely a demonstration run; it also produced valuable and much-needed scientific results for the IceCube collaboration. To reach the desired scale, we aggregated GPU resources across 8 GPU models from many geographic regions across Amazon Web Services, Microsoft Azure, and the Google Cloud Platform. Using this setup, we reached a peak of over 51k GPUs, corresponding to almost 380 PFLOP32s, for a total integrated compute of about 100k GPU hours. In this paper we describe the setup, the problems that were discovered and overcome, and, briefly, the actual science output of the exercise.
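As a quick back-of-envelope consistency check on those figures (ours, not from the paper): 380 PFLOP32s spread over 51,000 GPUs is roughly 7.5 TFLOP32s per GPU on average, which is plausible for the mix of cloud GPU models aggregated.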
-
Due to the recent announcement of the Frontier supercomputer, many scientific application developers are working to make their applications compatible with AMD (CPU-GPU) architectures, which means moving away from traditional CPU and NVIDIA-GPU systems. Given the current limitations of profiling tools for AMD GPUs, this shift leaves a void in how to measure application performance on AMD GPUs. In this article, we design an instruction roofline model for AMD GPUs using AMD's ROCProfiler and a benchmarking tool, BabelStream (the HIP implementation), as a way to measure an application's performance in instructions and memory transactions on new AMD hardware. Specifically, we create instruction roofline models for a case-study scientific application, PIConGPU, an open-source particle-in-cell simulation application used for plasma and laser-plasma physics, on the NVIDIA V100, AMD Radeon Instinct MI60, and AMD Instinct MI100 GPUs. When looking at the performance of multiple kernels of interest in PIConGPU, we find that although the AMD MI100 GPU achieves similar or better execution time than the NVIDIA V100 GPU, differences between profiling tools make it hard to compare the performance of the two architectures. In terms of execution time, GIPS, and instruction intensity, the AMD MI60 achieves the worst performance of the three GPUs used in this work.
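As a hedged illustration of the derived quantities such an instruction roofline uses, the sketch below computes GIPS and instruction intensity from raw counts. The input values are made up, standing in for whatever instruction and memory-transaction counters ROCProfiler reports; none of the variable names are actual ROCProfiler metric names.

```c
#include <stdio.h>

/* Instruction-roofline quantities derived from assumed profiler counts. */
int main(void)
{
    double instructions = 4.2e11; /* total instructions executed (assumed) */
    double transactions = 6.0e9;  /* total memory transactions (assumed)   */
    double seconds      = 0.85;   /* kernel execution time (assumed)       */

    double gips      = instructions / seconds / 1e9; /* giga-instructions/s */
    double intensity = instructions / transactions;  /* instructions per
                                                        memory transaction  */

    printf("GIPS = %.1f, intensity = %.1f instr/transaction\n",
           gips, intensity);
    return 0;
}
```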