Title: FreeCompilerCamp.org: Training for OpenMP Compiler Development from Cloud
OpenMP is one of the most popular programming models for exploiting node-level parallelism on supercomputers. Many researchers are interested in developing OpenMP compilers or extending the existing standard with new capabilities. However, there is a lack of training resources for researchers involved in compiler and language development around OpenMP, making the learning curve in this area steep. In this paper, we introduce an ongoing effort, FreeCompilerCamp.org, a free and open online learning platform aimed at training researchers to quickly develop OpenMP compilers. The platform is built on top of Play-With-Docker, a Docker playground where users conduct experiments in an online terminal sandbox. It provides a live training website, hosted in the cloud, so that anyone with internet access and a web browser can take the training. It also enables developers with relevant skills to contribute new tutorials. The entire training system is open source and can be deployed on a private server, workstation, or even a laptop for personal use. We have created initial tutorials that train users to extend the Clang/LLVM and ROSE compilers to support new OpenMP features. We welcome anyone to try out our system, give us feedback, contribute new training courses, or enhance the training platform to make it an effective learning resource for the HPC community.
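The abstract itself contains no code, but a concrete sense of what the tutorials target may help. Below is a minimal, hypothetical OpenMP test input of the kind a Clang/LLVM or ROSE frontend extension built through such a tutorial must parse and lower; the program and its details are illustrative assumptions, not taken from the paper.

```cpp
// Hypothetical tutorial test input: a compiler extended via a
// FreeCompilerCamp-style exercise must recognize the directive below
// and generate the corresponding parallel runtime calls.
// Build with an OpenMP-capable compiler, e.g.: clang++ -fopenmp test.cpp
#include <cstdio>

int main() {
    const int N = 1000;
    double a[N], b[N];
    for (int i = 0; i < N; ++i) { a[i] = 0.0; b[i] = static_cast<double>(i); }

    // A standard OpenMP worksharing loop; tutorial exercises extend the
    // frontend to accept new clauses on directives like this one.
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        a[i] = 2.0 * b[i];

    std::printf("a[42] = %f\n", a[42]);
    return 0;
}
```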
Award ID(s):
2015254
PAR ID:
10132735
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Journal of Computational Science Education
Volume:
11
Issue:
1
ISSN:
2153-4136
Page Range / eLocation ID:
53-60
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The HPC industry is inexorably moving toward an era of extremely heterogeneous architectures, with more devices configured on any given HPC platform and potentially more kinds of devices, some of them highly specialized. Writing a separate code suited to each target system for a given HPC application is not practical. A better solution is to use directive-based parallel programming models such as OpenMP. OpenMP provides a number of options for offloading a piece of code to devices like GPUs. To select the best among these options during compilation, most modern compilers use analytical models to estimate the cost of executing the original code and the different offloading code variants. Building such an analytical model is a difficult task that demands substantial effort from a compiler engineer. Recently, machine learning techniques have been successfully applied to build cost models for a variety of compiler optimization problems. In this paper, we present COMPOFF, a cost model that uses multi-layer perceptrons to statically estimate the Cost of OpenMP OFFloading. We applied six different transformations to a parallel code of the Wilson Dslash operator to support GPU offloading, and we used COMPOFF at compile time to predict their cost of execution on different GPUs. Our results show that this model can predict offloading costs with a root mean squared error of less than 0.5 seconds. Our preliminary findings indicate that this work will make it much easier and faster for scientists and compiler developers to port legacy HPC applications that use OpenMP to new heterogeneous computing environments.
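To make the notion of "offloading code variants" concrete, here is a hedged sketch of two OpenMP variants of the same loop, the kind of alternatives a cost model like COMPOFF must choose between at compile time. The loop and clauses are illustrative assumptions; the paper's six transformations apply to the Wilson Dslash operator and are not reproduced here.

```cpp
// Two of the many possible OpenMP offloading variants for one loop; a
// cost model estimates which will run faster on a given GPU.
#include <vector>

void axpy_variant_a(std::vector<double>& y, const std::vector<double>& x, double a) {
    double* yp = y.data();
    const double* xp = x.data();
    const int n = static_cast<int>(y.size());
    // Variant A: distribute iterations across teams, then parallelize
    // within each team; typically better for large n on big GPUs.
    #pragma omp target teams distribute parallel for \
        map(tofrom: yp[0:n]) map(to: xp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] += a * xp[i];
}

void axpy_variant_b(std::vector<double>& y, const std::vector<double>& x, double a) {
    double* yp = y.data();
    const double* xp = x.data();
    const int n = static_cast<int>(y.size());
    // Variant B: a single team with one parallel region; uses less of the
    // device but is cheaper to launch for small n.
    #pragma omp target parallel for map(tofrom: yp[0:n]) map(to: xp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] += a * xp[i];
}
```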
  2. In computational physics, chemistry, and biology, the implementation of new techniques in shared and open-source software lowers barriers to entry and promotes rapid scientific progress. However, effectively training new software users presents several challenges. Common methods like direct knowledge transfer and in-person workshops are limited in reach and comprehensiveness. Furthermore, while the COVID-19 pandemic highlighted the benefits of online training, traditional online tutorials can quickly become outdated and may not cover all the software’s functionalities. To address these issues, here we introduce “PLUMED Tutorials,” a collaborative model for developing, sharing, and updating online tutorials. This initiative utilizes repository management and continuous integration to ensure compatibility with software updates. Moreover, the tutorials are interconnected to form a structured learning path and are enriched with automatic annotations to provide broader context. This paper illustrates the development, features, and advantages of PLUMED Tutorials, aiming to foster an open community for creating and sharing educational resources. 
  3. The computing landscape is evolving rapidly. Exascale computers, which can perform 10^18 mathematical operations per second, have arrived. At the same time, quantum supremacy has been demonstrated, with quantum computers outperforming these fastest supercomputers for certain problems. Meanwhile, artificial intelligence (AI) is transforming every aspect of science and engineering. A highly anticipated application of the emerging nexus of exascale computing, quantum computing and AI is the computational design of new materials with desired functionalities, which has been the elusive goal of the federal materials genome initiative. The rapid change in the computing landscape resulting from these developments has not been matched by the pedagogical developments needed to train the next generation of the materials engineering cyberworkforce. This gap in curricula across colleges and universities offers a unique opportunity to create educational tools enabling decentralized training of the cyberworkforce. To achieve this, we have developed training modules for a new generation of quantum materials simulator, named AIQ-XMaS (AI and quantum-computing enabled exascale materials simulator), which integrates exascalable quantum, reactive and neural-network molecular dynamics simulations with unique AI and quantum-computing capabilities to study a wide range of materials and devices of high societal impact such as optoelectronics and health. As a single-entry access point to these training modules, we have also built a CyberMAGICS (cyber training on materials genome innovation for computational software) portal, which includes step-by-step instructions in Jupyter notebooks and associated tutorials, while providing an online cloud service for those who do not have access to an adequate computing platform. The modules are incorporated into our open-source AIQ-XMaS software suite as tutorial examples and are piloted in classroom and workshop settings to directly train many users at the University of Southern California (USC) and Howard University—one of the largest historically black colleges and universities (HBCUs), with a strong focus on underrepresented groups. In this paper, we summarize these educational developments, including findings from the first CyberMAGICS Workshop for Underrepresented Groups, along with an introduction to the AIQ-XMaS software suite. Our training modules also include a new generation of open programming languages for exascale computing (e.g., OpenMP target) and quantum computing (e.g., Qiskit) used in our scalable simulation and AI engines that underlie AIQ-XMaS. Our training modules essentially support unique dual-degree opportunities at USC in the emerging exa-quantum-AI era: a Ph.D. in science or engineering, concurrently with an MS in computer science specialized in high-performance computing and simulations, an MS in quantum information science, or an MS in materials engineering with machine learning. The developed modular cyber-training pedagogy is applicable to broad engineering education at large.
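The "OpenMP target" style of exascale programming that these training modules cover can be illustrated with a short, hedged sketch (not AIQ-XMaS code): offloading a pairwise-interaction energy loop, the core pattern of molecular dynamics, to an accelerator. The function, potential, and parameters below are hypothetical.

```cpp
// Illustrative OpenMP target offload of a pairwise-energy loop over n
// particles with flat x/y/z position arrays (Lennard-Jones-like, with
// epsilon = sigma = 1); not taken from the AIQ-XMaS suite.
#include <vector>

double pair_energy(const std::vector<double>& x, const std::vector<double>& y,
                   const std::vector<double>& z) {
    const int n = static_cast<int>(x.size());
    const double *xp = x.data(), *yp = y.data(), *zp = z.data();
    double e = 0.0;
    // Map positions to the device, reduce the energy back to the host.
    #pragma omp target teams distribute parallel for reduction(+: e) \
        map(to: xp[0:n], yp[0:n], zp[0:n]) map(tofrom: e)
    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            const double dx = xp[i] - xp[j];
            const double dy = yp[i] - yp[j];
            const double dz = zp[i] - zp[j];
            const double r2 = dx * dx + dy * dy + dz * dz;
            const double inv6 = 1.0 / (r2 * r2 * r2);
            e += 4.0 * (inv6 * inv6 - inv6);
        }
    }
    return e;
}
```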
  4. The Einstein Toolkit is a complex software system for numerical general relativity, a science domain that includes colliding black holes, neutron stars, supernovae, etc. As might be expected for a framework of this size and age (parts of it are over 20 years old), there is a significant learning curve to building it, running it, writing new modules for it, and so on. Over the years, the Einstein Toolkit maintainers have given a number of tutorials for new users. In recent years, we have created a tutorial server which allows us to streamline the teaching/learning process through the use of Jupyter notebooks and Docker images. In this paper we describe the special considerations and adaptations required by the image and the notebook server that enable us to (1) easily create logins and manage accounts, which streamlines both the classroom and independent-study experiences, (2) create a simplified but natural user experience for compiling and developing a complex C++ application, and (3) scale to increasing class sizes.
  5. In recent years, as the pace of innovation in machine learning (ML) has accelerated, researchers in SysML have created algorithms and systems that parallelize ML training over multiple devices or computational nodes. As ML models become more structurally complex, many systems have struggled to provide all-round performance on a variety of models. In particular, the difficulty of ML scale-up is usually underestimated in terms of the amount of knowledge and time required to map an appropriate distribution strategy to a model. Applying parallel training systems to complex models adds nontrivial development overhead on top of model prototyping, and often results in lower-than-expected performance. This tutorial identifies research and practical pain points in parallel ML training, and discusses the latest developments in algorithms and systems that address these challenges in both usability and performance. In particular, this tutorial presents a new perspective that unifies seemingly different distributed ML training strategies and, based on it, introduces new techniques and system architectures to simplify and automate ML parallelization. This tutorial is built upon the authors' years of research and industry experience, a comprehensive literature survey, and several recent tutorials and papers published by the authors and peer researchers. The tutorial consists of four parts. The first part presents a landscape of distributed ML training techniques and systems, and highlights the major difficulties faced by real users when writing distributed ML code with big models or big data. The second part dives deep into the mainstream training strategies, guided by real use cases. By developing a new, unified formulation to represent the seemingly different data- and model-parallel strategies, we describe a set of techniques and algorithms to achieve ML auto-parallelization, as well as compiler system architectures for auto-generating and exercising parallelization strategies based on models and clusters. The third part exposes a hidden layer of practical pain points in distributed ML training, hyper-parameter tuning and resource allocation, and introduces techniques to improve these aspects. The fourth part is designed as a hands-on coding session, in which we walk the audience through writing distributed training programs in Python, using the various distributed ML tools and interfaces provided by the Ray ecosystem.
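The tutorial's hands-on session uses Python and the Ray ecosystem; as a framework-free illustration of data parallelism, the simplest of the distributed training strategies such a tutorial surveys, here is a minimal sketch in C++ using threads. Each worker computes a gradient on its shard of the batch, and the gradients are averaged before the parameter update. The toy model and all constants are hypothetical.

```cpp
// Toy data-parallel SGD: one parameter w fitting y = 2x via squared loss,
// with the batch sharded across worker threads and a manual "all-reduce".
// Build with: g++ -std=c++17 -pthread toy_dp.cpp
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    double w = 0.0;  // single parameter; the optimum is w = 2
    const std::vector<double> xs = {1, 2, 3, 4, 5, 6, 7, 8};
    const int workers = 4;
    const int shard = static_cast<int>(xs.size()) / workers;

    for (int step = 0; step < 100; ++step) {
        std::vector<double> grads(workers, 0.0);
        std::vector<std::thread> pool;
        for (int t = 0; t < workers; ++t) {
            // Each worker averages dL/dw = 2*(w - 2)*x^2 over its shard.
            pool.emplace_back([&, t] {
                for (int i = t * shard; i < (t + 1) * shard; ++i)
                    grads[t] += 2.0 * (w - 2.0) * xs[i] * xs[i] / shard;
            });
        }
        for (auto& th : pool) th.join();
        // "All-reduce" step: average per-worker gradients, then update w.
        const double g = std::accumulate(grads.begin(), grads.end(), 0.0) / workers;
        w -= 0.01 * g;
    }
    std::printf("learned w = %f (target 2.0)\n", w);
    return 0;
}
```

In a real system the all-reduce happens across processes or machines rather than a shared vector, but the structure, shard, compute local gradients, aggregate, update, is the same.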