Title: HOMP: Automated Distribution of Parallel Loops and Data in Highly Parallel Accelerator-Based Systems
Heterogeneous computing systems, e.g., those with accelerators in addition to the host CPUs, offer accelerated performance for a variety of workloads. However, most parallel programming models require platform-dependent, time-consuming hand-tuning to use all the resources in a system collectively and efficiently. In this work, we explore OpenMP parallel language extensions that empower users to design applications that automatically and simultaneously leverage CPUs and accelerators to further optimize use of available resources. We believe such automation will be key to ensuring codes adapt to increases in the number and diversity of accelerator resources in future computing systems. The proposed system combines language extensions to OpenMP, load-balancing algorithms and heuristics, and a runtime system for loop distribution across heterogeneous processing elements. We demonstrate the effectiveness of our automated approach to programming on systems with multiple CPUs, GPUs, and MICs.
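
To make the hand-tuning burden concrete, the sketch below shows the kind of manual, per-device loop partitioning a programmer would otherwise write with standard OpenMP offloading. The vec_scale function and the even chunking are illustrative assumptions, not HOMP code; HOMP's extensions and load-balancing runtime are designed to replace exactly this boilerplate.

    /* Sketch: manually splitting one parallel loop across all OpenMP
       offload devices -- the hand-tuning HOMP automates. The even split
       is illustrative; HOMP's load balancer sizes chunks per device. */
    #include <omp.h>

    void vec_scale(float *x, long n, float a) {
        int ndev = omp_get_num_devices();   /* accelerators visible to OpenMP */
        if (ndev == 0) {                    /* no devices: use host threads */
            #pragma omp parallel for
            for (long i = 0; i < n; ++i) x[i] *= a;
            return;
        }
        long chunk = (n + ndev - 1) / ndev; /* naive even split */
        #pragma omp parallel for num_threads(ndev)
        for (int d = 0; d < ndev; ++d) {
            long lo = d * chunk;
            long hi = lo + chunk < n ? lo + chunk : n;
            if (lo >= n) continue;          /* nothing left for this device */
            /* each host thread drives one device; map just its chunk */
            #pragma omp target teams distribute parallel for \
                    device(d) map(tofrom: x[lo:hi-lo])
            for (long i = lo; i < hi; ++i) x[i] *= a;
        }
    }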
Award ID(s):
1409946 1551182 1422961
PAR ID:
10050479
Date Published:
Journal Name:
2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
Page Range / eLocation ID:
788 to 798
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Badia, Rosa M; Mohror, Kathryn (Ed.)
    In contemporary high-performance computing architectures, the integration of GPU accelerators has become increasingly prevalent. To harness the full potential of these accelerators, developers often resort to vendor-specific kernel languages, such as CUDA. While this approach ensures optimal efficiency, it inherently compromises portability and engenders vendor dependency. Existing portable programming models, such as OpenMP, while promising, demand extensive code rewriting due to their fundamental difference from kernel languages. In this work, we introduce extensions to LLVM OpenMP, transforming it into a versatile and performance-portable kernel language for GPU programming. These extensions allow for the seamless porting of programs from kernel languages to high-performance OpenMP GPU programs with minimal modifications. To evaluate our extensions, we implemented a proof-of-concept prototype that contains a subset of the proposed extensions. We ported six established CUDA proxy and benchmark applications and evaluated their performance on both AMD and NVIDIA platforms. By comparing with native versions (HIP and CUDA), our results show that OpenMP, augmented with our extensions, can not only match but in some cases exceed the performance of kernel languages, thereby offering performance portability with minimal effort from application developers.
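
    As a rough illustration of the porting direction described above (the paper's actual extensions are not reproduced here), the sketch below pairs a CUDA-style kernel, shown in comments, with its plain OpenMP 5.x offload counterpart; the proposed extensions aim to shrink the gap between these two forms.

        // Sketch: a CUDA-style vector-add kernel and its standard OpenMP
        // offload counterpart. Plain OpenMP is shown as the portable
        // baseline the paper's kernel-language extensions build on.
        //
        // CUDA original (for reference):
        //   __global__ void vadd(const float *a, const float *b,
        //                        float *c, int n) {
        //       int i = blockIdx.x * blockDim.x + threadIdx.x;
        //       if (i < n) c[i] = a[i] + b[i];
        //   }

        void vadd_omp(const float *a, const float *b, float *c, int n) {
            #pragma omp target teams distribute parallel for \
                    map(to: a[0:n], b[0:n]) map(from: c[0:n])
            for (int i = 0; i < n; ++i)
                c[i] = a[i] + b[i];
        }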
  2. High-performance computing (HPC) systems run compute-intensive parallel applications that require large numbers of nodes. An HPC system consists of nodes with heterogeneous computer architectures, including CPUs, GPUs, field-programmable gate arrays (FPGAs), etc. Power capping is a method to improve parallel application performance subject to variable power constraints. In this paper, we propose a parallel application power and performance prediction simulator. We present a prediction model that predicts application power and performance for unknown power-capping values while accounting for heterogeneous computing architectures. We develop a job scheduling simulator based on a parallel discrete-event simulation engine. The simulator includes a power and performance prediction model, as well as a resource allocation model. Based on real-life measurements and trace data, we show the applicability of our proposed prediction model and simulator.
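
    As a toy illustration only (this is not the paper's model), the sketch below predicts runtime under a power cap by assuming dynamic power scales roughly with the cube of clock frequency, so only the frequency-sensitive fraction of an application slows down. Every name and the scaling law itself are assumptions made for this example.

        // Hypothetical sketch of a power-capping performance model.
        // Assumes P ~ f^3, so a cap at fraction p of the full budget
        // yields frequency fraction p^(1/3); memory/IO-bound time is
        // treated as insensitive to the cap.
        #include <cmath>

        double predicted_runtime(double base_runtime_s,   // runtime at full power
                                 double power_cap_w,      // applied cap
                                 double tdp_w,            // full node power budget
                                 double compute_fraction) // frequency-sensitive share
        {
            double p = power_cap_w / tdp_w;
            if (p >= 1.0) return base_runtime_s;          // cap above budget: no slowdown
            double freq_scale = std::cbrt(p);             // f/f_max under the cap
            return base_runtime_s *
                   (compute_fraction / freq_scale + (1.0 - compute_fraction));
        }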
  3. The complexity of heterogeneous computing architectures, as well as the demand for productive and portable parallel application development, have driven the evolution of parallel programming models to become more comprehensive and complex than before. Enhancing conventional compilation technologies and software infrastructure to be parallelism-aware has become one of the main goals of recent compiler development. In this work, we propose the design of a unified parallel intermediate representation (UPIR) for multiple parallel programming models, enabling unified compiler transformation across them. UPIR specifies three commonly used parallelism patterns (SPMD, data, and task parallelism), data attributes, explicit data movement and memory management, and synchronization operations used in parallel programming. We demonstrate UPIR via a prototype implementation in the ROSE compiler that unifies the IR for both OpenMP and OpenACC in both C/C++ and Fortran, unifies the transformation that lowers OpenMP and OpenACC code to the LLVM runtime, and exports UPIR to the LLVM MLIR dialect. The fully extended version of this abstract can be found at https://arxiv.org/abs/2209.10643.
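
    For illustration, the sketch below shows one hypothetical way the three parallelism patterns, data attributes, and synchronization named in the abstract could be modeled as a unified IR node, independent of OpenMP or OpenACC surface syntax. These types are invented for this example and are not the actual UPIR classes.

        // Illustrative only: a toy, model-agnostic IR node in the spirit
        // of UPIR's described design (not the real implementation).
        #include <string>
        #include <vector>

        enum class ParallelPattern { SPMD, Data, Task };         // the three patterns
        enum class DataAttr { Shared, Private, MapTo, MapFrom }; // sharing / movement

        struct UpirDataField {
            std::string symbol;   // program variable the attribute applies to
            DataAttr attr;
        };

        struct UpirRegion {
            ParallelPattern pattern;           // how the region executes
            std::vector<UpirDataField> data;   // explicit data attributes/movement
            std::vector<UpirRegion> children;  // nesting, e.g. SPMD over data-parallel
            bool has_barrier_at_exit = true;   // synchronization operation
        };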
  4. The ACM/IEEE CS 2013 report recommends fifteen hours of parallel & distributed computing (PDC) education for every undergraduate. This workshop illustrates the use of the Raspberry Pi as an inexpensive, multicore platform for teaching shared-memory parallel programming. The inexpensive and tactile nature of the Raspberry Pi enables each student to experience her own parallel multiprocessor through sight and touch. In this hands-on workshop, we will teach attendees how they can leverage the Raspberry Pi and OpenMP to teach shared-memory parallel concepts in their own classrooms. All CS educators who are interested in learning about the Raspberry Pi, shared-memory parallelism, and OpenMP are encouraged to attend. In Part I of the workshop, each participant will connect to and learn about the Raspberry Pi's multicore capabilities. In Part II, each participant will engage in self-paced, hands-on exploration of basic parallel computing concepts using the OpenMP "patternlets" from CSinParallel.org. In Part III, participants will investigate more complex applications, such as numeric integration and drug design, and study how these applications can be parallelized using OpenMP. We will conclude the workshop with a series of lightning talks discussing how the Raspberry Pi has been used to teach parallel computing concepts at different institutions. We will also present a summary of student perceptions of the Raspberry Pi. All materials from this workshop will be freely available from CSinParallel.org. Space is limited to 20 participants. A laptop is required.
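
    A first shared-memory example in the spirit of the CSinParallel.org patternlets might look like the following; this particular program is illustrative rather than taken from the workshop materials. It compiles with gcc -fopenmp hello.c.

        /* Patternlet-style first example: each core of the Pi greets
           from its own thread (illustrative, not from CSinParallel.org). */
        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            #pragma omp parallel               /* fork one thread per core */
            {
                int id = omp_get_thread_num();
                int n  = omp_get_num_threads(); /* 4 on a quad-core Raspberry Pi */
                printf("Hello from thread %d of %d\n", id, n);
            }                                  /* implicit barrier, then join */
            return 0;
        }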
  5. Centered on modern C++ and the SYCL standard for heterogeneous programming, Data Parallel C++ (DPC++) and Intel's oneAPI software ecosystem aim to lower the barrier to entry for the use of accelerators like FPGAs in diverse applications. In this work, we consider the use of FPGAs for scientific computing, in particular with a multigrid solver, MueLu. We report on early experiences implementing kernels of the solver in DPC++ for execution on Stratix 10 FPGAs, and we evaluate several algorithmic design and implementation choices. These choices not only impact performance, but also shed light on the capabilities and limitations of DPC++ and oneAPI.
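
    For readers unfamiliar with the programming model, the sketch below shows a minimal DPC++/SYCL kernel submission of the axpy flavor that multigrid smoothers build on. It is a generic illustration of the SYCL 2020 queue/buffer/accessor pattern, not code from MueLu or the paper.

        // Minimal DPC++/SYCL sketch (not MueLu code): an axpy-style kernel
        // submitted to whatever device the default selector picks
        // (CPU, GPU, or FPGA emulator).
        #include <sycl/sycl.hpp>
        #include <vector>

        int main() {
            const size_t n = 1 << 16;
            std::vector<float> x(n, 1.0f), y(n, 2.0f);
            const float a = 0.5f;

            sycl::queue q;  // default device; an FPGA selector would go here
            {
                sycl::buffer<float> xb(x.data(), sycl::range<1>(n));
                sycl::buffer<float> yb(y.data(), sycl::range<1>(n));
                q.submit([&](sycl::handler &h) {
                    sycl::accessor xa(xb, h, sycl::read_only);
                    sycl::accessor ya(yb, h, sycl::read_write);
                    h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                        ya[i] += a * xa[i];   // y = y + a*x
                    });
                });
            }   // buffer destructors wait and copy results back to the host
            return 0;
        }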