Significant obstacles exist in scientific domains such as genetics, climate modeling, and astronomy due to the challenges of managing, preprocessing, and training on complex data for deep learning. While several large-scale solutions offer distributed execution environments, open-source alternatives that integrate scalable runtime tools, deep learning frameworks, and data frameworks on high-performance computing (HPC) platforms remain crucial for accessibility and flexibility. In this paper, we introduce Deep Radical-Cylon (RC), a heterogeneous runtime system that combines data engineering, deep learning frameworks, and workflow engines across several HPC environments, including cloud and supercomputing infrastructures. Deep RC supports heterogeneous systems with accelerators, allows the use of communication libraries such as MPI, GLOO, and NCCL across multi-node setups, and facilitates parallel and distributed deep learning pipelines by utilizing Radical Pilot as a task execution framework. Running an end-to-end pipeline of preprocessing, model training, and postprocessing with 11 neural forecasting models (PyTorch) and hydrology models (TensorFlow) under identical resource conditions, the system reduces execution time by 3.28 and 75.9 seconds, respectively. The design of Deep RC guarantees the smooth integration of scalable data frameworks, such as Cylon, with deep learning processes, exhibiting strong performance on cloud platforms and scientific HPC systems. By offering a flexible, high-performance solution for resource-intensive applications, this approach closes the gap between data preprocessing, model training, and postprocessing.
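The abstract describes submitting pipeline stages as tasks through Radical Pilot while the training stage itself relies on PyTorch distributed backends (MPI, GLOO, NCCL). The sketch below illustrates that general pattern with the RADICAL-Pilot Python API; it is not the Deep RC codebase, and the resource label, core counts, and script names are hypothetical placeholders.

```python
# Illustrative sketch only: submit a multi-rank training task via RADICAL-Pilot.
# Resource label, core counts, and script names are hypothetical placeholders.
import radical.pilot as rp

session = rp.Session()
pmgr = rp.PilotManager(session=session)
tmgr = rp.TaskManager(session=session)

# Acquire a pilot (placeholder resource) on which the pipeline tasks will run.
pilot = pmgr.submit_pilots(rp.PilotDescription({
    'resource': 'local.localhost',   # e.g. an HPC or cloud resource label
    'cores'   : 8,
    'runtime' : 60,                  # minutes
}))
tmgr.add_pilots(pilot)

# One task per pipeline stage; here, the distributed training stage.
train = rp.TaskDescription({
    'executable': 'python3',
    'arguments' : ['train_forecaster.py'],  # hypothetical training script
    'ranks'     : 4,                         # launched as 4 parallel ranks
})
tmgr.submit_tasks([train])
tmgr.wait_tasks()
session.close()
```

Inside such a training script, the PyTorch side would typically call `torch.distributed.init_process_group(backend="nccl")` (or `"gloo"`/`"mpi"`) before wrapping the model in `DistributedDataParallel`.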
Parla: a Python orchestration system for heterogeneous architectures
Python's ease of use and rich collection of numeric libraries make it an excellent choice for rapidly developing scientific applications. However, composing these libraries to take advantage of complex heterogeneous nodes is still difficult. To simplify writing multi-device code, we created Parla, a heterogeneous task-based programming framework that fully supports Python's scientific programming stack. Parla's API is based on Python decorators and allows users to wrap code in Parla tasks for parallel execution. Parla arrays enable automatic movement of data between devices. The Parla runtime handles resource-aware mapping, scheduling, and execution of tasks. Compared to other Python tasking systems, Parla is unique in its parallelization of tasks within a single process, its GPU context and resource-aware runtime, and its design around gradual adoption to provide easy migration of and integration into existing Python applications. We show that Parla can achieve performance competitive with hand-optimized code while improving ease of development.
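A minimal sketch of the decorator-based tasking style the abstract describes is shown below. The module paths, the `TaskSpace` helper, and the `placement` and dependency arguments follow Parla's published examples, but they should be treated as assumptions about the exact API rather than verified usage.

```python
# Illustrative sketch of Parla-style tasking; import paths and decorator
# arguments are assumptions based on the description above.
import numpy as np
from parla import Parla
from parla.cpu import cpu
from parla.tasks import spawn, TaskSpace

def main():
    T = TaskSpace("T")
    data = np.arange(1_000_000, dtype=np.float64)

    # Wrap a block of code in a Parla task; the runtime maps it to a CPU device.
    @spawn(T[0], placement=cpu)
    def partial_sum():
        print("partial sum:", data[:500_000].sum())

    # A second task that waits on the first before running.
    @spawn(T[1], dependencies=[T[0]], placement=cpu)
    def total_sum():
        print("total sum:", data.sum())

if __name__ == "__main__":
    # All tasks execute within a single Parla runtime context.
    with Parla():
        main()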
- Award ID(s): 2006943
- PAR ID: 10439105
- Date Published:
- Journal Name: SC '22: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Battery-free and intermittently powered devices offer long lifetimes and enable deployment in new applications and environments. Unfortunately, developing sophisticated inference-capable applications is still challenging due to the lack of platform support for more advanced (32-bit) microprocessors and specialized accelerators, which can execute data-intensive machine learning tasks but add complexity across the stack when dealing with intermittent power. We present Protean to bridge the platform gap for inference-capable battery-free sensors. Protean is designed for runtime scalability, matching the dynamic range of energy harvesters with heterogeneous processing elements such as neural network accelerators. We develop a modular "plug-and-play" hardware platform, SuperSensor, with a reconfigurable energy storage circuit that powers a 32-bit ARM-based microcontroller with a convolutional neural network accelerator. An adaptive task-based runtime system, Chameleon, provides intermittency-proof execution of machine learning tasks across heterogeneous processing elements. The runtime automatically scales and dispatches these tasks based on incoming energy, current state, and programmer annotations. A code generator, Metamorph, automates conversion of ML models to intermittent-safe execution across heterogeneous compute elements. We evaluate Protean with audio and image workloads and demonstrate up to 666x improvement in inference energy efficiency by enabling usage of modern computational elements within intermittent computing. Further, Protean provides up to 166% higher throughput compared to non-adaptive baselines.
- Novice programmers often struggle with code understanding and debugging. Live Programming environments visualize the runtime values of a program each time it is modified to provide immediate feedback, which helps with tracing the program execution. This paper presents the use of a Live Programming tool in a CS1 course to better understand the impact of Live Programming on novices' learning metrics and their perceptions of the tool. We conducted a within-subjects study at a large public university in a CS1 course in Python (N=237) where students completed tasks in a lab setting, in some cases with a Live Programming environment and in some cases without. Through post-lab surveys and open-ended feedback, we measured how well students understood the material and how students perceived the programming environment. To understand the impact of Live Programming, we compared the collected data for students who used Live Programming with the data for students who did not. We found that while learning outcomes were the same regardless of whether Live Programming was used or not, students who used the Live Programming tool completed some code tracing tasks faster. Furthermore, students liked the Live Programming environment more, and rated it as more helpful for their learning.
- Modern mobile users commonly use multiple heterogeneous mobile devices, including smartphones, tablets, and wearables. Enabling these devices to seamlessly share their computational, network, and sensing resources has great potential benefit. Sharing resources across collocated mobile devices creates mobile device clouds (MDCs), commonly used to optimize application performance and to enable novel applications. However, enabling heterogeneous mobile devices to share their resources presents a number of difficulties, including the need to coordinate and steer the execution of devices with dissimilar network interfaces, application programming models, and system architectures. In this paper, we describe a solution that systematically empowers heterogeneous mobile devices to seamlessly, reliably, and efficiently share their resources. We present a programming model and runtime support for heterogeneous mobile device-to-device resource sharing. Our solution comprises a declarative domain-specific language for device-to-device cooperation, supported by a powerful runtime infrastructure. We evaluated our solution by conducting a controlled user study and running performance/energy efficiency benchmarks. The evaluation results indicate that our solution can become a practical tool for enhancing the capabilities of modern mobile applications by leveraging the resources of nearby mobile devices.
- Despite advancements in the areas of parallel and distributed computing, the complexity of programming on High Performance Computing (HPC) resources has deterred many domain experts, especially in the areas of machine learning and artificial intelligence (AI), from utilizing the performance benefits of such systems. Researchers and scientists favor high-productivity languages to avoid the inconvenience of programming in low-level languages and the costs of acquiring the necessary skills required for programming at this level. In recent years, Python, with the support of linear algebra libraries like NumPy, has gained popularity despite limitations that prevent such code from running in a distributed fashion. Here we present a solution which maintains both high-level programming abstractions and parallel and distributed efficiency. Phylanx is an asynchronous array processing toolkit which transforms Python and NumPy operations into code which can be executed in parallel on HPC resources by mapping Python and NumPy functions and variables into a dependency tree executed by HPX, a general-purpose, parallel, task-based runtime system written in C++. Phylanx additionally provides introspection and visualization capabilities for debugging and performance analysis. We have tested the foundations of our approach by comparing our implementation of widely used machine learning algorithms to accepted NumPy standards.
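The Phylanx entry above describes decorating NumPy-based Python functions so they are transformed into expression trees executed by HPX. The snippet below is a minimal sketch of that usage pattern; the `@Phylanx` decorator name follows the project's public examples, but the exact import path and the supported NumPy subset are assumptions.

```python
# Minimal sketch of Phylanx-style usage; decorator and import are assumptions
# based on the project's public examples, not a verified API reference.
import numpy as np
from phylanx import Phylanx

@Phylanx
def weighted_sum(a, w):
    # Translated into a Phylanx/HPX expression tree and executed in parallel.
    return np.sum(a * w)

a = np.arange(1_000_000, dtype=np.float64)
w = np.ones_like(a)
print(weighted_sum(a, w))
```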

