Title: Dynamic Task Shaping for High Throughput Data Analysis Applications in High Energy Physics
Distributed data analysis frameworks are widely used for processing large datasets generated by instruments in scientific fields such as astronomy, genomics, and particle physics. Such frameworks partition petabyte-size datasets into chunks and execute many parallel tasks to search for common patterns, locate unusual signals, or compute aggregate properties. When well-configured, these frameworks make it easy to churn through large quantities of data on large clusters. However, configuring such a framework presents a challenge for end users, who must select a variety of parameters such as the blocking of the input data, the number of tasks, the resources allocated to each task, and the size of the nodes on which they run. If poorly configured, the application may perform many orders of magnitude worse than optimal, or may even fail to make progress at all. Even if a good configuration is found through painstaking observation, the performance may change drastically when the input data or analysis kernel changes. This paper considers the problem of automatically configuring a data analysis application for high energy physics (TopEFT) built upon standard frameworks for physics analysis (Coffea) and distributed tasking (Work Queue). We observe the inherent variability within the application, demonstrate the problems of poor configuration, and then develop several techniques for automatically sizing tasks to meet goals of resource consumption and overall application completion.
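As a rough illustration of the feedback loop that such automatic task sizing implies, the Python sketch below picks the next chunk size (in events) from resource measurements of completed tasks. The function and parameter names (choose_chunk_size, target_memory_mb) are illustrative assumptions, not the actual API of the paper, Coffea, or Work Queue.

```python
# A minimal sketch, assuming completed tasks report (events processed, peak memory MB)
# pairs. Not the paper's implementation; it only illustrates measurement-driven sizing.
from statistics import median

def choose_chunk_size(completed_tasks, target_memory_mb,
                      default_events=50_000, min_events=1_000, max_events=500_000):
    """Pick the next chunk size (in events) from measurements of finished tasks."""
    if not completed_tasks:
        # No measurements yet: start from a conservative default.
        return default_events

    # Estimate the memory cost per event from the tasks observed so far.
    per_event = median(mem / events for events, mem in completed_tasks if events > 0)

    # Size the next task so its projected peak memory meets the budget,
    # clamped to avoid degenerate tiny or enormous tasks.
    return max(min_events, min(max_events, int(target_memory_mb / per_event)))

# Example: three 10k-event tasks peaked near 1.2 GB each; with a 2 GB budget
# the next tasks grow to roughly 16k events.
history = [(10_000, 1200), (10_000, 1250), (10_000, 1180)]
print(choose_chunk_size(history, target_memory_mb=2000))
```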
Award ID(s): 1931348
NSF-PAR ID: 10356916
Author(s) / Creator(s):
Date Published:
Journal Name: IPDPS International Parallel and Distributed Processing Symposium
Page Range / eLocation ID: 346 to 356
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Many scientific applications operate on large datasets that can be partitioned and operated on concurrently. The existing approaches for concurrent execution generally rely on statically partitioned data. This static partitioning can lock performance in a sub-optimal configuration, leading to higher execution time and an inability to respond to dynamic resources. We present the Continuously Divisible Job abstraction, which allows statically defined applications to have their component tasks dynamically sized in response to system behaviour. The Continuously Divisible Job abstraction defines a simple interface that dictates how work can be recursively divided, executed, and merged. Implementing this abstraction allows scientific applications to leverage dynamic job coordinators for execution. We also propose the Virtual File abstraction, which allows read-only subsets of large files to be treated as separate files. In exploring the Continuously Divisible Job abstraction, two applications were implemented using the Continuously Divisible Job interface: a bioinformatics application and a high-energy physics event analysis. These were tested using an abstract job interface and several job coordinators. Comparing these against a previous static partitioning implementation, we show comparable or better performance without having to make static decisions or implement complex dynamic application handling.
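    As a rough illustration of the divide/execute/merge interface described above, the Python sketch below defines a minimal DivisibleJob base class and a toy recursive coordinator. The class, method, and parameter names are assumptions made for illustration, not the authors' published API.

```python
# A minimal sketch, assuming jobs can report their size, split in two, execute on
# their own slice, and merge partial results (names are illustrative).
from abc import ABC, abstractmethod

class DivisibleJob(ABC):
    @abstractmethod
    def size(self) -> int:
        """Amount of work (e.g. records or events) this job covers."""

    @abstractmethod
    def divide(self):
        """Split this job into two smaller DivisibleJob instances."""

    @abstractmethod
    def execute(self):
        """Run the job on its slice of the data and return a partial result."""

    @abstractmethod
    def merge(self, a, b):
        """Combine two partial results into one."""

def run(job, max_size):
    """Toy coordinator: divide recursively until jobs are small, then merge results."""
    if job.size() <= max_size:
        return job.execute()
    left, right = job.divide()
    return job.merge(run(left, max_size), run(right, max_size))

class SumJob(DivisibleJob):
    """Trivial example job: sum a list of numbers."""
    def __init__(self, data):
        self.data = data
    def size(self):
        return len(self.data)
    def divide(self):
        mid = len(self.data) // 2
        return SumJob(self.data[:mid]), SumJob(self.data[mid:])
    def execute(self):
        return sum(self.data)
    def merge(self, a, b):
        return a + b

print(run(SumJob(list(range(1000))), max_size=100))  # 499500
```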
  2. This work investigates how different forms of input elicitation obtained from crowdsourcing can be utilized to improve the quality of inferred labels for image classification tasks, where an image must be labeled as either positive or negative depending on the presence/absence of a specified object. Five types of input elicitation methods are tested: binary classification (positive or negative); the (x, y)-coordinate of the position where participants believe a target object is located; level of confidence in the binary response (on a scale from 0 to 100%); what participants believe the majority of the other participants' binary classification is; and the participant's perceived difficulty level of the task (on a discrete scale). We design two crowdsourcing studies to test the performance of a variety of input elicitation methods and utilize data from over 300 participants. Various existing voting and machine learning (ML) methods are applied to make the best use of these inputs. In an effort to assess their performance on classification tasks of varying difficulty, a systematic synthetic image generation process is developed. Each generated image combines items from the MPEG-7 Core Experiment CE-Shape-1 Test Set into a single image using multiple parameters (e.g., density, transparency, etc.) and may or may not contain a target object. The difficulty of these images is validated by the performance of an automated image classification method. Experiment results suggest that more accurate results can be achieved with smaller training datasets when both the crowdsourced binary classification labels and the average of the self-reported confidence values in these labels are used as features for the ML classifiers. Moreover, when a relatively larger properly annotated dataset is available, in some cases augmenting these ML algorithms with the results (i.e., probability of outcome) from an automated classifier can achieve even higher performance than what can be obtained by using any one of the individual classifiers. Lastly, supplementary analysis of the collected data demonstrates that other performance metrics of interest, namely reduced false-negative rates, can be prioritized through special modifications of the proposed aggregation methods.
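    The feature construction highlighted in the abstract, combining crowdsourced binary votes with the workers' average self-reported confidence, can be sketched as follows on synthetic data. The worker-response model, dataset sizes, and classifier choice below are illustrative assumptions, not the study's exact setup.

```python
# A minimal sketch: for each image, combine the fraction of positive crowd votes
# with the workers' mean self-reported confidence, and feed both as features to a
# standard classifier. The data is synthetic; the real study uses 300+ participants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_images, n_workers = 200, 7

true_labels = rng.integers(0, 2, size=n_images)          # ground truth (0/1)
# Assume workers are right ~80% of the time (illustrative response model).
votes = np.where(rng.random((n_images, n_workers)) < 0.8,
                 true_labels[:, None], 1 - true_labels[:, None])
confidence = rng.uniform(0.5, 1.0, size=(n_images, n_workers))

# Feature 1: fraction of positive votes; Feature 2: mean reported confidence.
X = np.column_stack([votes.mean(axis=1), confidence.mean(axis=1)])

clf = LogisticRegression().fit(X[:150], true_labels[:150])   # small training set
print("held-out accuracy:", clf.score(X[150:], true_labels[150:]))
```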
  3. The size and amount of data captured from numerous sources has created a situation where the large quantity of data challenges our ability to understand the meaning within the data. This has motivated studies for mechanized data analysis and in particular for the clustering, or partitioning, of data into related groups. In fact, the size of the data has grown to the point where it is now often necessary to stream the data through the system for online and high speed analysis. This paper explores the application of approximate methods for the stream clustering of high-dimensional data (feature sizes of 100+ measures). In particular, the algorithm that has been developed, called streamingRPHash, combines Random Projection with Locality Sensitive Hashing and a count-min sketch to implement a high-performance method for the parallel and distributed clustering of streaming data in a MapReduce framework. streamingRPHash is able to perform clustering at a rate much faster than traditional clustering algorithms such as K-Means. streamingRPHash provides clustering results that are only slightly less accurate than K-Means, but in runtimes that are nearly half those required by K-Means. The performance advantage for streamingRPHash becomes even more significant as the dimensionality of the input data stream increases. Furthermore, the experimental results show that streamingRPHash has a near linear speedup relative to the number of CPU cores. This speedup efficiency is possible because the approximate methods used in streamingRPHash allow independent and largely unsynchronized analyses to be performed on each streamed data vector.
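    The three ingredients named in the abstract (random projection, sign-based locality-sensitive hashing, and a count-min sketch) can be combined as in the toy Python sketch below. This is a didactic approximation, not the streamingRPHash implementation; the hash construction, parameters, and synthetic data are assumptions.

```python
# A toy sketch: project each high-dimensional vector with a fixed random matrix,
# use the sign pattern of the projection as an LSH bucket id, and track approximate
# bucket frequencies with a count-min sketch. Not the authors' implementation.
import numpy as np

dim, proj_dim = 128, 16            # input and projected dimensionality
depth, width = 4, 1024             # count-min sketch rows and columns
rng = np.random.default_rng(1)

projection = rng.normal(size=(dim, proj_dim))      # shared random projection
seeds = rng.integers(1, 2**31 - 1, size=depth)     # one hash multiplier per row
cms = np.zeros((depth, width), dtype=np.int64)

def bucket_id(x):
    """LSH bucket: pack the sign bits of the random projection into an integer."""
    bits = (x @ projection) > 0
    return sum(int(b) << i for i, b in enumerate(bits))

def update(x):
    """Stream one vector: hash its bucket into every row of the count-min sketch."""
    b = bucket_id(x)
    for row, seed in enumerate(seeds):
        cms[row, (b * int(seed)) % width] += 1

def estimate(b):
    """Approximate bucket count: the minimum over the sketch rows."""
    return min(cms[row, (b * int(seed)) % width] for row, seed in enumerate(seeds))

# Stream synthetic vectors drawn around two well-separated centers, then query the
# sketch: the centers' buckets accumulate counts far above those of random buckets.
for _ in range(5000):
    center = rng.choice([-3.0, 3.0])
    update(center + rng.normal(size=dim))

for center in (-3.0, 3.0):
    print("approx. count of the bucket at", center, ":",
          estimate(bucket_id(np.full(dim, center))))
```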
  4. Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning? We study this question on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. We test the performance of four LLMs on three textual reasoning datasets using prompts that include explanations in multiple different styles. For these tasks, we find that including explanations in the prompts for OPT, GPT-3 (davinci), and InstructGPT (text-davinci-001) yields only small to moderate accuracy improvements over standard few-shot learning. However, text-davinci-002 is able to benefit more substantially. We further show that explanations generated by the LLMs may not entail the models' predictions nor be factually grounded in the input, even on simple tasks with extractive explanations. However, these flawed explanations can still be useful as a way to verify LLMs' predictions post-hoc. Through analysis in our three settings, we show that explanations judged by humans to be good---logically consistent with the input and the prediction---are more likely to co-occur with accurate predictions. Following these observations, we train calibrators using automatically extracted scores that assess the reliability of explanations, allowing us to improve performance post-hoc across all of our datasets.
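    The post-hoc calibration idea can be sketched as follows: score each generated explanation with simple automatic features and fit a calibrator that predicts whether the accompanying prediction is correct. The features (lexical overlap with the input, explanation length) and the toy data below are assumptions made for illustration; the paper's automatically extracted scores differ.

```python
# A minimal sketch of post-hoc calibration over explanation features.
# Feature choices and examples are illustrative, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

def explanation_features(input_text, explanation):
    inp = set(input_text.lower().split())
    exp_words = explanation.lower().split()
    overlap = len(inp & set(exp_words)) / max(1, len(exp_words))  # grounding in input
    return [overlap, len(exp_words)]                              # plus raw length

# Toy labeled examples: (input passage, model explanation, was the answer correct?)
examples = [
    ("the cat sat on the mat", "the passage says the cat sat on the mat", 1),
    ("the cat sat on the mat", "dogs usually dislike rain", 0),
    ("alice gave bob the book", "alice gave bob the book so bob has it", 1),
    ("alice gave bob the book", "carol likes music very much", 0),
]

X = np.array([explanation_features(i, e) for i, e, _ in examples])
y = np.array([label for _, _, label in examples])

calibrator = LogisticRegression().fit(X, y)
# At inference time, a low calibrated probability flags predictions to distrust.
print(calibrator.predict_proba(X)[:, 1].round(2))
```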
  5. Summary

    One of the activities of the Pacific Rim Applications and Grid Middleware Assembly (PRAGMA) is fostering Virtual Biodiversity Expeditions by bringing domain scientists and cyber infrastructure specialists together as a team. Over the past few years, PRAGMA members have been collaborating on virtualizing the Lifemapper software. Virtualization and cloud computing have introduced great flexibility and efficiency into IT projects. Virtualization refers to the technologies that provide a layer of abstraction between the server hardware and the software that runs on it. This abstraction enables a logical view of computing resources and allows multiple servers to run on the same hardware. With this project, we are virtualizing Lifemapper by enabling its installation and configuration on a virtual cluster. Virtualization provides application scalability, maximizes resource utilization, and creates a more efficient, agile, and automated infrastructure. However, there are downsides to the complexity inherent in these environments, including the need for special techniques to deploy cluster hosts, dependence on virtual environments, and challenging application installation, management, and configuration. In this study, we report on progress of the Lifemapper virtualization framework, focused on a reproducible and highly configurable infrastructure capable of fast deployment.

    Lifemapper is a distributed software application developed by the Biodiversity Institute at The University of Kansas. Lifemapper creates and maintains a publicly accessible archive of species distribution maps calculated from public specimen data. Lifemapper software also provides a suite of tools for biodiversity researchers that calculate single and multispecies distribution predictions and macroecological analyses through application programming interfaces. Our goal is to create a viable solution that can be easily adopted and reused by scientists from multiple institutions or projects. This solution (1) allows fast deployment of ready‐made cluster images, (2) reproduces the complete Lifemapper processing pipeline on demand at multiple sites and in different hosting environments, and (3) enables scientists to perform Lifemapper‐facilitated data processing on restricted‐use data, very large datasets, or other unique data.

    A key contribution of this work is describing the practical experience of taking a complex, clustered, domain-specific data analysis and simulation system and enabling its operation on a variety of system configurations. Uses of this portability range from whole-cluster replication to teaching and experimentation on a single laptop. System virtualization is used to practically define and make portable the full application stack, including its complex set of supporting software, and allows Lifemapper deployment in a variety of environments.