

Title: DAPT: A package enabling distributed automated parameter testing
Modern agent-based models (ABM) and other simulation models require evaluation and testing of many different parameters. Managing that testing for large-scale parameter sweeps (grid searches), as well as storing simulation data, requires multiple, potentially customizable steps that may vary across simulations. Furthermore, parameter testing, processing, and analysis are slowed if simulation and processing jobs cannot be shared across teammates or computational resources. While high-performance computing (HPC) has become increasingly available, models can often be tested faster with the use of multiple computers and HPC resources. To address these issues, we created the Distributed Automated Parameter Testing (DAPT) Python package. By hosting parameters in an online (and often free) “database”, multiple individuals can run parameter sets simultaneously in a distributed fashion, enabling ad hoc crowdsourcing of computational power. Combining this with a flexible, scriptable tool set, teams can evaluate models and assess their underlying hypotheses quickly. Here, we describe DAPT and provide an example demonstrating its use.
Award ID(s): 1735095
NSF-PAR ID: 10290192
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Gigabyte
Volume: 2021
ISSN: 2709-4715
Page Range / eLocation ID: 1 to 10
Format(s): Medium: X
Sponsoring Org: National Science Foundation
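
As a rough illustration of the workflow described in the abstract above, the sketch below shows a worker loop that claims untested parameter sets from a shared store, runs the model, and records the result. The in-memory store and the helper names (claim_next, run_simulation) are hypothetical stand-ins, not DAPT's actual API; in DAPT the shared store would be the hosted online parameter "database".

```python
# Illustrative sketch only: an in-memory list stands in for the online
# parameter "database", and run_simulation stands in for launching a model.
PENDING = [
    {"id": "run-001", "growth_rate": 0.01, "adhesion": 0.4},
    {"id": "run-002", "growth_rate": 0.02, "adhesion": 0.4},
]
RESULTS = {}

def claim_next():
    """Take the next untested parameter set (hypothetical helper)."""
    return PENDING.pop(0) if PENDING else None

def run_simulation(params):
    """Stand-in for an ABM run; replace with a call to the real executable."""
    return params["growth_rate"] * params["adhesion"]  # placeholder output

# Every teammate or machine runs the same loop, so the sweep is shared.
while (params := claim_next()) is not None:
    RESULTS[params["id"]] = run_simulation(params)

print(RESULTS)
```

Because every worker pulls from the same pool, adding another laptop or HPC allocation to the sweep amounts to starting another copy of the loop.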
More Like this
  1. Obeid, Iyad ; Selesnick, Ivan ; Picone, Joseph (Ed.)
    The goal of this work was to design a low-cost computing facility that can support the development of an open source digital pathology corpus containing 1M images [1]. A single image from a clinical-grade digital pathology scanner can range in size from hundreds of megabytes to five gigabytes. A 1M image database requires over a petabyte (PB) of disk space. To do meaningful work in this problem space requires a significant allocation of computing resources. The improvements and expansions to our HPC (high-performance computing) cluster, known as Neuronix [2], required to support working with digital pathology fall into two broad categories: computation and storage. To handle the increased computational burden and increase job throughput, we are using Slurm [3] as our scheduler and resource manager. For storage, we have designed and implemented a multi-layer filesystem architecture to distribute a filesystem across multiple machines. These enhancements, which are entirely based on open source software, have extended the capabilities of our cluster and increased its cost-effectiveness.
    Slurm has numerous features that allow it to generalize to a number of different scenarios. Among the most notable is its support for GPU (graphics processing unit) scheduling. GPUs can offer a tremendous performance increase in machine learning applications [4], and Slurm's built-in mechanisms for handling them were a key factor in making this choice. Slurm has a general resource (GRES) mechanism that can be used to configure and enable support for resources beyond the ones provided by the traditional HPC scheduler (e.g., memory, wall-clock time), and GPUs are among the GRES types that can be supported by Slurm [5] (see the sketch at the end of this summary). In addition to tracking resources, Slurm strictly enforces resource allocation. This becomes very important as the computational demands of the jobs increase: it ensures that each job has all the resources it needs and does not take resources from other jobs. It is a common practice among GPU-enabled frameworks to query the CUDA runtime library/drivers and iterate over the list of GPUs, attempting to establish a context on all of them. Slurm is able to affect the hardware discovery process of these jobs, which enables a number of these jobs to run alongside each other, even if the GPUs are in exclusive-process mode.
    To store large quantities of digital pathology slides, we developed a robust, extensible distributed storage solution. We utilized a number of open source tools to create a single filesystem, which can be mounted by any machine on the network. At the lowest layer of abstraction are the hard drives, which were split into four 60-disk chassis using 8TB drives. To support these disks, we have two server units, each equipped with Intel Xeon CPUs and 128GB of RAM. At the filesystem level, we have implemented a multi-layer solution that: (1) connects the disks together into a single filesystem/mountpoint using the ZFS (Zettabyte File System) [6], and (2) connects filesystems on multiple machines together to form a single mountpoint using Gluster [7]. ZFS, initially developed by Sun Microsystems, provides disk-level awareness and a filesystem which takes advantage of that awareness to provide fault tolerance. At the filesystem level, ZFS protects against data corruption and the infamous RAID write-hole bug by implementing a journaling scheme (the ZFS intent log, or ZIL) and copy-on-write functionality.
    Each machine (1 controller + 2 disk chassis) has its own separate ZFS filesystem. Gluster, essentially a meta-filesystem, takes each of these and provides the means to connect them together over the network using distributed (similar to RAID 0, but without striping individual files) and mirrored (similar to RAID 1) configurations [8]. By implementing these improvements, it has been possible to expand the storage and computational power of the Neuronix cluster arbitrarily, scaling horizontally to support the most computationally intensive endeavors. We have greatly improved the scalability of the cluster while maintaining its excellent price/performance ratio [1].
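    The GPU scheduling described above hinges on Slurm's generic resource (GRES) mechanism, which jobs request with the --gres option. The sketch below, kept in Python for consistency with the rest of this page, writes a toy batch script that asks for a GPU via --gres and submits it with sbatch; the job name, resource limits, and training command are placeholders, not the Neuronix configuration.

```python
# Minimal sketch of requesting a GPU through Slurm's generic resource (GRES)
# mechanism; all values below are placeholders, not the cluster's settings.
import subprocess
import textwrap

def submit_gpu_job(script_path="gpu_job.sh", n_gpus=1):
    """Write a batch script that asks for GPUs via --gres and submit it."""
    batch = textwrap.dedent(f"""\
        #!/bin/bash
        #SBATCH --job-name=pathology-train
        #SBATCH --gres=gpu:{n_gpus}
        #SBATCH --mem=32G
        #SBATCH --time=04:00:00
        srun python train.py  # placeholder training command
        """)
    with open(script_path, "w") as handle:
        handle.write(batch)
    # Requires a Slurm installation; sbatch prints the assigned job id.
    return subprocess.run(["sbatch", script_path], capture_output=True, text=True)

if __name__ == "__main__":
    print(submit_gpu_job().stdout)
```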
  2.
    The National Ecological Observatory Network (NEON) is a continental-scale observatory with sites across the US collecting standardized ecological observations that will operate for multiple decades. To maximize the utility of NEON data, we envision edge computing systems that gather, calibrate, aggregate, and ingest measurements in an integrated fashion. Edge systems will employ machine learning methods to cross-calibrate, gap-fill and provision data in near-real time to the NEON Data Portal and to High Performance Computing (HPC) systems, running ensembles of Earth system models (ESMs) that assimilate the data. For the first time, gridded EC data products and response functions promise to offset pervasive observational biases through evaluating, benchmarking, optimizing parameters, and training new machine learning parameterizations within ESMs all at the same model-grid scale. Leveraging open-source software for EC data analysis, we are already building software infrastructure for integration of near-real time data streams into the International Land Model Benchmarking (ILAMB) package for use by the wider research community. We will present a perspective on the design and integration of end-to-end infrastructure for data acquisition, edge computing, HPC simulation, analysis, and validation, where Artificial Intelligence (AI) approaches are used throughout the distributed workflow to improve accuracy and computational performance.
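    As a toy illustration of one edge-computing step mentioned above, the sketch below gap-fills a short sensor series; the data, function name, and interpolation scheme are illustrative assumptions rather than anything from the NEON pipeline, which envisions machine-learning methods for this step.

```python
# Toy sketch of edge-side gap filling: missing readings (None) in a short
# sensor series are filled by linear interpolation before being forwarded.
def gap_fill(series):
    """Fill None gaps by interpolating between the nearest valid neighbors."""
    filled = list(series)
    for i, value in enumerate(filled):
        if value is None:
            left = next((j for j in range(i - 1, -1, -1) if filled[j] is not None), None)
            right = next((j for j in range(i + 1, len(filled)) if filled[j] is not None), None)
            if left is not None and right is not None:
                frac = (i - left) / (right - left)
                filled[i] = filled[left] + frac * (filled[right] - filled[left])
    return filled

# Example: a half-hourly series with two dropped observations.
print(gap_fill([1.2, None, 1.8, 2.0, None, 2.6]))  # -> [1.2, 1.5, 1.8, 2.0, 2.3, 2.6]
```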
  3. As large-scale scientific simulations and big data analyses become more popular, it is increasingly expensive to store huge amounts of raw simulation results for post-analysis. To minimize the expensive data I/O, “in-situ” analysis is a promising approach, where data analysis applications analyze the simulation-generated data on the fly without storing it first. However, it is challenging to organize, transform, and transport data at scale between two semantically different ecosystems due to their distinct software and hardware differences. To tackle these challenges, we design and implement the X-Composer framework. X-Composer connects cross-ecosystem applications to form an “in-situ” scientific workflow, and provides a unified approach and recipe for supporting such hybrid in-situ workflows on distributed heterogeneous resources. X-Composer reorganizes simulation data as continuous data streams and feeds them seamlessly into Cloud-based stream processing services to minimize I/O overheads. For evaluation, we use X-Composer to set up and execute a cross-ecosystem workflow, which consists of a parallel Computational Fluid Dynamics simulation running on HPC, and a distributed Dynamic Mode Decomposition analysis application running on Cloud. Our experimental results show that X-Composer can seamlessly couple HPC and Big Data jobs in their own native environments, achieve good scalability, and provide high-fidelity analytics for ongoing simulations in real time.
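    The coupling X-Composer provides can be pictured with the toy sketch below, where a mock simulation streams records directly into a mock analysis so no raw output is written to disk first; the names and record format are illustrative assumptions, not X-Composer's API.

```python
# Illustrative sketch of the in-situ pattern: a toy "simulation" yields each
# timestep as a stream record, and a toy "analysis" consumes it immediately.
import random

def cfd_simulation(n_steps=5, n_cells=8):
    """Toy producer emitting one field snapshot per timestep."""
    for step in range(n_steps):
        yield {"step": step, "field": [random.random() for _ in range(n_cells)]}

def streaming_analysis(stream):
    """Toy consumer computing a per-step summary as records arrive."""
    for record in stream:
        mean = sum(record["field"]) / len(record["field"])
        print(f"step {record['step']}: mean field value = {mean:.3f}")

# Couple producer and consumer directly; in the real workflow the records
# would flow from an HPC job into a Cloud stream-processing service.
streaming_analysis(cfd_simulation())
```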
  4.
    High Performance Computing (HPC) stands at the forefront of engineering innovation. With affordable and advanced HPC resources more readily accessible than ever before, computational simulation of complex physical phenomena becomes an increasingly attractive strategy for predicting the physical behavior of diverse engineered systems. Furthermore, novel applications of HPC in engineering are highly interdisciplinary, requiring advanced skills in mathematical modeling and algorithm development, as well as programming skills for parallel, distributed, and concurrent architectures and environments. This, among other possible reasons, has created a shortage of qualified workforce to conduct the much-needed research and development in these areas. This paper describes our experience with mentoring a cohort of ten high-achieving undergraduate students in Summer 2019 to conduct engineering HPC research for ten weeks at X University. Our mentoring activity was informed and motivated by an initial informal study with the goal of learning the roles and status of HPC in engineering research and what can be improved to make more effective use of it. Through a combination of email surveys, in-person interviews, and a manual analysis of faculty research profiles at X University, we learned several lessons. First, a large proportion of the engineering faculty conducts research that is highly mathematical and computational and driven by disciplinary sciences, where simulation and HPC are widely needed as solutions. Second, due to the lack of resources to provide the necessary training in software development to their students, the interviewed engineering groups are limited in their ability to fully leverage HPC capability in their research. Therefore, novel pathways for training and educating engineering researchers in HPC software development must be explored in order to further advance engineering research capability in HPC. With multi-year support from NSF, our summer research mentoring activities were able to accommodate ten high-achieving undergraduate students recruited from across the USA and their faculty mentors on the theme of HPC applications in engineering research. We describe the processes of student recruitment and selection, training and engagement, research mentoring, and professional development for the students. Best practices and lessons learned are identified and summarized based on our own observations and the evaluation conducted by an independent evaluator. In particular, improvements are being planned so as to deliver a more holistic and rigorous research experience for future cohorts.
  5. SUMMARY

    Accurate synthetic seismic wavefields can now be computed in 3-D earth models using the spectral element method (SEM), which helps improve resolution in full waveform global tomography. However, computational costs are still a challenge. These costs can be reduced by implementing a source stacking method, in which multiple earthquake sources are simultaneously triggered in only one teleseismic SEM simulation. One drawback of this approach is the perceived loss of resolution at depth, in particular because high-amplitude fundamental mode surface waves dominate the summed waveforms, without the possibility of windowing and weighting as in conventional waveform tomography.

    This can be addressed by redefining the cost function and computing the cross-correlation wavefield between pairs of stations before each inversion iteration. While the Green’s function between the two stations is not reconstructed as well as in the case of ambient noise tomography, where sources are distributed more uniformly around the globe, this is not a drawback, since the same processing is applied to the 3-D synthetics and to the data, and the source parameters are known to a good approximation. By doing so, we can separate time windows with large energy arrivals corresponding to fundamental mode surface waves. This opens the possibility of designing a weighting scheme to bring out the contribution of overtones and body waves. It also makes it possible to balance the contributions of frequently sampled paths versus rarely sampled ones, as in more conventional tomography (a toy numerical sketch of the station-pair cross-correlation appears at the end of this summary).

    Here we present the results of proof-of-concept testing of such an approach for a synthetic 3-component long-period waveform data set (periods longer than 60 s), computed for 273 globally distributed events in a simple toy 3-D radially anisotropic upper mantle model which contains shear wave anomalies at different scales. We compare the results of inversion of 10 000 s long stacked time-series, starting from a 1-D model, using source-stacked waveforms and station-pair cross-correlations of these stacked waveforms in the definition of the cost function. We compute the gradient and the Hessian using normal mode perturbation theory, which avoids the problem of cross-talk encountered when forming the gradient using an adjoint approach. We perform inversions with and without realistic noise added and show that the model can be recovered equally well using one or the other cost function.

    The proposed approach is computationally very efficient. While application to more realistic synthetic data sets, as well as to real data, is beyond the scope of this paper, since it requires additional steps to account for issues such as missing data, we illustrate how this methodology can help inform first-order questions such as model resolution in the presence of noise, and trade-offs between different physical parameters (anisotropy, attenuation, crustal structure, etc.) that would be computationally very costly to address adequately when using conventional full waveform tomography based on single-event wavefield computations.
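
    As a toy numerical illustration of the station-pair cross-correlation described in this summary, the sketch below correlates two synthetic “stacked” traces; the sampling, wave packet, and 30 s offset are arbitrary assumptions chosen only to make the example runnable, not SEM output.

```python
# Toy numerical sketch of the station-pair measurement: cross-correlate the
# source-stacked trace at station A with that at station B, over all lags.
import numpy as np

dt = 1.0                              # sampling interval in seconds (toy value)
t = np.arange(0.0, 400.0, dt)

def packet(t0):
    """Gaussian-windowed sine arriving around time t0 (toy wave packet)."""
    return np.exp(-((t - t0) / 20.0) ** 2) * np.sin(2.0 * np.pi * (t - t0) / 60.0)

trace_a = packet(150.0)               # "stacked" trace recorded at station A
trace_b = packet(180.0)               # same packet arriving 30 s later at B

# Cross-correlation wavefield between the two stations.
xcorr = np.correlate(trace_a, trace_b, mode="full")
lags = (np.arange(xcorr.size) - (trace_b.size - 1)) * dt

print(f"peak correlation at lag {lags[np.argmax(xcorr)]:.1f} s")
```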

     