Title: Systematic benchmarking of HTTPS third party copy on 100Gbps links using XRootD
The High Luminosity Large Hadron Collider poses a data challenge: the amount of data recorded by the experiments and transported to hundreds of sites will see a thirty-fold increase in annual data volume. A systematic approach to comparing the performance of different Third Party Copy (TPC) transfer protocols is therefore needed. Two contenders, XRootD-HTTPS and GridFTP, are evaluated on their performance in transferring files from one server to another over 100 Gbps interfaces. The benchmarking is done by scheduling pods on the Pacific Research Platform Kubernetes cluster to ensure reproducible and repeatable results. This opens a future pathway for network testing of any TPC transfer protocol.
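
As an illustration of the mechanism being benchmarked: an HTTPS TPC transfer is driven by a single WebDAV COPY request sent to one of the two storage endpoints, which then move the data directly between themselves. The Python sketch below shows the "pull" form of such a request; the host names, port, paths and token are placeholders rather than values from the paper, and the exact headers honoured depend on the storage-server configuration.

# Minimal sketch of an HTTPS third-party-copy (TPC) request in "pull" mode:
# the client asks the destination server to fetch the file directly from the
# source server, so the payload never flows through the client.
# All URLs, the port and the token are illustrative placeholders.
import requests

SOURCE = "https://source-server.example.org:1094/store/user/testfile"
DESTINATION = "https://dest-server.example.org:1094/store/user/testfile"
TOKEN = "REPLACE_WITH_BEARER_TOKEN"  # an X.509 proxy could be used instead

resp = requests.request(
    "COPY",
    DESTINATION,
    headers={
        "Source": SOURCE,                                  # where to pull from
        "Authorization": f"Bearer {TOKEN}",                # auth at the destination
        "TransferHeaderAuthorization": f"Bearer {TOKEN}",  # forwarded to the source
    },
    verify=False,   # in production, point this at the grid CA bundle instead
    stream=True,
)
print("initial status:", resp.status_code)

# While the copy runs, the server streams periodic performance-marker lines
# in the response body, which can be used to monitor transfer progress.
for line in resp.iter_lines():
    if line:
        print(line.decode())

In the setup described in the abstract, such transfers are launched from pods scheduled on the Kubernetes cluster so that every run sees the same, reproducible environment.
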
Award ID(s):
2030508 1836650 1148698 1541349 1730158
NSF-PAR ID:
10296563
Author(s) / Creator(s):
Editor(s):
Biscarat, C.; Campana, S.; Hegner, B.; Roiser, S.; Rovelli, C.I.; Stewart, G.A.
Date Published:
Journal Name:
EPJ Web of Conferences
Volume:
251
ISSN:
2100-014X
Page Range / eLocation ID:
02001
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Nambiar, R.; Poess, M. (Ed.)
    Database systems with hybrid data management support, referred to as HTAP or HOAP architectures, are gaining popularity. These first appeared in the relational world, and the CH-benCHmark (CH) was proposed in 2011 to evaluate such relational systems. Today, one finds NoSQL database systems gaining adoption for new applications. In this paper we present CH2, a new benchmark – created with CH as its starting point – aimed at evaluating hybrid data platforms in the document data management world. Like CH, CH2 borrows from and extends both TPC-C and TPC-H. Differences from CH include a document-oriented schema, a data generation scheme that creates a TPC-H-like history, and a “do over” of the CH queries that is more in line with TPC-H. This paper details shortcomings that we uncovered in CH, the design of CH2, and preliminary results from running CH2 against Couchbase Server 7.0 (whose Query and Analytics services provide HOAP support for NoSQL data). The results provide insight into the performance isolation and horizontal scalability properties of Couchbase Server 7.0 as well as demonstrating the efficacy of CH2 for evaluating such platforms. 
  2. Doglioni, C.; Kim, D.; Stewart, G.A.; Silvestris, L.; Jackson, P.; Kamleh, W. (Ed.)
    This paper is based on a talk given at Computing in High Energy Physics in Adelaide, South Australia, Australia in November 2019. It is partially intended to explain the context of DUNE Computing for computing specialists. The Deep Underground Neutrino Experiment (DUNE) collaboration consists of over 180 institutions from 33 countries. The experiment is in preparation now, with commissioning of the first 10kT fiducial volume Liquid Argon TPC expected over the period 2025-2028 and a long data-taking run with 4 modules expected from 2029 and beyond. An active prototyping program is already in place, with a short test-beam run with a 700T, 15,360-channel prototype of single-phase readout at the Neutrino Platform at CERN in late 2018 and tests of a similarly sized dual-phase detector scheduled for mid-2019. The 2018 test-beam run was a valuable live test of our computing model. The detector produced raw data at rates of up to 2GB/s. These data were stored at full rate on tape at CERN and Fermilab and replicated at sites in the UK and Czech Republic. In total 1.2 PB of raw data from beam and cosmic triggers were produced and reconstructed during the six-week test-beam run. Baseline predictions for the full DUNE detector data, starting in the late 2020s, are 30-60 PB of raw data per year. In contrast to traditional HEP computational problems, DUNE’s Liquid Argon TPC data consist of simple but very large (many GB) 2D data objects which share many characteristics with astrophysical images. This presents opportunities to use advances in machine learning and pattern recognition as a frontier user of High Performance Computing facilities capable of massively parallel processing.
  3. Abstract—Cell-free massive MIMO (CF-mMIMO) is expected to provide reliable wireless services for a large number of user equipments (UEs) using access points (APs) distributed across a wide area. When the UEs are battery-powered, uplink energy efficiency (EE) becomes an important performance metric for CF-mMIMO systems. Therefore, if the “target” spectral efficiency (SE) is met, it is important to optimize the uplink EE when setting the transmit powers of the UEs. Also, such transmit power control (TPC) method must be tested on channel data from real-world measurements to prove its effectiveness. In this paper, we compare three different TPC algorithms using zero-forcing reception by applying them to 3.5 GHz channel measurement data featuring 30,000 possible AP locations and 8 UE locations in a 200m×200m area. We show that the max-min EE algorithm is highly effective in improving the uplink EE at a target SE, especially if the number of single-antenna APs is large, circuit power consumption is low, and the maximum allowed transmit power of the UEs is high. 
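    For context, uplink energy efficiency is conventionally measured in bits delivered per Joule consumed. A standard textbook form (not necessarily the exact definition adopted in this paper) is \( \mathrm{EE} = B \sum_{k} \mathrm{SE}_k \,/\, (\sum_{k} p_k + P_{\mathrm{circuit}}) \), where \(B\) is the system bandwidth, \(\mathrm{SE}_k\) and \(p_k\) are the spectral efficiency and transmit power of UE \(k\), and \(P_{\mathrm{circuit}}\) is the aggregate circuit power; a TPC algorithm then chooses the \(p_k\) subject to the target-SE constraint.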
  4.
    Resource disaggregation is a new architecture for data centers in which resources like memory and storage are decoupled from the CPU, managed independently, and connected through a high-speed network. Recent work has shown that although disaggregated data centers (DDCs) provide operational benefits, applications running on DDCs experience degraded performance due to extra network latency between the CPU and their working sets in main memory. DBMSs are an interesting case study for DDCs for two main reasons: (1) DBMSs normally process data-intensive workloads and require data movement between different resource components; and (2) disaggregation drastically changes the assumption that DBMSs can rely on their own internal resource management. We take the first step to thoroughly evaluate the query execution performance of production DBMSs in disaggregated data centers. We evaluate two popular open-source DBMSs (MonetDB and PostgreSQL) and test their performance with the TPC-H benchmark in a recently released operating system for resource disaggregation. We evaluate these DBMSs with various configurations and compare their performance with that of single-machine Linux with the same hardware resources. Our results confirm that significant performance degradation does occur, but, perhaps surprisingly, we also find settings in which the degradation is minor or where DDCs actually improve performance. 
  5. Abstract

    Aim

    Understanding and predicting the biological consequences of climate change requires considering the thermal sensitivity of organisms relative to environmental temperatures. One common approach involves ‘thermal safety margins’ (TSMs), which are generally estimated as the temperature differential between the highest temperature an organism can tolerate (critical thermal maximum, CTmax) and the mean or maximum environmental temperature it experiences. Yet, organisms face thermal stress and performance loss at body temperatures below their CTmax, and the steepness of that loss increases with the asymmetry of the thermal performance curve (TPC).
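
    In symbols, this conventional estimate is simply \( \mathrm{TSM} = CT_{\max} - T_{\mathrm{env}} \), where \( T_{\mathrm{env}} \) denotes the mean or maximum environmental temperature the organism experiences.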

    Location

    Global.

    Time period

    2015–2019.

    Major taxa studied

    Ants, fish, insects, lizards and phytoplankton.

    Methods

    We examine variability in TPC asymmetry and the implications for thermal stress for 384 populations from 289 species across taxa and for metrics including ant and lizard locomotion, fish growth, and insect and phytoplankton fitness.

    Results

    We find that the thermal optimum (Topt, beyond which performance declines) is more labile than CTmax, inducing interspecific variation in asymmetry. Importantly, the degree of TPC asymmetry increases with Topt. Thus, even though populations with higher Topts in a hot environment might experience above‐optimal body temperatures less often than do populations with lower Topts, they nonetheless experience steeper declines in performance at high body temperatures. Estimates of the annual cumulative decline in performance for temperatures above Topt suggest that TPC asymmetry alters the onset, rate and severity of performance decrement at high body temperatures.

    Main conclusions

    Species with the same TSMs can experience different thermal risk due to differences in TPC asymmetry. Metrics that incorporate additional aspects of TPC shape better capture the thermal risk of climate change than do TSMs.

     