Title: TRACER (TRACe route ExploRer): A tool to explore OSG/WLCG network route topologies
The experiments at the Large Hadron Collider (LHC) rely upon a complex distributed computing infrastructure (WLCG) consisting of hundreds of individual sites worldwide at universities and national laboratories, providing about half a billion computing job slots and an exabyte of storage interconnected through high-speed networks. Wide Area Networking (WAN) is one of the three pillars of LHC computing, together with computational resources and storage. More than 5 PB/day are transferred between WLCG sites. Monitoring is a crucial component of WAN and experiment operations. In the past years all experiments have invested significant effort to improve monitoring and to integrate networking information with data management and workload management systems. All WLCG sites are equipped with perfSONAR servers to collect a wide range of network metrics. We present the latest developments providing a 3D force-directed graph visualization of the data collected by perfSONAR. The visualization package allows site admins, network engineers, scientists and network researchers to better understand the topology of our Research and Education networks, and to identify unreliable and/or nonoptimal network paths, such as those with routing loops or rapidly changing routes.
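The loop and route-churn checks alluded to above can be sketched mechanically. The snippet below is an illustrative helper, not TRACER's actual code: it flags routers that repeat within a traceroute hop list and estimates how often a path changes between successive measurements.

```python
from collections import Counter

def find_routing_loops(hops):
    """Return routers that appear more than once in a traceroute path.

    A repeated hop suggests a routing loop. Hypothetical helper,
    not part of TRACER itself; None marks a non-responding hop."""
    counts = Counter(h for h in hops if h is not None)
    return [router for router, n in counts.items() if n > 1]

def path_churn(paths):
    """Fraction of consecutive measurements whose hop list changed,
    a rough indicator of a rapidly changing route."""
    if len(paths) < 2:
        return 0.0
    changes = sum(1 for a, b in zip(paths, paths[1:]) if a != b)
    return changes / (len(paths) - 1)
```

A real deployment would of course normalize router aliases and tolerate missing hops before applying such checks.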
Award ID(s):
1836650
PAR ID:
10258177
Author(s) / Creator(s):
Date Published:
Journal Name:
International Journal of Modern Physics A
Volume:
36
Issue:
05
ISSN:
0217-751X
Page Range / eLocation ID:
2130005
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Doglioni, C.; Kim, D.; Stewart, G.A.; Silvestris, L.; Jackson, P.; Kamleh, W. (Ed.)
    High Energy Physics (HEP) experiments rely on networks as a critical part of their infrastructure, both within the participating laboratories and sites and globally to interconnect the sites, data centres and experiment instrumentation. Network virtualisation and programmable networks are two key enablers that facilitate agile, fast and more economical network infrastructures as well as service development, deployment and provisioning. Adoption of these technologies by HEP sites and experiments will allow them to design more scalable and robust networks while decreasing the overall cost and improving the effectiveness of resource utilization. The primary challenge we currently face is ensuring that WLCG and its constituent collaborations will have the networking capabilities required to most effectively exploit LHC data for the lifetime of the LHC. In this paper we provide a high-level summary of the HEPiX NFV Working Group report that explored some of the novel network capabilities that could potentially be deployed in time for HL-LHC.
  2. Doglioni, C.; Kim, D.; Stewart, G.A.; Silvestris, L.; Jackson, P.; Kamleh, W. (Ed.)
    WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The OSG Networking Area, in partnership with WLCG, is focused on being the primary source of networking information for its partners and constituents. It was established to ensure sites and experiments can better understand and fix networking issues, while providing an analytics platform that aggregates network monitoring data with higher-level workload and data transfer services. This has been facilitated by the global network of perfSONAR instances that have been commissioned and are operated in collaboration with the WLCG Network Throughput Working Group. An additional important update is the inclusion of the newly funded NSF project SAND (Service Analytics and Network Diagnosis), which focuses on network analytics. This paper describes the current state of the network measurement and analytics platform and summarises the activities taken by the working group and our collaborators. This includes the progress being made in providing higher-level analytics, alerting and alarming from the rich set of network metrics we are gathering.
  3. De_Vita, R; Espinal, X; Laycock, P; Shadura, O (Ed.)
    Predicting the performance of various infrastructure design options in complex federated infrastructures with computing sites distributed over a wide area network that support a plethora of users and workflows, such as the Worldwide LHC Computing Grid (WLCG), is not trivial. Due to the complexity and size of these infrastructures, it is not feasible to deploy experimental test-beds at large scales merely for the purpose of comparing and evaluating alternate designs. An alternative is to study the behaviours of these systems using simulation. This approach has been used successfully in the past to identify efficient and practical infrastructure designs for High Energy Physics (HEP). A prominent example is the Monarc simulation framework, which was used to study the initial structure of the WLCG. New simulation capabilities are needed to simulate large-scale heterogeneous computing systems with complex networks, data access and caching patterns. A modern tool to simulate HEP workloads that execute on distributed computing infrastructures, based on the SimGrid and WRENCH simulation frameworks, is outlined. Studies of its accuracy and scalability are presented using HEP as a case study. Hypothetical adjustments to prevailing computing architectures in HEP are studied, providing insights into the dynamics of a part of the WLCG and candidates for improvements.
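As a toy illustration of the simulation approach, deliberately far simpler than the SimGrid/WRENCH models described here and with all parameters hypothetical, the sketch below dispatches jobs to heterogeneous sites and accounts for both data stage-in over the network and compute time:

```python
import heapq

def simulate(jobs, sites):
    """Toy discrete-event simulation of a federated infrastructure.

    jobs: list of (work, input_gb) pairs; sites: list of
    (cpu_speed, bandwidth_gb_per_s) pairs. Each job is dispatched to
    the earliest-free site; the makespan is returned. Illustrative
    only -- SimGrid/WRENCH model contention, caching and topology
    in far more detail."""
    # Priority queue of (time the site becomes free, site index).
    free_at = [(0.0, i) for i in range(len(sites))]
    heapq.heapify(free_at)
    makespan = 0.0
    for work, input_gb in jobs:
        start, i = heapq.heappop(free_at)
        speed, bw = sites[i]
        # Stage the input in over the network, then compute.
        finish = start + input_gb / bw + work / speed
        heapq.heappush(free_at, (finish, i))
        makespan = max(makespan, finish)
    return makespan
```

Even a model this crude exposes the trade-off the paper studies: doubling site count halves the makespan of a CPU-bound workload, while a network-bound one barely improves.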
  4. De_Vita, R; Espinal, X; Laycock, P; Shadura, O (Ed.)
    The Large Hadron Collider (LHC) experiments distribute data by leveraging a diverse array of National Research and Education Networks (NRENs), where experiment data management systems treat networks as a "blackbox" resource. After the High Luminosity upgrade, the Compact Muon Solenoid (CMS) experiment alone will produce roughly 0.5 exabytes of data per year. NRENs are a critical part of the success of CMS and the other LHC experiments. However, during data movement, NRENs are unaware of data priorities, importance, or the need for quality of service, and this poses a challenge for operators to coordinate the movement of data and achieve predictable data flows across multi-domain networks. The overarching goal of SENSE (The Software-defined network for End-to-end Networked Science at Exascale) is to enable National Labs and universities to request and provision end-to-end intelligent network services for their application workflows, leveraging SDN (Software-Defined Networking) capabilities. This work aims to allow the LHC experiments and Rucio, the data management software used by the CMS experiment, to allocate and prioritize certain data transfers over the wide area network. In this paper, we will present the current progress of the integration of SENSE, a multi-domain end-to-end SDN orchestration system with QoS (Quality of Service) capabilities, with Rucio.
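The kind of prioritization SENSE enables can be illustrated with a minimal weighted-share policy. This is a hypothetical sketch, not the SENSE or Rucio API; transfer names and weights are invented for illustration:

```python
def allocate_bandwidth(total_gbps, transfers):
    """Split a link's capacity among transfers in proportion to their
    priority weight -- a simplified stand-in for the QoS policies an
    SDN orchestrator could provision end to end.

    transfers: dict mapping transfer id -> priority weight."""
    total_weight = sum(transfers.values())
    if total_weight == 0:
        return {t: 0.0 for t in transfers}
    return {t: total_gbps * w / total_weight
            for t, w in transfers.items()}
```

For example, `allocate_bandwidth(100.0, {"reprocessing": 3, "user": 1})` gives the reprocessing flow three quarters of the link; a real orchestrator would additionally enforce the shares in the data plane across every domain the path traverses.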
  5. With the emergence of IoT applications, 5G, and edge computing, network resource allocation has shifted toward the edge, bringing services closer to the end users. These applications often require communication with the core network for purposes that include cloud storage, compute offloading, 5G-and-Beyond transport communication between the centralized unit (CU), distributed unit (DU) and core network, centralized network monitoring and management, etc. As the number of these services increases, efficient and reliable connectivity between the edge and core networks is of the essence. Wavelength Division Multiplexing (WDM) is a well-suited technology for transferring large amounts of data by simultaneously transmitting several wavelength-multiplexed data streams over each single fiber-optic link. WDM is the technology of choice in mid-haul and long-haul transmission networks, including edge-to-core networks, to offer increased transport capacity. Optical networks are prone to failures of components such as network fiber links, sites, and transmission ports. A single network element failure alone can cause significant traffic loss due to the disruption of many active data flows. Thus, fault-tolerant and reliable network designs remain a priority. The architecture called "dual-hub and dual-spoke" is often used in metro area networks (MANs). A dual-hub, or in general a multi-hub, network consists of a set of designated destination nodes (hubs) to which the data traffic from all other nodes (the peripherals) is directed. Multiple hubs offer redundant connectivity to and from the core or wide area network (WAN) through geographical diversity. The routing of the connections (also known as lightpaths) between the peripheral node and the hubs has to be carefully computed to maximize path diversity across the edge-to-core network. This means that, whenever possible, the established redundant lightpaths must not contain a common Shared Risk Link Group (SRLG). An algorithm is proposed to compute the most reliable set of SRLG-disjoint shortest paths from any peripheral to all hubs. The proposed algorithm can also be used to evaluate the overall edge-to-core network reliability, quantified through a newly introduced figure of merit.
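One common way to realize such SRLG-disjoint routing is a greedy scheme: route to the first hub with Dijkstra over SRLG-tagged edges, then ban the risk groups already used before routing to the next hub. The sketch below illustrates that idea under those assumptions; it is not the paper's exact algorithm, and the graph encoding is hypothetical:

```python
import heapq

def shortest_path(adj, src, dst, banned_srlgs):
    """Dijkstra over edges tagged with SRLG ids, skipping banned groups.

    adj: node -> list of (neighbor, cost, srlg_id).
    Returns (path, srlgs_used) or None if dst is unreachable."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost, srlg in adj.get(u, []):
            if srlg in banned_srlgs:
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = (u, srlg)
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, srlgs, node = [dst], set(), dst
    while node != src:
        node, srlg = prev[node]
        path.append(node)
        srlgs.add(srlg)
    return list(reversed(path)), srlgs

def srlg_disjoint_paths(adj, peripheral, hubs):
    """Greedily route the peripheral to each hub, banning the SRLGs
    already used so successive lightpaths share no risk group.
    A greedy sketch of the idea, not the paper's algorithm."""
    banned, routes = set(), {}
    for hub in hubs:
        result = shortest_path(adj, peripheral, hub, banned)
        if result is None:
            routes[hub] = None  # no SRLG-disjoint path remains
            continue
        path, used = result
        routes[hub] = path
        banned |= used
    return routes
```

Note that greedy SRLG banning is order-dependent and can fail where a joint optimization would succeed, which is one reason a dedicated algorithm and reliability figure of merit are worth developing.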