Search for: All records

Award ID contains: 1724853

  1. Summary

    This work aims to design and implement a system able to profile and help manage the set of Internet eXchange Points (IXPs) in an Internet region. As part of the Internet Society's strategy to help monitor and understand the evolution of IXPs in a particular region, a route‐collector data analyzer tool was developed, then deployed and tested in AfriNIC. Traffic localization efforts in the African peering ecosystem would be better sustained, and their efficacy more easily assessed, if they were supported by a platform that evaluates and reports in real time on their impact on the Internet. We therefore built the "African" Route‐collectors Data Analyzer (ARDA), an open-source web platform for analyzing publicly available routing information collected since 2005 by local route‐collectors. ARDA evaluates predefined metrics that depict the status of interconnection at local, national, and regional levels. It shows that a small proportion of AfriNIC ASes (roughly 17%) are peering in the region. Through them, 58% of all African networks are visible at one IXP or more. These figures remained static from April to September 2017, and even through February 2018, underlining the need for increased efforts to improve local interconnectivity. We show how ARDA can help detect the impact of policies on the growth of local IXPs and continually provide the community with up‐to‐date empirical data on the evolution of the IXP substrate. Given its features, this tool will be a helpful compass for stakeholders in the quest for better traffic localization and new interconnection opportunities in the targeted region.

     
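ARDA's regional metrics reduce to set operations over route-collector views. As an illustration only (not ARDA's actual code), a minimal sketch of the visibility metric might look like the following, where the inputs — a set of regionally registered ASNs and per-IXP sets of ASNs observed by local route-collectors — are hypothetical:

```python
def ixp_visibility(regional_asns, ixp_views):
    """Compute ARDA-style interconnection metrics.

    regional_asns: set of ASNs registered in the region (e.g. by AfriNIC).
    ixp_views: dict mapping an IXP name to the set of ASNs observed in
               that IXP's route-collector feed.
    Returns (share of regional ASNs visible at >= 1 local IXP,
             share visible at >= 2 local IXPs).
    """
    seen_at = {}
    for ixp, asns in ixp_views.items():
        # Only count ASNs that belong to the region under study.
        for asn in asns & regional_asns:
            seen_at.setdefault(asn, set()).add(ixp)
    total = len(regional_asns)
    at_least_one = sum(1 for v in seen_at.values() if len(v) >= 1) / total
    at_least_two = sum(1 for v in seen_at.values() if len(v) >= 2) / total
    return at_least_one, at_least_two
```

Tracking these shares over successive route-collector snapshots is what lets a platform like ARDA report whether local interconnection is growing or static.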
  2. Mutually Agreed Norms on Routing Security (MANRS) is an industry-led initiative to improve Internet routing security by encouraging participating networks to implement a series of mandatory or recommended actions. MANRS members must register their IP prefixes in a trusted routing database and use this information to prevent propagation of invalid routing information. MANRS membership has increased significantly in recent years, but the impact of the MANRS initiative on overall Internet routing security remains unclear. In this paper, we provide the first independent look into the MANRS ecosystem by using publicly available data to analyze the routing behavior of participant networks. We quantify MANRS participants' level of conformance with the stated requirements, and compare the behavior of MANRS and non-MANRS networks. While not all MANRS members fully comply with all required actions, we find that they are more likely to implement the routing security practices described in MANRS actions. We assess the relevance of the MANRS effort in securing the overall routing ecosystem. We find that as of May 2022, over 83% of MANRS networks were conformant to the route filtering requirement, dropping BGP messages with invalid information according to authoritative records, and over 95% were conformant to the routing information facilitation requirement, registering their resources in authoritative databases.
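The route filtering conformance described above hinges on validating a BGP route's origin against authoritative records (ROAs). As an illustration of the underlying check — not the authors' measurement pipeline — a simplified route-origin validation in the spirit of RFC 6811 can be sketched as:

```python
import ipaddress

def rov_state(prefix, origin_asn, roas):
    """Simplified RPKI route-origin validation (cf. RFC 6811).

    prefix: announced prefix, e.g. "10.0.1.0/24".
    origin_asn: the AS originating the announcement.
    roas: list of (roa_prefix, max_length, authorized_asn) tuples.
    Returns 'valid', 'invalid', or 'not-found'.
    """
    route = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if route.version == roa_net.version and route.subnet_of(roa_net):
            covered = True  # some ROA covers this prefix
            if asn == origin_asn and route.prefixlen <= max_len:
                return 'valid'
    # Covered by a ROA but no authorization matched -> invalid;
    # no covering ROA at all -> not-found.
    return 'invalid' if covered else 'not-found'
```

A conformant MANRS network would drop routes classified as 'invalid'; measuring whether it actually does so is what the paper's conformance analysis quantifies.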
  3. The goal of this article is to offer framing for conversations about the role of measurement in informing public policy about the Internet. We review different stakeholders' approaches to measurement and the associated challenges, including the activities of U.S. government agencies. We show how taxonomies of existing harms can facilitate the search for clarity along the fraught path from identifying harms to measuring them. Looking forward, we identify barriers to advancing the empirical grounding of Internet infrastructure to inform policy, societal challenges that create pressure to overcome these barriers, and steps that could facilitate measurement to support policymaking.
  4. The Internet has become a critical component of modern civilization, requiring scientific exploration akin to endeavors to understand the land, sea, air, and space environments. Understanding the baseline statistical distributions of traffic is essential to the scientific understanding of the Internet. Correlating data from different Internet observatories and outposts can be a useful tool for gaining insights into these distributions. This work compares observed sources from the largest Internet telescope (the CAIDA darknet telescope) with those from a commercial outpost (the GreyNoise honeyfarm). Neither of these locations actively emits Internet traffic, and each provides distinct observations of unsolicited Internet traffic (primarily botnets and scanners). Newly developed GraphBLAS hypersparse matrices and D4M associative array technologies enable the efficient analysis of these data at significant scales. The CAIDA sources are well approximated by a Zipf-Mandelbrot distribution. Over a 6-month period, 70% of the brightest (highest frequency) sources in the CAIDA telescope are consistently detected by coeval observations in the GreyNoise honeyfarm. This overlap drops as the sources dim (reduce frequency) and as the time difference between the observations grows. The probability of seeing a CAIDA source is proportional to the logarithm of its brightness. The temporal correlations are well described by a modified Cauchy distribution. These observations are consistent with a correlated high-frequency beam of sources that drifts on a time scale of a month.
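The Zipf-Mandelbrot distribution mentioned above assigns a probability to each rank r proportional to 1/(r + q)^s, where s is the exponent and q the Mandelbrot offset. A minimal sketch of evaluating it (illustrative parameter values, not the paper's fitted ones):

```python
def zipf_mandelbrot(rank, s, q, n):
    """Normalized Zipf-Mandelbrot probability for a given rank.

    rank: 1-based rank of the source (1 = brightest).
    s: exponent; q: offset ("Mandelbrot" shift); n: number of ranked sources.
    """
    weight = lambda r: 1.0 / (r + q) ** s
    # Normalize over all n ranks so the probabilities sum to 1.
    norm = sum(weight(r) for r in range(1, n + 1))
    return weight(rank) / norm
```

Fitting s and q to the observed source-frequency ranking is what lets the authors characterize the telescope's "brightness" distribution; with q = 0 this reduces to the classic Zipf law.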
  5. We investigate a novel approach to the use of jitter to infer network congestion using data collected by probes in access networks. We discovered a set of features in the jitter and jitter-dispersion time series (the latter a jitter-derived series we define in this paper) that are characteristic of periods of congestion. We leverage these concepts to create a jitter-based congestion inference framework that we call Jitterbug. We apply Jitterbug's capabilities to a wide range of traffic scenarios and discover that Jitterbug can correctly identify both recurrent and one-off congestion events. We validate Jitterbug inferences against state-of-the-art autocorrelation-based inferences of recurrent congestion. We find that the two approaches have strong congruity in their inferences, but Jitterbug holds promise for detecting one-off as well as recurrent congestion. We identify several future directions for this research, including leveraging ML/AI techniques to optimize the performance and accuracy of this approach in operational settings.
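The building blocks here are a jitter series derived from consecutive latency samples and a dispersion measure over it. The paper defines jitter dispersion precisely; as an assumption for illustration only, the sketch below takes jitter as the difference between consecutive RTT samples and dispersion as the spread of jitter over a sliding window — not necessarily the authors' exact formulation:

```python
def jitter_series(rtts):
    """Jitter as the difference between consecutive RTT samples (ms)."""
    return [b - a for a, b in zip(rtts, rtts[1:])]

def jitter_dispersion(jitter, window=4):
    """Sketch of a jitter-dispersion series: the spread (max - min) of
    jitter over a sliding window. The intuition is that congested
    periods show elevated, sustained dispersion."""
    return [max(jitter[i:i + window]) - min(jitter[i:i + window])
            for i in range(len(jitter) - window + 1)]
```

A congestion detector in this spirit would then look for sustained elevation in the dispersion series rather than isolated spikes.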
  6. Web-based speed tests are popular among end-users for measuring their network performance. Thousands of measurement servers have been deployed in diverse geographical and network locations to serve users worldwide. However, most speed tests have opaque methodologies, which makes it difficult for researchers to interpret their highly aggregated test results, let alone leverage them for various studies. In this paper, we propose WebTestKit, a unified and configurable framework for facilitating automatic test execution and cross-layer analysis of test results for five major web-based speed test platforms. Capturing only the packet headers of traffic traces, WebTestKit performs in-depth analysis by carefully extracting HTTP and timing information from test runs. Our testbed experiments showed WebTestKit is lightweight and accurate in interpreting encrypted measurement traffic. We applied WebTestKit to compare the use of HTTP requests across speed tests and to investigate the root causes of inaccuracy in latency measurements, which play a vital role in test server selection and throughput estimation.
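One way timing information can be recovered from packet headers alone, even for encrypted traffic, is by pairing TCP handshake packets. As a toy illustration of that idea — WebTestKit's actual cross-layer analysis is far richer, and the packet representation below is hypothetical — per-server latency could be estimated from SYN/SYN-ACK timestamps:

```python
def handshake_rtts(packets):
    """Estimate per-server latency from packet-header timestamps alone,
    pairing each outgoing SYN with the matching SYN-ACK.

    packets: time-ordered list of dicts with keys
             'ts' (seconds), 'flags', 'src', 'dst'.
    Returns {server_ip: rtt_seconds}.
    """
    syn_ts = {}
    rtts = {}
    for p in packets:
        if p['flags'] == 'SYN':
            syn_ts[p['dst']] = p['ts']          # client -> server
        elif p['flags'] == 'SYN-ACK' and p['src'] in syn_ts:
            rtts[p['src']] = p['ts'] - syn_ts.pop(p['src'])
    return rtts
```

Because this uses only header fields, it works regardless of whether the payload is TLS-encrypted, which is what makes header-only capture sufficient for this class of latency analysis.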