Interconnection links, which connect broadband access providers with their peers, transit providers, and major content providers, are a potential point of discriminatory treatment and impairment of user experience. However, adequate data to shed light on this situation is lacking, and different actors can put forward opportunistic interpretations of data to support their points of view. In this article, we introduce a topology-aware model of interconnection to elucidate our own beliefs about how to measure interconnection links of access providers and how policymakers should interpret the results. We present six case studies that show how our conceptual model can guide a critical analysis of what is or should be measured and reported, and how to soundly interpret these measurements.
Inferring persistent interdomain congestion
There is significant interest in the technical and policy communities regarding the extent, scope, and consumer harm of persistent interdomain congestion. We provide empirical grounding for discussions of interdomain congestion by developing a system and method to measure congestion on thousands of interdomain links without direct access to them. We implement a system based on the Time Series Latency Probes (TSLP) technique that identifies links with evidence of recurring congestion suggestive of an under-provisioned link. We deploy our system at 86 vantage points worldwide and show that congestion inferred using our lightweight TSLP method correlates with other metrics of interconnection performance impairment. We use our method to study interdomain links of eight large U.S. broadband access providers from March 2016 to December 2017, and validate our inferences against ground-truth traffic statistics from two of the providers. For the period of time over which we gathered measurements, we did not find evidence of widespread endemic congestion on interdomain links between access ISPs and directly connected transit and content providers, although some such links exhibited recurring congestion patterns. We describe limitations, open challenges, and a path toward the use of this method for large-scale third-party monitoring of the Internet interconnection ecosystem.
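The core TSLP idea lends itself to a compact sketch. The following is a minimal, illustrative Python version, assuming hypothetical router addresses on each side of a single interdomain link (RFC 5737 documentation prefixes) and Linux `ping` flag semantics; the deployed system handles link discovery, probe scheduling, and inference far more carefully than this.

```python
import subprocess
import time

# Hypothetical addresses for the two ends of one interdomain link.
NEAR_IP = "198.51.100.1"  # router on the access ISP's side of the link
FAR_IP = "203.0.113.1"    # router on the neighbor network's side

def rtt_ms(ip):
    """One ICMP probe; returns RTT in milliseconds, or None on loss."""
    out = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                         capture_output=True, text=True).stdout
    if "time=" not in out:
        return None
    return float(out.split("time=")[1].split()[0])

def collect(samples=288, interval_s=300):
    """Collect a day of 5-minute RTT samples toward both link ends."""
    near, far = [], []
    for _ in range(samples):
        near.append(rtt_ms(NEAR_IP))
        far.append(rtt_ms(FAR_IP))
        time.sleep(interval_s)
    return near, far

def congested(near, far, rise_ms=10.0):
    """Flag the link when the far-side RTT series shows sustained
    elevation above its baseline (minimum) that the near-side series
    lacks, i.e. queueing delay appearing at the interdomain hop."""
    near = [r for r in near if r is not None]
    far = [r for r in far if r is not None]
    if not near or not far:
        return False
    near_elevated = sum(r - min(near) > rise_ms for r in near) / len(near)
    far_elevated = sum(r - min(far) > rise_ms for r in far) / len(far)
    return far_elevated > 0.25 and near_elevated < 0.05
```

The asymmetry between the two time series is the key signal: elevation that appears only past the near router implicates the interdomain link rather than the path leading up to it.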
- PAR ID: 10111923
- Journal Name: Proceedings of the 2018 Conference of the ACM Special Interest Group on Data Communication
- Page Range / eLocation ID: 1 to 15
- Sponsoring Org: National Science Foundation
More Like this
Non-recurring traffic congestion is caused by temporary disruptions, such as accidents, sports games, and adverse weather. We use data related to real-time traffic speed, jam factors (a traffic congestion indicator), and events collected over a year from Nashville, TN to train a multi-layered deep neural network. The traffic dataset contains over 900 million data records. The network is thereafter used to classify the real-time data and identify anomalous operations. Compared with traditional approaches using statistical or machine-learning techniques, our model reaches an accuracy of 98.73 percent when identifying traffic congestion caused by football games. Our approach first encodes the traffic across a region as a scaled image. After that, the image data from different timestamps is fused with event- and time-related data. Then a crossover operator is used as a data augmentation method to generate training datasets with more balanced classes. Finally, we use receiver operating characteristic (ROC) analysis to tune the sensitivity of the classifier. We present the analysis of the training time and the inference time separately.
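As an illustration of the final tuning step, here is a hedged sketch of ROC-based sensitivity tuning with scikit-learn, using placeholder labels and scores; the paper's deep network, image encoding, and crossover augmentation are not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Placeholder validation-set labels (1 = event-induced congestion)
# and the classifier's predicted probabilities for that class.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.30, 0.70])

fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Youden's J statistic picks the threshold maximizing TPR - FPR,
# one common way to tune the sensitivity of a binary classifier.
best = np.argmax(tpr - fpr)
print(f"operating threshold = {thresholds[best]:.2f} "
      f"(TPR={tpr[best]:.2f}, FPR={fpr[best]:.2f})")
```

Sweeping the decision threshold along the ROC curve trades false alarms against missed congestion events, so the operating point can be chosen to match how costly each error type is in practice.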
A key dimension of reproducibility in testbeds is stable performance that scales in regular and predictable ways in accordance with declarative specifications for virtual resources. We contend that reproducibility is crucial for elastic performance control in live experiments, in which testbed tenants (slices) provide services for real user traffic that varies over time. This paper gives an overview of ExoPlex, a framework for deploying network service providers (NSPs) as a basis for live inter-domain networking experiments on the ExoGENI testbed. As a motivating example, we show how to use ExoPlex to implement a virtual software-defined exchange (vSDX) as a tenant NSP. The vSDX implements security-managed interconnection of customer IP networks that peer with it via direct L2 links stitched dynamically into its slice. An elastic controller outside of the vSDX slice provisions network links and computing capacity for a scalable monitoring fabric within the tenant vSDX slice. The vSDX checks compliance of traffic flows with customer-specified interconnection policies, and blocks traffic from senders that trigger configured rules for intrusion detection in Bro security monitors. We present initial results showing the effect of resource provisioning on Bro performance within the vSDX.
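To make the compliance-checking role concrete, below is a minimal sketch with hypothetical customer policies and flow records; the actual ExoPlex controller, slice stitching, and Bro integration are substantially richer than this.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Flow:
    src: str
    dst: str

# Hypothetical customer interconnection policies: each customer
# declares which peer prefixes it agrees to exchange traffic with.
POLICIES = {
    "customerA": {"allowed_dst": [ip_network("10.2.0.0/16")]},
    "customerB": {"allowed_dst": [ip_network("10.1.0.0/16")]},
}
BLOCKED_SENDERS: set[str] = set()  # populated from IDS alerts

def compliant(customer: str, flow: Flow) -> bool:
    """Forward a flow only if the customer's policy admits the
    destination and the sender has not triggered an IDS rule."""
    if flow.src in BLOCKED_SENDERS:
        return False
    dst = ip_address(flow.dst)
    return any(dst in net for net in POLICIES[customer]["allowed_dst"])

# e.g. an intrusion alert from the monitoring fabric blocks a sender:
BLOCKED_SENDERS.add("10.1.5.9")
print(compliant("customerA", Flow(src="10.1.5.9", dst="10.2.0.7")))  # False
```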
Large Internet Service Providers (ISPs) often require that peers meet certain requirements to be eligible for settlement-free peering. The conventional wisdom is that these requirements are related to the perception of roughly equal value from the peering arrangement, but the academic literature has not yet established such a relationship. The focus of this paper is to relate the settlement-free peering requirements between two large ISPs and to understand the degree to which those requirements should apply to peering between large ISPs and content providers. We analyze settlement-free peering requirements concerning the number and location of interconnection points (IXPs). Large ISPs often require interconnection at a minimum of 6 to 8 interconnection points. We find that the ISP's traffic-sensitive cost is decreasing and convex with the number of interconnection points. We also observe that there may be little value in requiring interconnection at more than 8 IXPs. We then analyze the interconnection between a large content provider and an ISP. We show that it is rational for an ISP to agree to settlement-free peering if the content provider agrees to interconnect at a specified minimum number of interconnection points and to deliver a specified minimum proportion of traffic locally.
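A toy numerical illustration, not the paper's cost model: assume traffic-sensitive cost is proportional to the average backbone haul distance, and that n evenly spaced interconnection points cut that distance roughly in proportion to 1/n. The resulting cost is decreasing and convex in n, with visibly diminishing returns near n = 8.

```python
def traffic_sensitive_cost(n_ixps, footprint_km=4000.0, cost_per_km=1.0):
    """Toy model: with n evenly spaced interconnection points along
    the footprint, traffic enters at the nearest one and is hauled
    footprint/(4n) km on average, so cost falls convexly with n."""
    return cost_per_km * footprint_km / (4 * n_ixps)

costs = {n: traffic_sensitive_cost(n) for n in range(1, 13)}
for n, c in costs.items():
    saving = costs[n - 1] - c if n > 1 else float("nan")
    print(f"{n:2d} IXPs: cost {c:7.1f}, marginal saving {saving:6.1f}")
# The marginal saving shrinks quickly; beyond roughly 8 interconnection
# points it is small, consistent with the abstract's observation.
```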
We present our latest development and experimental validation of carrier cooperative recovery for enhancing the resilience of optical packet transport networks. Experimental results show that in the event of a resource crunch caused by, for example, traffic congestion, failures, or man-made/natural disasters, swift and low-cost recovery can be achieved by exploiting the interconnection capability among carriers, demonstrating a novel use case of multi-carrier interconnection technology.