Title: Optimizing Gradual SDN Upgrades in ISP Networks
Nowadays, there is a fast-paced shift from legacy telecommunication systems to novel software-defined network (SDN) architectures that support on-the-fly network reconfiguration, thereby empowering advanced traffic engineering mechanisms. Despite this momentum, migration to SDN cannot be realized at once, especially in the high-end networks of Internet service providers (ISPs). ISPs are instead expected to upgrade their networks to SDN gradually, over a period that spans several years. In this paper, we study the SDN upgrading problem in an ISP network: which nodes to upgrade and when. We consider a general model that captures different migration costs and network topologies, and two plausible ISP objectives: 1) maximizing the traffic that traverses at least one SDN node, and 2) maximizing the number of dynamically selectable routing paths enabled by SDN nodes. We leverage the theory of submodular and supermodular functions to devise algorithms with provable approximation ratios for each objective. Using real-world network topologies and traffic matrices, we evaluate the performance of our algorithms and show up to 54% gains over state-of-the-art methods. Moreover, we describe the interplay between the two objectives: maximizing one may cause a factor-of-2 loss to the other. We also study the dual upgrading problem, i.e., minimizing the ISP's upgrading cost while ensuring specific performance goals. Our analysis shows that our proposed algorithm achieves up to 2.5 times lower cost than state-of-the-art methods while meeting those goals.
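A coverage-style objective such as 1) is monotone submodular, so the classic greedy rule yields a (1 - 1/e)-approximation under a cardinality budget. The Python sketch below illustrates that general technique under assumptions: the `traffic_covered` objective and toy flow data are illustrative, not necessarily the paper's exact algorithm or cost model.

```python
# Greedy maximization of a monotone submodular objective under a budget
# on the number of upgraded nodes. For coverage-style objectives (e.g.,
# traffic that traverses at least one SDN node) this classic rule gives
# a (1 - 1/e) approximation guarantee.

def traffic_covered(upgraded, flows):
    """Total volume of flows whose path contains at least one upgraded node."""
    return sum(vol for path, vol in flows if upgraded & set(path))

def greedy_upgrade(nodes, flows, budget):
    upgraded = set()
    for _ in range(budget):
        base = traffic_covered(upgraded, flows)
        # Select the node with the largest marginal gain in covered traffic.
        best = max(
            (n for n in nodes if n not in upgraded),
            key=lambda n: traffic_covered(upgraded | {n}, flows) - base,
            default=None,
        )
        if best is None:
            break
        upgraded.add(best)
    return upgraded

# Toy input: flows as (path, volume) pairs over a five-node topology.
flows = [(("a", "b", "c"), 10.0), (("b", "d"), 4.0), (("e", "c"), 2.5)]
print(greedy_upgrade({"a", "b", "c", "d", "e"}, flows, budget=2))
```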
Award ID(s):
1815676
NSF-PAR ID:
10121139
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE/ACM Transactions on Networking
Volume:
27
Issue:
1
ISSN:
1558-2566
Page Range / eLocation ID:
288-301
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Peering is an interconnection arrangement between two networks for the purpose of exchanging traffic between these networks and their customers. Two networks will agree to settlement-free peering if this arrangement is superior for both parties compared to alternative arrangements, including paid peering or transit. The conventional wisdom is that two networks agree to settlement-free peering if they receive approximately equal value from the arrangement. Historically, settlement-free peering was common only amongst Tier-1 networks, and these networks commonly require peering at a minimum specified number of interconnection points and only when the traffic ratio is within specified bounds. However, the academic literature does not explain how these requirements relate to the value to each network. More recently, settlement-free peering and paid peering have become common between ISPs and CDNs. In this paper, we construct a network cost model to understand the rationality of common requirements on the number of interconnection points and the traffic ratio. We also wish to understand whether it is rational to apply these requirements to interconnection between an ISP and a CDN. We construct a model of ISP traffic-sensitive network costs for an ISP that offers service across the US, parameterize it using statistics about the population and locations of people in the contiguous US, and consider peering at the locations of the largest interconnection points in the US. We model traffic-sensitive network costs in the ISP's backbone network, middle-mile networks, and access networks. These costs are thus functions of routing policies, distances, and traffic volumes. To qualify for settlement-free peering, large ISPs commonly require peering at a minimum of 4 to 8 mutually agreeable interconnection points. The academic literature provides little insight into this requirement or how it is related to cost. We show that the traffic-sensitive network cost decreases as the number of interconnection points increases, but with decreasing returns. The requirement to peer at 4 to 8 interconnection points is thus rational, and requiring interconnection at more than 8 points is of little value. Another common requirement is that the ratio of downstream to upstream traffic not exceed 2:1. This is commonly understood to relate to approximately equal value, but the academic literature does not explain why. We show that when downstream traffic exceeds upstream traffic, an ISP gains less from settlement-free peering, and that when the traffic ratio exceeds 2:1, an ISP is likely to perceive insufficient value. Finally, we turn to interconnection between an ISP and a CDN. Large ISPs often assert that CDNs should meet the same requirements on the number of interconnection points and traffic ratio to qualify for settlement-free peering. We show that if the CDN delivers traffic to the ISP locally, then a requirement to interconnect at a minimum number of interconnection points is rational, but a limit on the traffic ratio is not rational. We also show that if the CDN delivers traffic using hot-potato routing, the ISP is unlikely to perceive sufficient value to offer settlement-free peering.
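The diminishing-returns finding above can be made concrete with a toy one-dimensional version of a traffic-sensitive cost model; the uniform user line and evenly spaced interconnection points are illustrative assumptions, not the paper's US-based parameterization.

```python
# Toy model: users are spread uniformly along a line of length 1, and
# traffic enters at k evenly spaced interconnection points, so the ISP
# hauls each user's traffic from the nearest point. With points placed
# at segment midpoints, the mean haul distance is 1 / (4k), so cost
# falls with k but with sharply decreasing returns.

def avg_haul_distance(k):
    return 1.0 / (4 * k)

for k in (1, 2, 4, 8, 16):
    print(f"{k:2d} interconnection points -> mean haul {avg_haul_distance(k):.4f}")
# Moving from 4 to 8 points halves the haul distance; moving from 8 to
# 16 saves only 1/64 in absolute terms -- little value beyond ~8 points.
```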
  2. Disagreements over peering fees have risen to the level of potential government regulation. ISPs assert that content providers should pay them based on the volume of downstream traffic. Transit providers and content providers assert that consumers have already paid ISPs to transmit the content they request and that peering agreements should be settlement-free. Our goal is to determine the fair payment between an ISP and an interconnecting network. We consider fair cost sharing between two Tier-1 ISPs, and derive the peering fee that equalizes their net backbone transportation costs. We then consider fair cost sharing between an ISP and a transit provider. We derive the peering fee that equalizes their net backbone transportation costs, and illustrate how it depends on the traffic ratio and the amount of localization of that content. Finally, we consider the fair peering fee between an ISP and a content provider. We derive the peering fee that results in the same net cost to the ISP, and illustrate how the peering fee depends on the number of interconnection points and the amount of localization of that content. We dispense with the ISP argument that it should be paid regardless of the amount of localization of content. 
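The "equalize net costs" step in the Tier-1 case is a simple balancing calculation; here is a minimal sketch with assumed symbolic costs (the paper's model of what enters c_a and c_b, e.g., distances and traffic localization, is richer).

```python
# If network A bears backbone transport cost c_a for the exchanged
# traffic and network B bears c_b, a peering fee p paid by B to A
# equalizes net costs when c_a - p = c_b + p, i.e., p = (c_a - c_b) / 2.
# A negative p means A should pay B instead. Values are illustrative.

def equalizing_fee(c_a: float, c_b: float) -> float:
    return (c_a - c_b) / 2.0

print(equalizing_fee(c_a=10.0, c_b=6.0))  # 2.0 -> B pays A
```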
  3. We investigate cost-efficient upgrade strategies for capacity enhancement in optical backbone networks enabled by C+L-band optical line systems. A multi-period strategy for upgrading network links from the C band to the C+L band is proposed, ensuring physical-layer awareness, cost effectiveness, and less than 0.1% blocking. Results indicate that the performance of an upgrade strategy depends on the efficient selection of the sequence of links to be upgraded and of the time at which to upgrade each one, choices that are topology or traffic dependent. Given a network topology, a set of traffic demands, and growth projections, our illustrative numerical results show that a well-devised upgrade strategy achieves superior cost efficiency during the capacity upgrade to C+L-band systems.
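One way to picture such a strategy is a greedy, per-period link selection; in the sketch below, `blocking_after` is a hypothetical stand-in for the paper's physical-layer-aware blocking evaluation, and ranking links by blocking reduction per unit cost is an assumed heuristic, not necessarily the authors' exact method.

```python
# Greedy multi-period C -> C+L upgrade schedule: in each period, upgrade
# the link whose upgrade yields the largest blocking reduction per unit
# cost, until estimated blocking drops below the 0.1% target.
# blocking_after(upgraded) is a hypothetical callback that simulates the
# network (physical-layer aware) with the given set of links upgraded.

def plan_upgrades(links, cost, blocking_after, target=0.001):
    upgraded, schedule = set(), []
    while blocking_after(upgraded) > target and len(upgraded) < len(links):
        base = blocking_after(upgraded)
        best = max(
            (l for l in links if l not in upgraded),
            key=lambda l: (base - blocking_after(upgraded | {l})) / cost[l],
        )
        upgraded.add(best)
        schedule.append(best)  # one link upgraded per planning period
    return schedule
```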

  4. Influence maximization aims to select the k most influential vertices, or seeds, in a network, where influence is defined by a given diffusion process. Although computing an optimal seed set is NP-hard, efficient approximation algorithms exist. However, even state-of-the-art parallel implementations are limited by a sampling step that incurs large memory footprints, which in turn limits the reachable problem size and the approximation quality. In this work, we study the memory footprint of the sampling process that collects reverse-reachability information in the IMM (Influence Maximization via Martingales) algorithm over large real-world social networks. We present a memory-efficient optimization approach, called HBMax, based on Ripples, a state-of-the-art multi-threaded parallel influence maximization solution. HBMax uses a portion of the reverse-reachable (RR) sets collected by the algorithm to learn the characteristics of the graph. It then compresses the intermediate reverse-reachability information with Huffman coding or bitmap coding, and queries the partially decoded data, or the compressed data directly, to preserve the memory savings obtained through compression. On a NUMA architecture, we scale our solution to 64 CPU cores and reduce the memory footprint by up to 82.1% with an average 6.3% speedup (the encoding overhead is offset by the performance gain from memory reduction) without loss of accuracy. For the largest tested graph, Twitter7 (with 1.4 billion edges), HBMax achieves a 5.9× compression ratio and a 2.2× speedup.
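The Huffman branch of that compression step can be pictured with a generic coder over vertex IDs: vertices that appear in many RR sets receive the shortest codes. This is a textbook sketch, not HBMax's implementation.

```python
import heapq
from collections import Counter

# Build a Huffman code over vertex IDs weighted by how often each vertex
# appears across the sampled reverse-reachable (RR) sets: hub vertices
# appear in many RR sets and therefore get the shortest codes.

def huffman_code(rr_sets):
    freq = Counter(v for rr in rr_sets for v in rr)
    # Heap entries: (frequency, tiebreak, {vertex: code-so-far}).
    heap = [(f, i, {v: ""}) for i, (v, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two least frequent subtrees, prefixing their codes.
        merged = {v: "0" + code for v, code in c1.items()}
        merged.update({v: "1" + code for v, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

rr_sets = [{1, 2, 3}, {2, 3}, {2, 4}, {2}]
print(huffman_code(rr_sets))  # vertex 2 (most frequent) gets the shortest code
```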
  5. In-network caching constitutes a promising approach to reduce traffic loads and alleviate congestion in both wired and wireless networks. In this paper, we study the joint caching and routing problem in congestible networks of arbitrary topology (JoCRAT) as a generalization of previous efforts in this particular field. We show that JoCRAT extends many previous problems in the caching literature that are intractable even with specific topologies and/or assumed unlimited bandwidth of communications. To handle this significant but challenging problem, we develop a novel approximation algorithm with a guaranteed performance bound based on a randomized rounding technique. Evaluation results demonstrate that our proposed algorithm achieves near-optimal performance over a broad array of synthetic and real networks, while significantly outperforming state-of-the-art methods.
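The core rounding technique can be sketched in a few lines; the fractional solution `x` below is assumed to come from an LP relaxation solved elsewhere, and the actual JoCRAT algorithm adds routing decisions and a careful analysis of the performance bound.

```python
import random

# Randomized rounding for caching: given a fractional LP solution
# x[v] in [0, 1] (how "much" to cache at node v), cache at v with
# probability x[v]. In expectation the rounded solution preserves the
# LP objective; concentration bounds limit constraint violations.

def round_placement(x, rng=random.random):
    return {v for v, frac in x.items() if rng() < frac}

x = {"v1": 0.9, "v2": 0.35, "v3": 0.05}  # illustrative LP output
print(round_placement(x))
```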