Cloud services are deployed in datacenters connected through high-bandwidth Wide Area Networks (WANs). We find that WAN traffic negatively impacts the performance of datacenter traffic, increasing tail latency by 2.5x, despite its small bandwidth demand. This behavior is caused by the long round-trip time (RTT) for WAN traffic, combined with limited buffering in datacenter switches. The long WAN RTT forces datacenter traffic to take the full burden of reacting to congestion. Furthermore, datacenter traffic changes on a faster time-scale than the WAN RTT, making it difficult for WAN congestion control to estimate available bandwidth accurately. We present Annulus, a congestion control scheme that relies on two control loops to address these challenges. One control loop leverages existing congestion control algorithms for bottlenecks where there is only one type of traffic (i.e., WAN or datacenter). The other loop handles bottlenecks shared between WAN and datacenter traffic near the traffic source, using direct feedback from the bottleneck. We implement Annulus on a testbed and in simulation. Compared to baselines using BBR for WAN congestion control and DCTCP or DCQCN for datacenter congestion control, Annulus increases bottleneck utilization by 10% and lowers datacenter flow completion time by 1.3-3.5x.
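The abstract's central mechanism is that the kind of feedback a sender receives determines which control loop reacts. A minimal sketch of that dispatch idea follows; it is not the authors' implementation, and all function names, constants, and the feedback format are hypothetical.

```python
def annulus_rate_update(rate, feedback, min_rate=1.0, max_rate=100.0):
    """Return a new sending rate (Gbps) given one feedback event.

    feedback is a dict (hypothetical format) with:
      'kind': 'near_source' for direct feedback from a shared
              WAN/datacenter bottleneck near the sender, or
              'end_to_end' for the conventional long-RTT signal.
      'congested': whether the bottleneck reported congestion.
    """
    if feedback["kind"] == "near_source":
        # Fast loop: direct bottleneck feedback lets WAN traffic react
        # at datacenter timescales instead of one WAN RTT later, so
        # datacenter flows no longer bear the full burden of backing off.
        if feedback["congested"]:
            rate = max(min_rate, rate * 0.5)   # multiplicative decrease
        else:
            rate = min(max_rate, rate + 1.0)   # additive increase
    else:
        # Slow loop: defer to an existing end-to-end algorithm
        # (e.g., BBR on WAN-only bottlenecks, DCTCP on datacenter-only
        # ones); gentler illustrative constants stand in for those here.
        if feedback["congested"]:
            rate = max(min_rate, rate * 0.8)
        else:
            rate = min(max_rate, rate + 0.1)
    return rate
```

The point of the sketch is only the split: congestion discovered near the source is handled immediately, while single-traffic-type bottlenecks are left to the existing algorithms.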
Principles for Internet Congestion Management
Given the technical flaws with—and the increasing non-observance of—the TCP-friendliness paradigm, we must rethink how the Internet should manage bandwidth allocation. We explore this question from first principles, but remain within the constraints of the Internet's current architecture and commercial arrangements. We propose a new framework, Recursive Congestion Shares (RCS), that provides bandwidth allocations independent of which congestion control algorithms flows use but consistent with the Internet's economics. We show that RCS achieves this goal using game-theoretic calculations and simulations as well as network emulation.
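The "recursive" part of the framework's name suggests shares divided down a hierarchy of commercial relationships rather than per-flow contention. The sketch below illustrates that general idea only; the tree-of-weights structure is a simplification for illustration, not the RCS definition from the paper.

```python
def recursive_shares(capacity, node):
    """Divide capacity among a node's children in proportion to their
    weights, then recurse into each child.

    node is either a leaf {"name": str} or {"children": [(weight, node),
    ...]} (a hypothetical encoding). Because shares come from the weight
    tree, the allocation is independent of which congestion control
    algorithm each flow runs -- the property RCS targets.
    """
    if "children" not in node:
        return {node["name"]: capacity}
    total = sum(w for w, _ in node["children"])
    shares = {}
    for w, child in node["children"]:
        shares.update(recursive_shares(capacity * w / total, child))
    return shares
```

For example, a 90 Gbps link split 2:1 between two customers, with the second customer splitting its share evenly between two flows, yields 60 / 15 / 15.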
- PAR ID: 10568565
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400706141
- Page Range / eLocation ID: 166 to 180
- Format(s): Medium: X
- Location: Sydney NSW Australia
- Sponsoring Org: National Science Foundation
More Like this
-
Motivated by the use of unmanned aerial vehicles (UAVs) for buried landmine detection, we consider the spectral classification of dispersive point targets below a rough air-soil interface. The target location can be estimated using a previously developed method for ground-penetrating synthetic aperture radar involving principal component analysis for ground bounce removal and Kirchhoff migration. For the classification problem, we use the approximate location determined from this imaging method to recover the spectral characteristics of the target over the system bandwidth. For the dispersive point target we use here, this spectrum corresponds to its radar cross section (RCS). For a more general target, this recovered spectrum is a proxy for the frequency dependence of the RCS averaged over angles spanning the synthetic aperture. The recovered spectrum is noisy and exhibits an overall scaling error due to modeling errors. Nonetheless, by smoothing and normalizing this recovered spectrum, we compare it with a library of precomputed normalized spectra in a simple multiclass classification scheme. Numerical simulations in two dimensions validate this method and show that this spectral estimation method is effective for target classification.
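The smooth-normalize-compare pipeline in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the smoothing window, the unit-energy normalization, and the Euclidean nearest-neighbor rule are assumptions standing in for whatever the paper actually uses.

```python
def classify_spectrum(measured, library, window=3):
    """Classify a noisy recovered spectrum against a library of
    precomputed normalized spectra (a dict name -> spectrum).

    Smoothing suppresses noise; normalizing to unit energy removes the
    unknown overall scaling introduced by modeling error, so only the
    spectral *shape* is compared.
    """
    n = len(measured)
    # Moving-average smoothing with edge-shortened windows.
    smoothed = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        smoothed.append(sum(measured[lo:hi]) / (hi - lo))
    # Normalize to unit energy.
    norm = sum(x * x for x in smoothed) ** 0.5
    probe = [x / norm for x in smoothed]
    # Nearest library spectrum (Euclidean distance) wins.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(library, key=lambda name: dist(probe, library[name]))
```

Because both the probe and every library entry are normalized, a measured spectrum that is a scaled copy of a library shape still matches it.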
-
Much of our understanding of congestion control algorithm (CCA) throughput and fairness is derived from models and measurements that (implicitly) assume congestion occurs in the last mile. That is, these studies evaluated CCAs in “small scale” edge settings at the scale of tens of flows and up to a few hundred Mbps bandwidths. However, recent measurements show that congestion can also occur at the core of the Internet on inter-provider links, where thousands of flows share high bandwidth links. Hence, a natural question is: Does our understanding of CCA throughput and fairness continue to hold at the scale found in the core of the Internet, with 1000s of flows and Gbps bandwidths? Our preliminary experimental study finds that some expectations derived in the edge setting do not hold at scale. For example, using loss rate as a parameter to the Mathis model to estimate TCP NewReno throughput works well in edge settings, but does not provide accurate throughput estimates when thousands of flows compete at high bandwidths. In addition, BBR – which achieves good fairness at the edge when competing solely with other BBR flows – can become very unfair to other BBR flows at the scale of the core of the Internet. In this paper, we discuss these results and others, as well as key implications for future CCA analysis and evaluation.more » « less
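The Mathis model mentioned here estimates steady-state TCP throughput from the loss rate as MSS/RTT · C/√p, with C ≈ √(3/2). A direct transcription of that formula (parameter names are mine):

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=sqrt(1.5)):
    """Mathis et al. model: throughput ≈ (MSS / RTT) * (C / sqrt(p)),
    in bytes/second.

    The abstract's finding is that this estimate tracks NewReno well in
    edge settings but breaks down when thousands of flows compete on
    Gbps links.
    """
    return (mss_bytes / rtt_s) * (c / sqrt(loss_rate))
```

For example, with a 1460-byte MSS, 50 ms RTT, and a 10^-4 loss rate, the model predicts roughly 3.6 MB/s (about 29 Mbps) per flow.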
-
Efficiently transferring data over long-distance, high-speed networks requires optimal utilization of available network bandwidth. One effective method to achieve this is through the use of parallel TCP streams. This approach allows applications to leverage network parallelism, thereby enhancing transfer throughput. However, determining the ideal number of parallel TCP streams can be challenging due to non-deterministic background traffic sharing the network, as well as non-stationary and partially observable network signals. We present a novel learning-based approach that utilizes deep reinforcement learning (DRL) to determine the optimal number of parallel TCP streams. Our DRL-based algorithm is designed to intelligently utilize available network bandwidth while adapting to different network conditions. Unlike rule-based heuristics, which lack generalization in unknown network scenarios, our DRL-based solution can dynamically adjust the parallel TCP stream numbers to optimize network bandwidth utilization without causing network congestion and ensuring fairness among competing transfers. We conducted extensive experiments to evaluate our DRL-based algorithm’s performance and compared it with several state-of-the-art online optimization algorithms. The results demonstrate that our algorithm can identify nearly optimal solutions 40% faster while achieving up to 15% higher throughput. Furthermore, we show that our solution can prevent network congestion and distribute the available network resources fairly among competing transfers, unlike a discriminatory algorithm.
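A full DRL agent is beyond a short sketch, but the decision structure the abstract describes (pick a stream count, score the outcome, penalize congestion and unfairness) can be illustrated with a simple epsilon-greedy bandit standing in for the learned policy. Everything here is hypothetical: the reward weights, the action set, and the bandit itself are not from the paper.

```python
import random

def choose_streams(q_values, epsilon=0.1, rng=random):
    """Epsilon-greedy choice of a parallel-stream count from q_values,
    a dict mapping stream count -> estimated reward. With probability
    epsilon explore a random count; otherwise exploit the best so far.
    """
    if rng.random() < epsilon:
        return rng.choice(list(q_values))
    return max(q_values, key=q_values.get)

def reward(throughput_gbps, loss_rate, share, fair_share,
           congestion_weight=50.0, fairness_weight=5.0):
    """Score one transfer interval: value throughput, but penalize
    signs of congestion (loss) and taking more than a fair share of
    the link. The weights are illustrative, not from the paper."""
    return (throughput_gbps
            - congestion_weight * loss_rate
            - fairness_weight * max(0.0, share - fair_share))
```

The design point the abstract emphasizes is the penalty terms: a policy trained only on throughput would simply open more streams, so congestion and fairness must appear in the objective.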
-
H. J. M. Hou and S. I. Allakhverdiev (Ed.) Photosynthetic Reaction Centers (RCs) can be considered blueprints for highly efficient energy transfer. Embedded with an array of cofactors, including (bacterio)chlorophyll ((B)Chl) and (B)pheophytin ((B)Pheo) molecules, RCs function with a high quantum yield that spans a wide spectral range. Understanding the principles that underlie their function can influence the design of the next generation of artificial photosynthetic devices. We are particularly interested in the factors that influence the early stages of light-driven charge separation in RCs. With the recent publication of several highly anticipated RC structures and advanced computational methods available, it is possible to probe both the geometric and electronic structures of an array of RCs. In this chapter, we review the electronic and geometric structures of the (B)Chl and (B)Pheo primary electron acceptors from five RCs, comprising both Type I and Type II RCs and representing both heterodimeric and homodimeric systems. We showcase the dimeric A0●– state of Type I RCs, whereby the unpaired electron is delocalized, to various extents, over two (B)Chl molecules, (B)Chl2 and (B)Chl3. This delocalization is controlled by several factors, including the structure of the (B)Chls, interactions with the surrounding protein matrix, and the orientation and distances of the cofactors themselves. In contrast, the primary acceptors of Type II RCs are entirely monomeric, with electron density residing solely on the (B)Pheo. We compare the natural design of the primary acceptors of the Type I and Type II RCs from both an evolutionary and an application-based perspective.