Much of our understanding of congestion control algorithm (CCA) throughput and fairness is derived from models and measurements that implicitly assume congestion occurs in the last mile. That is, these studies evaluated CCAs in "small-scale" edge settings, with tens of flows and up to a few hundred Mbps of bandwidth. However, recent measurements show that congestion can also occur in the core of the Internet on inter-provider links, where thousands of flows share high-bandwidth links. Hence, a natural question is: does our understanding of CCA throughput and fairness continue to hold at the scale found in the core of the Internet, with thousands of flows and Gbps bandwidths? Our preliminary experimental study finds that some expectations derived in the edge setting do not hold at scale. For example, using loss rate as a parameter to the Mathis model to estimate TCP NewReno throughput works well in edge settings, but does not provide accurate throughput estimates when thousands of flows compete at high bandwidths. In addition, BBR, which achieves good fairness at the edge when competing solely with other BBR flows, can become very unfair to other BBR flows at the scale of the core of the Internet. In this paper, we discuss these results and others, as well as key implications for future CCA analysis and evaluation.
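For reference, the Mathis model mentioned above estimates steady-state TCP throughput from segment size, RTT, and loss rate. A minimal sketch of the estimate (the constant C ≈ √(3/2) assumes Reno-style AIMD with one ACK per segment and periodic loss; names are ours):

```python
import math

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. steady-state throughput estimate for Reno-style TCP.
    Returns bytes/second. C ~= sqrt(3/2) assumes one ACK per segment and
    a periodic loss pattern; real-world constants vary."""
    C = math.sqrt(3.0 / 2.0)
    return (mss_bytes / rtt_s) * (C / math.sqrt(loss_rate))

# Example: 1460-byte MSS, 50 ms RTT, 0.1% loss -> roughly 1.1 MB/s (~9 Mbps).
print(mathis_throughput(1460, 0.050, 0.001))
```

This kind of back-of-the-envelope estimate is what the study finds reliable at edge scale but inaccurate when thousands of flows compete at Gbps bandwidths.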
Modeling BBR’s Interactions with Loss-Based Congestion Control
BBR is a new congestion control algorithm (CCA) deployed for Chromium QUIC and the Linux kernel. As the default CCA for YouTube (which commands more than 11% of Internet traffic), BBR has rapidly become a major player in Internet congestion control. BBR's fairness or friendliness to other connections has recently come under scrutiny as measurements from multiple research groups have shown undesirable outcomes when BBR competes with traditional CCAs. One such outcome is a fixed 40% share of link capacity consumed by a single BBR flow when competing with as many as 16 flows using loss-based algorithms like Cubic or Reno. In this short paper, we provide the first model capturing BBR's behavior in competition with loss-based CCAs. Our model is coupled with practical experiments to validate its implications. The key lesson is this: under competition, BBR becomes window-limited by its 'in-flight cap,' which then determines BBR's bandwidth consumption. By modeling the value of BBR's in-flight cap under varying network conditions, we can predict BBR's throughput when competing against Cubic flows with a median error of 5%, and against Reno flows with a median error of 8%.
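A minimal sketch of the window-limited regime the model captures, assuming BBRv1's published cwnd gain of 2 (so the in-flight cap is twice the estimated bandwidth-delay product); function and variable names are ours, not the paper's:

```python
def bbr_inflight_cap_bits(btlbw_est_bps: float, rtprop_est_s: float) -> float:
    """BBRv1 caps data in flight at cwnd_gain (= 2 in BBRv1) times its
    estimated bandwidth-delay product (BtlBw x RTprop)."""
    CWND_GAIN = 2.0
    return CWND_GAIN * btlbw_est_bps * rtprop_est_s

def window_limited_throughput_bps(inflight_cap_bits: float, actual_rtt_s: float) -> float:
    """Once competing loss-based flows keep the bottleneck queue full,
    BBR becomes window-limited: it sends one cap's worth of data per
    actual RTT (base RTT plus queueing delay), not per probed RTprop."""
    return inflight_cap_bits / actual_rtt_s

# Illustrative numbers (ours): BBR estimates 100 Mbps and 20 ms RTprop,
# but the standing queue inflates the actual RTT to 80 ms.
cap = bbr_inflight_cap_bits(100e6, 0.020)          # 4e6 bits in flight
print(window_limited_throughput_bps(cap, 0.080))   # -> 50 Mbps
```

The sketch shows why the cap, not BBR's rate estimate, ends up governing throughput: the deeper the queue the competitors build, the larger the actual RTT and the smaller BBR's share.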
- Award ID(s): 1850384
- PAR ID: 10121285
- Date Published:
- Journal Name: Proceedings of the ACM SIGCOMM Internet Measurement Conference
- ISSN: 2150-3761
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Congestion control algorithms (CCAs) impact numerous desirable Internet properties such as performance, stability, and fairness. Hence, the networking community invests substantial effort into studying whether new algorithms are safe for wide-scale deployment. However, operators today are continuously innovating, and some deployed CCAs are unpublished, either because the CCA is in beta or because it is considered proprietary. How can the networking community evaluate these new CCAs when their inner workings are unknown? In this paper, we propose 'counterfeit congestion control algorithms': reverse-engineered implementations derived using program synthesis from observations of the original implementation. Using the counterfeit (synthesized) CCA implementation, researchers can then evaluate the CCA using controlled empirical testbeds or mathematical analysis, even without access to the original implementation. Our initial prototype, 'Mister 880,' can synthesize several basic CCAs, including a simplified Reno, using only a few traces.
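To make the synthesis idea concrete, a toy version of trace-guided search might enumerate candidate window-update rules and keep those consistent with observed (cwnd, event, next-cwnd) triples. This illustration is ours and is far simpler than Mister 880's actual program synthesis:

```python
# Toy illustration of trace-guided synthesis of a cwnd update rule.
# Each candidate maps (cwnd, event) -> new cwnd; we keep candidates
# that reproduce every observed transition in the trace.

CANDIDATES = {
    "reno-like": lambda cwnd, ev: cwnd + 1 if ev == "ack" else cwnd // 2,
    "aiad":      lambda cwnd, ev: cwnd + 1 if ev == "ack" else cwnd - 1,
    "mimd":      lambda cwnd, ev: cwnd * 2 if ev == "ack" else cwnd // 2,
}

def consistent(rule, trace):
    return all(rule(cwnd, ev) == nxt for cwnd, ev, nxt in trace)

trace = [(10, "ack", 11), (11, "loss", 5), (5, "ack", 6)]
print([name for name, rule in CANDIDATES.items() if consistent(rule, trace)])
# -> ['reno-like']
```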
BBR is a new congestion control algorithm proposed by Google that builds a model of the network path, consisting of its bottleneck bandwidth and RTT, to govern its sending rate, rather than reacting to packet loss (as CUBIC and many other popular congestion control algorithms do). Loss-based congestion control has been shown to be vulnerable to acknowledgment manipulation attacks. However, no prior work has investigated how to design such attacks for BBR, nor how effective they are in practice. In this paper we systematically analyze the vulnerability of BBR to acknowledgment manipulation attacks. We create the first detailed BBR finite state machine and a novel algorithm for inferring its current BBR state at runtime by passively observing network traffic. We then adapt and apply a TCP fuzzer to the Linux TCP BBR v1.0 implementation. Our approach generated 30,297 attack strategies, of which 8,859 misled BBR about actual network conditions. From these, we identify five classes of attacks causing BBR to send faster, slower, or to stall. We also found that BBR is immune to the acknowledgment burst, division, and duplication attacks that were previously shown to be effective against loss-based congestion control such as TCP New Reno.
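As a rough illustration of passive state inference: BBRv1's documented states are STARTUP, DRAIN, PROBE_BW, and PROBE_RTT, and each leaves a distinct signature in observable sending behavior. The heuristics below are our simplification, not the paper's inference algorithm:

```python
from enum import Enum

class BBRState(Enum):
    STARTUP = 1
    DRAIN = 2
    PROBE_BW = 3
    PROBE_RTT = 4

def infer_state(rate_samples, inflight_pkts, est_bdp_pkts):
    """Crude heuristic over per-RTT sending-rate samples observed on
    the wire. Thresholds are illustrative, not the paper's."""
    if inflight_pkts <= 4:
        # BBRv1's PROBE_RTT drains in-flight data to ~4 packets.
        return BBRState.PROBE_RTT
    if len(rate_samples) >= 2:
        if all(b >= 1.8 * a for a, b in zip(rate_samples, rate_samples[1:])):
            # STARTUP roughly doubles the delivery rate each RTT.
            return BBRState.STARTUP
        if rate_samples[-1] < rate_samples[-2] and inflight_pkts > est_bdp_pkts:
            # DRAIN decelerates to empty the queue built during STARTUP.
            return BBRState.DRAIN
    # PROBE_BW cycles its pacing gain around the bandwidth estimate.
    return BBRState.PROBE_BW
```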
BBR is a newer TCP congestion control algorithm with promising features, but it can often be unfair to existing loss-based congestion control algorithms. This is because BBR's sending rate is dictated by static parameters that do not adapt well to dynamic and diverse network conditions. In this work, we introduce BBR-ML, an in-kernel ML-based tuning system for BBR designed to improve fairness when competing with loss-based congestion control. To build BBR-ML, we discretized the network-condition search space and trained a model on 2,500 different network conditions. We then modified BBR to run an in-kernel model that predicts network buffer sizes and uses this prediction to select parameter settings. Our preliminary evaluation results show that BBR-ML can improve fairness when competing with Cubic by up to 30% in some cases.
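One way to picture the pipeline, purely as our sketch (the abstract does not specify the feature set, model class, or parameter mapping, so all names and values below are illustrative assumptions):

```python
# Hypothetical shape of the BBR-ML decision path (names are ours).

def predict_buffer_bdp(features, model):
    """model: any regressor trained offline on emulated network
    conditions, returning estimated bottleneck buffer depth in BDPs."""
    return model.predict([features])[0]

def tune_bbr(buffer_bdp: float) -> dict:
    if buffer_bdp < 1.0:
        # Shallow buffer: probe more gently so loss-based flows,
        # which back off on every overflow, are not starved.
        return {"pacing_gain": 1.1, "cwnd_gain": 1.5}  # illustrative values
    # Deep buffer: BBRv1's default gains already compete adequately.
    return {"pacing_gain": 1.25, "cwnd_gain": 2.0}     # BBRv1 defaults
```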
The Internet has become the central source of information and communication in modern society. Congestion control algorithms (CCAs) are critical for the stability of the Internet, ensuring that users can fairly and efficiently share the network. Over the past 30 years, researchers and Internet content providers have proposed and deployed dozens of new CCAs designed to keep up with the growing demands of faster networks, diverse applications, and mobile users. Without tools to understand this growing heterogeneity in CCAs deployed on the Internet, the fairness of the Internet is at stake.

Towards understanding this growing heterogeneity, we develop CCAnalyzer, a tool to determine which CCA a particular web service deploys, outperforming previous classifiers in accuracy and efficiency. With CCAnalyzer, we show that new CCAs, both known and unknown, have widespread deployment on the Internet today, including a recently proposed CCA by Google: BBRv1. Next, we develop the first model of BBRv1 and prove that BBRv1 can be very unfair to legacy loss-based CCAs, an alarming finding given the prolific deployment of BBRv1.

Consequently, we argue the need for a better methodology for determining whether a new CCA is safe to deploy on the Internet today. We describe how the typical methodology of testing for equal-rate fairness (every user gets the same bandwidth) is both an unachievable goal and, ultimately, not the right threshold for determining whether a new CCA is safe to deploy alongside others. Instead of equal-rate fairness, we propose a new metric we call harm, and argue for a harm-based threshold.

Lastly, we present RayGen, a novel framework for evaluating interactions between heterogeneous CCAs. RayGen uses a genetic algorithm to efficiently explore the large state space of possible workloads and network settings when two CCAs compete. With a small budget of experiments, RayGen finds more harmful scenarios than a parameter sweep or random search.
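The harm metric admits a compact statement. For a more-is-better measure such as throughput, harm compares a flow's performance alone to its performance under competition; a minimal sketch of that definition (our phrasing):

```python
def harm(solo: float, competing: float) -> float:
    """Fractional reduction in a flow's performance when a new CCA
    competes, relative to running without it. 0 = harmless, 1 = starved.
    (For more-is-better metrics like throughput; delay-like metrics
    would flip the comparison.)"""
    if solo <= 0:
        raise ValueError("solo performance must be positive")
    return max(0.0, (solo - competing) / solo)

# Example: a Cubic flow gets 80 Mbps alone but 20 Mbps against BBRv1.
print(harm(80.0, 20.0))  # -> 0.75
```

Unlike equal-rate fairness, a harm-based threshold asks only how much a new CCA degrades existing flows relative to their status quo, which is why the dissertation argues it is the more deployable bar.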