This content will become publicly available on November 4, 2025

Title: Reverse-Engineering Congestion Control Algorithm Behavior
The rise of proprietary and novel congestion control algorithms (CCAs) opens questions about the future of Internet utilization, latency, and fairness. However, fully analyzing how novel CCAs impact these properties requires understanding the inner workings of these algorithms. We thus aim to reverse-engineer deployed CCAs' behavior from collected packet traces to facilitate analyzing them. We present Abagnale, a program synthesis pipeline that helps users automate the reverse-engineering task. Using Abagnale, we discover simple expressions capturing the behavior of 9 of the 16 CCAs distributed with the Linux kernel and analyze 7 CCAs from a graduate networking course.
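The core synthesis idea can be illustrated with a toy enumerative search: given a per-ACK cwnd trace, try candidate update rules until one reproduces the trace. This is a minimal sketch under our own assumptions, not Abagnale's actual expression grammar or search procedure; the candidate templates and function names are illustrative.

```python
# Candidate per-ACK cwnd update rules, expressed as (name, fn).
# These templates are illustrative, not Abagnale's actual grammar.
CANDIDATES = [
    ("cwnd + 1",      lambda w: w + 1.0),       # slow-start-like growth
    ("cwnd + 1/cwnd", lambda w: w + 1.0 / w),   # congestion-avoidance-like growth
    ("cwnd * 2",      lambda w: w * 2.0),
    ("cwnd",          lambda w: w),             # no change
]

def synthesize(trace):
    """Return the first candidate rule that reproduces every step of the trace."""
    for name, fn in CANDIDATES:
        if all(abs(fn(a) - b) < 1e-9 for a, b in zip(trace, trace[1:])):
            return name
    return None

# Build a trace from congestion-avoidance-style growth: w -> w + 1/w
trace = [4.0]
for _ in range(5):
    trace.append(trace[-1] + 1.0 / trace[-1])

print(synthesize(trace))  # -> cwnd + 1/cwnd
```

The search is brute force over a fixed template list; a real pipeline would search a much larger expression space and tolerate measurement noise in the trace.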
Award ID(s):
2212390
PAR ID:
10639917
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Page Range / eLocation ID:
401 to 414
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Congestion Control Algorithms (CCAs) impact numerous desirable Internet properties such as performance, stability, and fairness. Hence, the networking community invests substantial effort into studying whether new algorithms are safe for wide-scale deployment. However, operators today are continuously innovating and some deployed CCAs are unpublished - either because the CCA is in beta or because it is considered proprietary. How can the networking community evaluate these new CCAs when their inner workings are unknown? In this paper, we propose 'counterfeit congestion control algorithms' - reverse-engineered implementations derived using program synthesis based on observations of the original implementation. Using the counterfeit (synthesized) CCA implementation, researchers can then evaluate the CCA using controlled empirical testbeds or mathematical analysis, even without access to the original implementation. Our initial prototype, 'Mister 880,' can synthesize several basic CCAs including a simplified Reno using only a few traces. 
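For intuition, the kind of behavior a tool like 'Mister 880' reverse-engineers can be generated from a simplified Reno: slow start below a threshold, additive increase above it, multiplicative decrease on loss. This generator is our illustration, not the paper's implementation; all parameter values are assumptions.

```python
def reno_trace(losses, acks=20, init_cwnd=1.0, ssthresh=8.0):
    """Per-ACK cwnd trace for a simplified Reno (illustrative only):
    slow start below ssthresh, additive increase above,
    multiplicative decrease on loss events."""
    cwnd, trace = init_cwnd, [init_cwnd]
    for t in range(acks):
        if t in losses:               # loss: halve cwnd, reset ssthresh
            ssthresh = cwnd / 2
            cwnd = ssthresh
        elif cwnd < ssthresh:         # slow start: +1 per ACK
            cwnd += 1.0
        else:                         # congestion avoidance: +1/cwnd per ACK
            cwnd += 1.0 / cwnd
        trace.append(cwnd)
    return trace

# One loss at ACK 10; cwnd climbs to ssthresh, grows slowly, then halves.
tr = reno_trace(losses={10})
```

A trace like `tr` is exactly the observable signal a synthesis tool would consume when recovering the update rules above from packet captures.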
  2. The Internet has become the central source of information and communication in modern society. Congestion control algorithms (CCAs) are critical for the stability of the Internet: ensuring that users can fairly and efficiently share the network. Over the past 30 years, researchers and Internet content providers have proposed and deployed dozens of new CCAs designed to keep up with the growing demands of faster networks, diverse applications, and mobile users. Without tools to understand this growing heterogeneity in CCAs deployed on the Internet, the fairness of the Internet is at stake. Toward understanding this heterogeneity, we develop CCAnalyzer, a tool to determine what CCA a particular web service deploys, outperforming previous classifiers in accuracy and efficiency. With CCAnalyzer, we show that new CCAs, both known and unknown, have widespread deployment on the Internet today, including a recently proposed CCA by Google: BBRv1. Next, we develop the first model of BBRv1, and prove BBRv1 can be very unfair to legacy loss-based CCAs, an alarming finding given the prolific deployment of BBRv1. Consequently, we argue the need for a better methodology for determining if a new CCA is safe to deploy on the Internet today. We describe how the typical methodology testing for equal-rate fairness (every user gets the same bandwidth) is both an unachievable goal and ultimately, not the right threshold for determining if a new CCA is safe to deploy alongside others. Instead of equal-rate fairness, we propose a new metric we call harm, and argue for a harm-based threshold. Lastly, we present RayGen, a novel framework for evaluating interactions between heterogeneous CCAs. RayGen uses a genetic algorithm to efficiently explore the large state space of possible workloads and network settings when two CCAs compete. With a small budget of experiments, RayGen finds more harmful scenarios than a parameter sweep and random search.
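The harm idea can be made concrete: measure a flow's performance alone and under competition, and report the fraction of solo performance lost. The exact formulation below is our assumption for illustration, not necessarily the paper's formal definition.

```python
def harm(solo, contested):
    """Fraction of a flow's solo performance lost when competing.
    0.0 = no harm, 1.0 = completely starved. Applies to
    'more is better' metrics like throughput. Illustrative sketch."""
    if contested >= solo:
        return 0.0
    return (solo - contested) / solo

# A flow that gets 100 Mbps alone but only 40 Mbps against a competitor:
print(harm(100.0, 40.0))  # -> 0.6
```

Unlike equal-rate fairness, this framing compares a flow against its own solo baseline rather than against its competitor, which is why a harm-based threshold can tolerate unequal but benign sharing.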
  3. The performance of Internet services—be it file download completion times, video quality, or lag-free video conferencing—is heavily influenced by network parameters. These include the bottleneck bandwidth, network delays, and how fairly the bottleneck link is shared with other services. However, current techniques to evaluate service performance in emulated and simulated networks suffer from three major issues: (a) testing predominantly in settings representing the "edge" of the Internet, and not the core; (b) focus on evaluating Congestion Control Algorithms (CCAs), neglecting the impact of application-level controls like Adaptive-Bitrate (ABR) algorithms on network performance; (c) testing in settings that do not necessarily reflect the network conditions experienced by services with expansive CDNs. The goal of this thesis is to improve the state of the art in emulated testing for a more up-to-date evaluation of Internet service performance. To highlight the need to perform Internet evaluations in settings representing congestion at the core of the Internet, we test CCAs with core Internet speeds and flow counts. We find that this dramatically alters fairness outcomes, and challenges long-standing assumptions about CCA behavior that were built on measurements performed in settings representing the edge of the Internet, emphasizing the need to run Internet evaluations in more diverse settings. We then challenge the implicit assumption that CCA evaluations alone are sufficient to predict the network behavior of services that use them. We perform this analysis through the lens of fairness, and build Prudentia, an Internet fairness watchdog, which measures how fairly two Internet services can share a bottleneck link.
In addition to discovering extreme unfairness on the Internet today, we gain key insights into improving current testing methodology – (a) The most and least fair services both use variants of the same CCA, highlighting the need to test services in addition to CCAs; (b) network settings can drastically affect even service-level fairness outcomes, necessitating their careful selection. Lastly, we infer the network conditions experienced by users of Netflix, a global video streaming provider, and contrast them with those used in typical Internet evaluations. We find that Netflix users experience shorter RTTs, greater maximum observed queuing delay, and greater ACK aggregation, all parameters that play an important role in determining CCA behavior. This highlights the need for more service operators to run similar analyses and share their respective perspectives of prevalent network conditions, so that the networking community can include these settings in the design and evaluation of Internet services. 
  4. Recent congestion control research has focused on purpose-built algorithms designed for the special needs of specific applications. Often, limited testing before deploying a CCA results in unforeseen and hard-to-debug performance issues due to the complex ways a CCA interacts with other existing CCAs and diverse network environments. We present CC-Fuzz, an automated framework that uses genetic search algorithms to generate adversarial network traces and traffic patterns for stress-testing CCAs. Initial results include CC-Fuzz automatically finding a bug in BBR that causes it to stall permanently, and automatically discovering the well-known low-rate TCP attack, among other things. 
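The genetic-search idea behind a fuzzer like CC-Fuzz can be sketched against a toy AIMD simulator: evolve loss patterns that minimize the CCA's throughput, i.e. search for adversarial traces. Everything here (the stand-in simulator, the fitness function, the mutation-only evolution, the rates) is an illustrative assumption, not CC-Fuzz's actual design.

```python
import random

def throughput(loss_pattern, acks=50):
    """Toy AIMD simulator: total cwnd delivered over a run.
    A stand-in for the CCA under test, not CC-Fuzz's network model."""
    cwnd, total = 10.0, 0.0
    for t in range(acks):
        if loss_pattern[t]:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on loss
        else:
            cwnd += 1.0                # additive increase on ACK
        total += cwnd
    return total

def evolve(generations=30, pop_size=16, acks=50, seed=0):
    """Mutation-only genetic search for a loss pattern that minimizes
    throughput (lower fitness = more adversarial trace)."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.1 for _ in range(acks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=throughput)            # keep the most harmful half
        parents = pop[: pop_size // 2]
        children = [[(not b) if rng.random() < 0.05 else b for b in p]
                    for p in parents]       # flip each bit with prob. 0.05
        pop = parents + children
    return min(pop, key=throughput)

worst = evolve()
```

A real fuzzer would replace the toy simulator with an emulated network driving the actual CCA implementation, and use richer trace encodings than a boolean loss vector.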
  5. BBR is a new congestion control algorithm (CCA) deployed for Chromium QUIC and the Linux kernel. As the default CCA for YouTube (which commands 11+% of Internet traffic), BBR has rapidly become a major player in Internet congestion control. BBR’s fairness or friendliness to other connections has recently come under scrutiny as measurements from multiple research groups have shown undesirable outcomes when BBR competes with traditional CCAs. One such outcome is a fixed, 40% proportion of link capacity consumed by a single BBR flow when competing with as many as 16 loss-based algorithms like Cubic or Reno. In this short paper, we provide the first model capturing BBR’s behavior in competition with loss-based CCAs. Our model is coupled with practical experiments to validate its implications. The key lesson is this: under competition, BBR becomes window-limited by its ‘in-flight cap’ which then determines BBR’s bandwidth consumption. By modeling the value of BBR’s in-flight cap under varying network conditions, we can predict BBR’s throughput when competing against Cubic flows with a median error of 5%, and against Reno with a median error of 8%.
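The in-flight-cap argument can be sketched numerically: BBR bounds its in-flight data at roughly twice its estimated bandwidth-delay product, so once competition builds queues and inflates the actual RTT, throughput falls to cap / RTT. The function and the example parameter values below are our simplified illustration of the model, not the paper's full derivation.

```python
def bbr_throughput_under_competition(btlbw_est, rtprop_est, rtt_actual):
    """Predicted throughput of a window-limited BBR flow.
    BBR caps in-flight data at ~2 * estimated_bandwidth * estimated_min_RTT;
    when queueing under competition inflates the actual RTT,
    throughput ~= inflight_cap / rtt_actual. Simplified sketch."""
    inflight_cap = 2.0 * btlbw_est * rtprop_est  # in Mbit if btlbw in Mbps, rtt in s
    return inflight_cap / rtt_actual             # Mbps

# Assumed inputs: 100 Mbps estimated bandwidth, 40 ms estimated min RTT,
# queueing inflates the actual RTT to 200 ms:
# cap = 2 * 100 * 0.040 = 8 Mbit in flight -> 8 / 0.200 = 40 Mbps
print(bbr_throughput_under_competition(100.0, 0.040, 0.200))  # -> 40.0
```

The example deliberately lands on 40% of a 100 Mbps link, mirroring the fixed-share outcome the abstract describes; in practice BBR's bandwidth and RTT estimates under competition are themselves moving targets.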