The Internet has become the central source of information and communication in modern society. Congestion control algorithms (CCAs) are critical for the stability of the Internet, ensuring that users can fairly and efficiently share the network. Over the past 30 years, researchers and Internet content providers have proposed and deployed dozens of new CCAs designed to keep up with the growing demands of faster networks, diverse applications, and mobile users. Without tools to understand this growing heterogeneity in the CCAs deployed on the Internet, the fairness of the Internet is at stake. To understand this heterogeneity, we develop CCAnalyzer, a tool that determines which CCA a particular web service deploys, outperforming previous classifiers in accuracy and efficiency. With CCAnalyzer, we show that new CCAs, both known and unknown, are widely deployed on the Internet today, including a recently proposed CCA by Google: BBRv1. Next, we develop the first model of BBRv1 and prove that BBRv1 can be very unfair to legacy loss-based CCAs, an alarming finding given the prolific deployment of BBRv1. Consequently, we argue the need for a better methodology for determining whether a new CCA is safe to deploy on the Internet today. We describe how the typical methodology of testing for equal-rate fairness (every user gets the same bandwidth) is both an unachievable goal and, ultimately, not the right threshold for determining whether a new CCA is safe to deploy alongside others. Instead of equal-rate fairness, we propose a new metric, which we call harm, and argue for a harm-based threshold. Lastly, we present RayGen, a novel framework for evaluating interactions between heterogeneous CCAs. RayGen uses a genetic algorithm to efficiently explore the large space of possible workloads and network settings when two CCAs compete. With a small budget of experiments, RayGen finds more harmful scenarios than a parameter sweep or random search.
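The harm metric described above can be sketched as a throughput-based loss fraction. This is a minimal sketch under assumptions: the function name is hypothetical, and the dissertation's precise definition may also cover other performance measures such as latency.

```python
def harm(solo_rate, competing_rate):
    # Hypothetical helper illustrating the harm idea: the fraction of
    # a flow's solo throughput it loses when competing with another CCA.
    # 0.0 means no harm; values approaching 1.0 mean near-starvation.
    if competing_rate >= solo_rate:
        return 0.0
    return (solo_rate - competing_rate) / solo_rate

# A flow that achieves 80 Mbps alone but only 20 Mbps under competition
# suffers harm of 0.75.
print(harm(80.0, 20.0))
```

A harm-based threshold would then deem a new CCA deployable only if the harm it inflicts on existing flows stays below some bound, rather than requiring every flow to receive an equal rate.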
                    This content will become publicly available on October 12, 2026
                            
Uni-MPTCP(ω⃗, n): A Unified MPTCP Congestion Control Algorithm
                        
                    
    
A fundamental design principle of MultiPath TCP (MPTCP) congestion control algorithms (CCAs) is that an MPTCP flow should be fair to, and not harm, TCP flows. Unfortunately, to deal with cost heterogeneity among subflow interfaces, existing cost-aware MPTCP CCAs often violate this design principle in an attempt to minimize cost. Based on the network utility maximization (NUM) framework, we put forward Uni-MPTCP(ω⃗, n), a NUM-optimal, unified MPTCP CCA with n subflow paths and an n-dimensional weight vector ω⃗ with n − 1 independent elements. Uni-MPTCP(ω⃗, n) abides by this design principle for arbitrary ω⃗ and can be customized to achieve specific cost design objectives through proper adaptation of ω⃗. As such, Uni-MPTCP(ω⃗, n) provides a unified solution for enabling cost-aware MPTCP CCAs while adhering to the design principle. Finally, we put forward an adaptation algorithm for ω in Uni-MPTCP(ω, 2), aiming to maintain a target MPTCP flow rate at minimum cost in a cost-heterogeneous case with dual connectivity. Test results based on NS-3 simulation demonstrate that Uni-MPTCP(ω, 2) can indeed effectively track a given flow-rate target at minimum cost while adhering to the design principle.
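The dual-connectivity adaptation of ω can be pictured as a simple feedback loop: steer more traffic onto the costly subflow only when the aggregate rate falls short of the target. This is a hypothetical sketch; the paper's actual update rule is not given in the abstract, and the interpretation of ω as the costly-path traffic share is an assumption.

```python
def adapt_omega(omega, measured_rate, target_rate, step=0.05):
    # Hypothetical feedback step for the scalar weight omega in
    # Uni-MPTCP(omega, 2). Here omega in [0, 1] is assumed to be the
    # share of traffic steered onto the costly subflow (e.g. cellular
    # vs. Wi-Fi). Raise it when the aggregate MPTCP rate misses the
    # target; lower it otherwise so cost stays minimal.
    if measured_rate < target_rate:
        return min(1.0, omega + step)
    return max(0.0, omega - step)

# Rate below target: shift weight toward the costly path.
omega = adapt_omega(0.5, measured_rate=50.0, target_rate=100.0)
```

Running such a step once per RTT (or per measurement epoch) would let ω converge to the smallest costly-path share that still meets the rate target, which matches the minimum-cost objective stated above.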
        
    
- Award ID(s): 2226117
- PAR ID: 10636343
- Publisher / Repository: IEEE
- Date Published:
- Format(s): Medium: X
- Location: Sydney, Australia
- Sponsoring Org: National Science Foundation
More Like this
- Recent congestion control research has focused on purpose-built algorithms designed for the special needs of specific applications. Often, limited testing before deploying a CCA results in unforeseen and hard-to-debug performance issues due to the complex ways a CCA interacts with other existing CCAs and diverse network environments. We present CC-Fuzz, an automated framework that uses genetic search algorithms to generate adversarial network traces and traffic patterns for stress-testing CCAs. Initial results include CC-Fuzz automatically finding a bug in BBR that causes it to stall permanently, and automatically discovering the well-known low-rate TCP attack, among other things.
- The well-known susceptibility of millimeter wave links to human blockage and client mobility has recently motivated researchers to propose approaches that leverage both 802.11ad radios (operating in the 60 GHz band) and legacy 802.11ac radios (operating in the 5 GHz band) in dual-band commercial off-the-shelf devices to simultaneously provide Gbps throughput and reliability. One such approach is via Multipath TCP (MPTCP), a transport layer protocol that is transparent to applications and requires no changes to the underlying wireless drivers. However, MPTCP (as well as other bundling approaches) has only been evaluated to date in 60 GHz WLANs with laptop clients. In this work, we port for the first time the MPTCP source code to a dual-band smartphone equipped with an 802.11ad and an 802.11ac radio. We discuss the challenges we face and the system-level optimizations required to enable the phone to support Gbps data rates and yield optimal MPTCP throughput (i.e., the sum of the individual throughputs of the two radios) under ideal conditions. We also evaluate for the first time the power consumption of MPTCP in a dual-band 802.11ad/ac smartphone and provide recommendations towards the design of an energy-aware MPTCP scheduler. We make our source code publicly available to enable other researchers to experiment with MPTCP in smartphones equipped with millimeter wave radios.
- Much of our understanding of congestion control algorithm (CCA) throughput and fairness is derived from models and measurements that (implicitly) assume congestion occurs in the last mile. That is, these studies evaluated CCAs in "small scale" edge settings at the scale of tens of flows and up to a few hundred Mbps bandwidths. However, recent measurements show that congestion can also occur at the core of the Internet on inter-provider links, where thousands of flows share high bandwidth links. Hence, a natural question is: Does our understanding of CCA throughput and fairness continue to hold at the scale found in the core of the Internet, with 1000s of flows and Gbps bandwidths? Our preliminary experimental study finds that some expectations derived in the edge setting do not hold at scale. For example, using loss rate as a parameter to the Mathis model to estimate TCP NewReno throughput works well in edge settings, but does not provide accurate throughput estimates when thousands of flows compete at high bandwidths. In addition, BBR – which achieves good fairness at the edge when competing solely with other BBR flows – can become very unfair to other BBR flows at the scale of the core of the Internet. In this paper, we discuss these results and others, as well as key implications for future CCA analysis and evaluation.
- BBR is a new congestion control algorithm (CCA) deployed for Chromium QUIC and the Linux kernel. As the default CCA for YouTube (which commands 11+% of Internet traffic), BBR has rapidly become a major player in Internet congestion control. BBR's fairness or friendliness to other connections has recently come under scrutiny as measurements from multiple research groups have shown undesirable outcomes when BBR competes with traditional CCAs. One such outcome is a fixed, 40% proportion of link capacity consumed by a single BBR flow when competing with as many as 16 loss-based algorithms like Cubic or Reno. In this short paper, we provide the first model capturing BBR's behavior in competition with loss-based CCAs. Our model is coupled with practical experiments to validate its implications. The key lesson is this: under competition, BBR becomes window-limited by its 'in-flight cap', which then determines BBR's bandwidth consumption. By modeling the value of BBR's in-flight cap under varying network conditions, we can predict BBR's throughput when competing against Cubic flows with a median error of 5%, and against Reno with a median error of 8%.
 An official website of the United States government