

Title: A price-aware congestion control protocol for cloud services
Abstract

In current infrastructure-as-a-service (IaaS) cloud services, customers are charged only for their usage of computing and storage resources, not for network resources. The difficulty lies in the fact that it is nontrivial to allocate network resources to individual customers effectively, in terms of both performance and cost, especially for short-lived flows, because the network environment is made highly dynamic by the flows generated by all customers. To tackle this challenge, in this paper we propose an end-to-end Price-Aware Congestion Control Protocol (PACCP) for cloud services. PACCP is a network utility maximization (NUM) based optimal congestion control protocol. It supports three classes of service (CoSes): best-effort (BE) service, differentiated service (DS), and minimum-rate-guaranteed (MRG) service. In PACCP, the desired CoS or rate allocation for a given flow is enabled by properly setting a pair of control parameters, i.e., a minimum guaranteed rate and a utility weight, which, in turn, determine the price paid by the user of the flow. Two pricing models are proposed: a coarse-grained VM-Based Pricing model (VBP) and a fine-grained Flow-Based Pricing model (FBP). The optimality of PACCP is verified by both large-scale simulations and a small-scale testbed implementation. The price-performance consistency of PACCP is evaluated using real datacenter workloads. The results demonstrate that PACCP provides minimum rate guarantees, high bandwidth utilization, and fair rate allocation, commensurate with the pricing models.
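As a minimal sketch of the control-parameter idea described in the abstract (not the actual PACCP algorithm), the code below runs a NUM-style rate update in which each flow carries a minimum guaranteed rate m and a utility weight w. The weighted log utility, the dual-style link price update, and all numerical values are illustrative assumptions.

```python
# Minimal sketch (not PACCP itself) of a NUM-style rate update driven by
# the two control parameters named in the abstract: a minimum guaranteed
# rate m and a utility weight w.

def updated_rate(rate, w, m, price, step=0.1, max_rate=10.0):
    """One gradient step for the assumed utility U(x) = w * log(x).

    dU/dx = w / x, so the rate rises while w / rate exceeds the congestion
    price and falls otherwise; the clamp keeps the rate at or above the
    minimum guaranteed rate m (the tiny floor avoids division by zero).
    """
    new_rate = rate + step * (w / rate - price)
    return min(max(new_rate, m, 1e-3), max_rate)


if __name__ == "__main__":
    # Two flows sharing one bottleneck link: a best-effort flow (m = 0,
    # weight 1) and a minimum-rate-guaranteed flow (m = 2, weight 2).
    flows = [
        {"rate": 1.0, "w": 1.0, "m": 0.0},
        {"rate": 1.0, "w": 2.0, "m": 2.0},
    ]
    capacity, price = 6.0, 0.0
    for _ in range(2000):
        total = sum(f["rate"] for f in flows)
        # Dual-style congestion price: rises while the link is overloaded.
        price = max(0.0, price + 0.01 * (total - capacity))
        for f in flows:
            f["rate"] = updated_rate(f["rate"], f["w"], f["m"], price)
    print([round(f["rate"], 2) for f in flows])
```

At the equilibrium of this assumed dynamic, each rate approaches w divided by the link price, so the capacity is split roughly in proportion to the utility weights while the clamp enforces any minimum guaranteed rate.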

 
Award ID(s):
2008835
NSF-PAR ID:
10360309
Author(s) / Creator(s):
Publisher / Repository:
Springer Science + Business Media
Date Published:
Journal Name:
Journal of Cloud Computing
Volume:
10
Issue:
1
ISSN:
2192-113X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Amazon introduced spot instances in December 2009, enabling “customers to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current Spot Price.” Amazon’s real-time computational spot market was novel in multiple respects. For example, it was the first (and to date only) large-scale public implementation of market-based resource allocation based on dynamic pricing after decades of research, and it provided users with useful information, control knobs, and options for optimizing the cost of running cloud applications. Spot instances also introduced the concept of transient cloud servers derived from variable idle capacity that cloud platforms could revoke at any time. Transient servers have since become central to efficient resource management of modern clusters and clouds. As a result, Amazon’s spot market was the motivation for substantial research over the past decade. Yet, in November 2017, Amazon effectively ended its real-time spot market by announcing that users no longer needed to place bids and that spot prices will “...adjust more gradually, based on longer-term trends in supply and demand.” The changes made spot instances more similar to the fixed-price transient servers offered by other cloud platforms. Unfortunately, while these changes made spot instances less complex, they eliminated many benefits to sophisticated users in optimizing their applications. This paper provides a retrospective on Amazon’s real-time spot market, including its advantages and disadvantages for allocating transient servers compared to current fixed-price approaches. We also discuss some fundamental problems with Amazon’s spot market, which we identified in prior work (from 2016), that predicted its eventual end. We then discuss potential options for allocating transient servers that combine the advantages of Amazon’s real-time spot market, while also addressing the problems that likely led to its elimination. 
  2. In the context of subscription-based services, many technologies improve over time, and service providers can offer increasingly powerful service upgrades to their customers, but at a launch cost and at the expense of sales of existing products. We propose a model of technology upgrades and characterize the optimal pricing and timing of technology introductions for a service provider who price-discriminates among customers based on their upgrade experience, in the face of customers who are averse to switching to improved offerings. We first characterize optimal discriminatory pricing for the infinite-horizon pricing problem with fixed introduction times. We reduce the optimal pricing problem to a tractable optimization problem and propose an efficient algorithm for solving it. Our algorithm computes optimal discriminatory prices within a fraction of a second, even for large problem instances. We then show that periodic introduction times, combined with optimal pricing, enjoy optimality guarantees. In particular, we first show that, as long as the introduction intervals are constrained to be nonincreasing, it is optimal to have periodic introductions after an initial warm-up phase. When allowing general introduction intervals, we show that periodic introduction intervals after some time are optimal in a more restricted sense. Numerical experiments suggest that it is generally optimal to have periodic introductions after an initial warm-up phase. Finally, we focus on a setting in which the firm does not price-discriminate based on customers’ experience. We show both analytically and numerically that in the nondiscriminatory setting, a simple policy of Myerson (i.e., myopic) pricing and periodic introductions enjoys good performance guarantees. Funding: This material is based upon work supported by INSEAD and University Pierre et Marie Curie [Grant ELICIT], as well as by the National Science Foundation [Grant 2110707]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.2364.
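As a toy illustration of the "Myerson (i.e., myopic) pricing" ingredient mentioned above, the sketch below grid-searches the price that maximizes expected per-customer revenue p·(1 − F(p)); the uniform willingness-to-pay distribution and the grid search are assumptions for illustration, not this paper's model.

```python
# Toy myopic (Myerson-style) pricing: pick the price maximizing expected
# per-customer revenue p * (1 - F(p)) under an assumed valuation
# distribution. Illustrative only; not the paper's pricing algorithm.

def expected_revenue(price, cdf):
    """Revenue from one customer who buys iff her valuation exceeds the price."""
    return price * (1.0 - cdf(price))

def myopic_price(cdf, grid_size=10_000):
    """Best fixed price on a uniform grid over [0, 1]."""
    candidates = (i / grid_size for i in range(grid_size + 1))
    return max(candidates, key=lambda p: expected_revenue(p, cdf))

if __name__ == "__main__":
    uniform_cdf = lambda v: min(max(v, 0.0), 1.0)  # valuations ~ Uniform[0, 1]
    p_star = myopic_price(uniform_cdf)
    print(p_star, expected_revenue(p_star, uniform_cdf))  # about 0.5 and 0.25
```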
  3. We consider a fundamental pricing model in which a fixed number of units of a reusable resource are used to serve customers. Customers arrive to the system according to a stochastic process and, upon arrival, decide whether to purchase the service, depending on their willingness to pay and the current price. The service time during which the resource is used by the customer is stochastic, and the firm may incur a service cost. This model represents various markets for reusable resources, such as cloud computing, shared vehicles, rotable parts, and hotel rooms. In the present paper, we analyze this pricing problem when the firm attempts to maximize a weighted combination of three central metrics: profit, market share, and service level. Under Poisson arrivals, exponential service times, and standard assumptions on the willingness-to-pay distribution, we establish a series of results that characterize the performance of static pricing in such environments. In particular, although an optimal policy is fully dynamic in such a context, we prove that a static pricing policy simultaneously guarantees 78.9% of the profit, market share, and service level from the optimal policy. Notably, this result holds for any service rate and number of units the firm operates. Our proof technique relies on a judicious construction of a static price that is derived directly from the optimal dynamic pricing policy. In the special case in which there are two units and the induced demand is linear, we also prove that the static policy guarantees 95.5% of the profit from the optimal policy. Our numerical findings on a large test bed of instances suggest that the latter result is quite indicative of the profit obtained by the static pricing policy across all parameters. 
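The sketch below is a rough, self-contained version of the trade-off described above: it scores a static price in a loss system with a fixed number of reusable units, Poisson arrivals whose rate falls with price, and exponential service, using a weighted sum of profit, market share, and service level. The linear demand curve, parameter values, and objective weights are assumptions for illustration; this is not the paper's model or its 78.9% guarantee analysis.

```python
# Evaluate a static price for a pool of reusable units via the standard
# Erlang-B loss formula. All parameters are illustrative assumptions.

def erlang_b(offered_load, servers):
    """Blocking probability from the standard Erlang-B recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def weighted_objective(price, base_rate=10.0, mu=1.0, servers=5,
                       service_cost=0.2, weights=(1.0, 1.0, 1.0)):
    # Assumed demand: willingness to pay ~ Uniform[0, 1], so the arrival
    # rate of customers willing to buy is base_rate * (1 - price).
    demand_rate = base_rate * max(0.0, 1.0 - price)
    blocking = erlang_b(demand_rate / mu, servers)
    admitted_rate = demand_rate * (1.0 - blocking)

    profit = (price - service_cost) * admitted_rate
    market_share = admitted_rate / base_rate
    service_level = 1.0 - blocking
    w_p, w_m, w_s = weights
    return w_p * profit + w_m * market_share + w_s * service_level

if __name__ == "__main__":
    best = max((p / 100 for p in range(1, 100)), key=weighted_objective)
    print(round(best, 2), round(weighted_objective(best), 3))
```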
  4. The rapid growth of mobile data traffic is straining cellular networks. A natural approach to alleviating cellular network congestion is to use, in addition to the cellular interface, secondary interfaces such as WiFi, dynamic spectrum, and mmWave to aid cellular networks in handling mobile traffic. The fundamental question then becomes: how should traffic be distributed over the different interfaces, taking into account different application QoS requirements and the diverse nature of the radio interfaces? To this end, we propose the Discounted Rate Utility Maximization (DRUM) framework with interface costs as a means to quantify application preferences in terms of throughput, delay, and cost. The flow rate allocation problem can be formulated as a convex optimization problem. However, solving this problem requires non-causal knowledge of the time-varying capacities of all radio interfaces. To this end, we propose an online predictive algorithm that exploits the predictability of wireless connectivity over a small look-ahead window w. We show that, under some mild conditions, the proposed algorithm achieves a constant competitive ratio independent of the time horizon T. Furthermore, the competitive ratio approaches 1 as the prediction window increases. We also propose another predictive algorithm, based on the "Receding Horizon Control" principle from control theory, that performs very well in practice. Numerical simulations validate our formulation by showing that, under the DRUM framework, the more delay-tolerant the flow, the less it uses the cellular network, preferring to transmit in high-rate bursts over the secondary interfaces; conversely, delay-sensitive flows transmit consistently irrespective of the different interfaces' availability. Simulations also show that the proposed online predictive algorithms achieve near-optimal performance compared to the offline prescient solution under all considered scenarios.
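The sketch below is a toy, static version of the multi-interface allocation idea described above: it splits one flow's rate across interfaces to maximize a concave utility of the total rate minus per-interface usage costs, subject to interface capacities. The log utility, the greedy solution, and the example numbers are illustrative assumptions, not the DRUM algorithm (which is online and uses predicted future interface capacities).

```python
# Toy single-slot allocation of one flow's rate across radio interfaces,
# trading a concave rate utility against per-interface usage costs.

def allocate_flow(interfaces):
    """interfaces: list of (name, capacity, cost_per_unit).

    Maximizes log(total_rate) - sum_i cost_i * x_i with 0 <= x_i <= capacity_i.
    Because the utility depends only on the total rate, cheaper interfaces
    are filled first, and filling stops once the marginal utility 1/total
    drops to an interface's per-unit cost.
    """
    allocation = {name: 0.0 for name, _, _ in interfaces}
    total = 0.0
    for name, capacity, cost in sorted(interfaces, key=lambda item: item[2]):
        target_total = float("inf") if cost <= 0 else 1.0 / cost
        x = max(0.0, min(capacity, target_total - total))
        allocation[name] = x
        total += x
    return allocation

if __name__ == "__main__":
    # Hypothetical setup: cheap but capacity-limited WiFi, pricier cellular.
    print(allocate_flow([("wifi", 5.0, 0.1), ("cellular", 10.0, 0.5)]))
```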
  5. Many of the infrastructure sectors that are considered crucial by the Department of Homeland Security include networked systems (physical and temporal) that function to move some commodity, such as electricity, people, or even communication, from one location of importance to another. The costs associated with these flows make up the price of the network’s normal functionality. These networks have limited capacities, which cause the marginal cost of a unit of flow across an edge to increase as congestion builds. In order to limit the expense of meeting a network’s normal demand, we aim to increase the resilience of the system and, specifically, the resilience of the arc capacities. Divisions of critical infrastructure have faced difficulties in recent years as inadequate resources have been available for needed upgrades and repairs. Without being able to determine the future factors that will cause damage, both minor and extreme, to these networks, officials must decide how best to allocate limited funds now so that these essential systems can withstand the heavy weight of society’s reliance on them. We model these resource allocation decisions using a two-stage stochastic program (SP) for the purpose of network protection. Starting with a general form for a basic two-stage SP, we enforce assumptions that specify characteristics key to this type of decision model. The second-stage objective, which represents the price of the network’s routine functionality, is nonlinear, as it reflects the increasing marginal cost per unit of additional flow across an arc. After the model has been designed to reflect the network protection problem, we are left with a nonconvex, nonlinear, nonseparable risk-neutral program. This research focuses on key reformulation techniques that transform the problematic model into one that is convex, separable, and much more solvable. Our approach uses perspective functions to convexify the feasibility set of the second stage and second-order cone constraints to represent nonlinear constraints in a form better suited to computational solvers. Once these methods have been applied to the risk-neutral model, we introduce a risk measure into the first stage that allows us to control the balance between an efficient, solvable model and the need to hedge against extreme events. Using Benders cuts that exploit linear separability, we give a decomposition and solution algorithm for the general network model. The innovations in this formulation are then implemented on a transportation network with given flow demands.
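As a small numeric illustration of the convexification idea mentioned above, the sketch below uses a quadratic-over-linear congestion cost, flow²/capacity, which is the perspective of flow², and checks its standard second-order cone representation. The specific cost function is an assumption for illustration, not the paper's exact second-stage objective.

```python
# The epigraph constraint flow**2 / capacity <= cost is equivalent to the
# second-order cone constraint
#     || (2*flow, capacity - cost) ||_2 <= capacity + cost,
# which is the kind of reformulation conic solvers accept directly.

import math

def congestion_cost(flow, capacity):
    """Convex cost whose marginal cost grows as flow approaches capacity."""
    return flow ** 2 / capacity

def satisfies_soc(flow, capacity, cost):
    """Check the equivalent second-order cone inequality."""
    return math.hypot(2.0 * flow, capacity - cost) <= capacity + cost + 1e-9

if __name__ == "__main__":
    flow, capacity = 3.0, 4.0
    c = congestion_cost(flow, capacity)            # 2.25
    print(satisfies_soc(flow, capacity, c))        # True: cost level is attainable
    print(satisfies_soc(flow, capacity, c - 1.0))  # False: below the true cost
```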