

Title: Efficient Algorithms for Learning Revenue-Maximizing Two-Part Tariffs

A two-part tariff is a pricing scheme consisting of an up-front lump-sum fee and a per-unit fee. Many real-world products, such as gym memberships and cell phone data plans, are sold via a menu, or list, of two-part tariffs. We study learning high-revenue menus of two-part tariffs from buyer valuation data, in the setting where the mechanism designer has access to samples from the distribution over buyers' values rather than an explicit description of it. Our algorithms have clear direct uses and supply the missing piece for the recent generalization theory of two-part tariffs. We present a polynomial-time algorithm for optimizing a single two-part tariff, and an algorithm for optimizing a length-L menu of two-part tariffs whose running time is exponential in L but polynomial in all other problem parameters. We then generalize the problem to multiple markets. We prove how many samples suffice to guarantee that a two-part tariff scheme that is feasible on the samples is also feasible on a new problem instance with high probability. We show that computing revenue-maximizing feasible prices is hard even for buyers with additive valuations. For buyers with identical valuation distributions, we give a condition under which the two-part tariff scheme from the unsegmented setting is optimal for the market-segmented setting. Finally, we prove a generalization result stating how many samples suffice so that computing the unsegmented solution on the samples yields, with high probability, a near-optimal solution for the market-segmented setting.
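To make the pricing model concrete, here is a minimal sketch (not the paper's algorithm) of how the revenue of a fixed menu can be estimated from valuation samples. The buyer model below, with nonincreasing marginal values and a buy-nothing outside option, and all function names are our own illustration:

```python
# Minimal sketch: estimating the revenue of a menu of two-part tariffs
# from sampled buyer valuations. A buyer with marginal values
# vals[0] >= vals[1] >= ... (assumed nonincreasing) buys q units under
# tariff (p1, p2) for utility sum(vals[:q]) - p1 - p2*q, choosing the
# menu entry and quantity that maximize utility, or buying nothing.

def buyer_choice(vals, menu):
    """Return the payment a utility-maximizing buyer makes, or 0.0 if
    every menu option gives negative utility."""
    best_utility, best_payment = 0.0, 0.0  # outside option: buy nothing
    for p1, p2 in menu:
        cumulative = 0.0
        for q in range(1, len(vals) + 1):
            cumulative += vals[q - 1]
            utility = cumulative - p1 - p2 * q
            if utility > best_utility:
                best_utility, best_payment = utility, p1 + p2 * q
    return best_payment

def empirical_revenue(samples, menu):
    """Average revenue of `menu` over sampled valuation vectors."""
    return sum(buyer_choice(v, menu) for v in samples) / len(samples)

# Example: two sampled buyers, menu of two tariffs (lump sum, per-unit fee).
samples = [[5.0, 4.0, 1.0], [3.0, 2.5, 2.0]]
menu = [(2.0, 1.5), (6.0, 0.5)]
print(empirical_revenue(samples, menu))  # 5.75
```

Estimating revenue this way is the inner loop one would expect in any sample-based search over menus; the paper's contribution is doing that search efficiently.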

 
Award ID(s):
1919453, 1901403, 1910321
NSF-PAR ID:
10217107
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20)
Page Range / eLocation ID:
332 to 338
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider a natural generalization of classical scheduling problems to a setting in which using a time unit for processing a job causes some time-dependent cost, the time-of-use tariff, which must be paid in addition to the standard scheduling cost. We focus on preemptive single-machine scheduling and two classical scheduling cost functions, the sum of (weighted) completion times and the maximum completion time, that is, the makespan. While these problems are easy to solve in the classical scheduling setting, they are considerably more complex when time-of-use tariffs must be considered. We contribute optimal polynomial-time algorithms and best-possible approximation algorithms. For the problem of minimizing the total (weighted) completion time on a single machine, we present a polynomial-time algorithm that computes, for any given sequence of jobs, an optimal schedule, i.e., the optimal set of time slots to be used for preemptively scheduling jobs according to the given sequence. This result is based on dynamic programming using a subtle analysis of the structure of optimal solutions and a potential function argument. With this algorithm, we solve the unweighted problem optimally in polynomial time. For the more general problem, in which jobs may have individual weights, we develop a polynomial-time approximation scheme (PTAS) based on a dual scheduling approach introduced for scheduling on a machine of varying speed. As the weighted problem is strongly NP-hard, our PTAS is the best possible approximation we can hope for. For preemptive scheduling to minimize the makespan, we show that there is a comparably simple optimal algorithm with polynomial running time. This is true even in a certain generalized model with unrelated machines.
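As a toy illustration of the tension this paper studies (scheduling cost plus tariff cost), the sketch below solves a heavily simplified variant: unit work on one machine with a makespan objective and a known tariff per slot. It is not the paper's dynamic program, and all names are ours:

```python
# Simplified sketch: one machine, `work` unit-size slots of processing to
# do, tariff[t] is the cost of using slot t (0-indexed; slot t finishes at
# time t+1). Objective: makespan plus total tariff paid. For each candidate
# deadline T we buy the `work` cheapest slots among the first T; minimizing
# over T is optimal for this simplified objective.

def min_makespan_plus_tariff(work, tariff):
    best = float("inf")
    for T in range(work, len(tariff) + 1):
        cheapest = sorted(tariff[:T])[:work]
        best = min(best, T + sum(cheapest))
    return best

# Three units of work; cheap off-peak slots mixed with expensive peak slots.
print(min_makespan_plus_tariff(3, [1.0, 5.0, 1.0, 0.5, 4.0, 0.5]))  # 6.5
```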
  2. Efficient and truthful mechanisms to price time on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers online revenue maximization for a unit-capacity server, when jobs are non-preemptive, in the Bayesian setting: at each time step, one job arrives, with parameters drawn from an underlying distribution. We design an efficiently computable truthful posted-price mechanism, which maximizes revenue in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic. We also show the pricing mechanism is robust to learning the job distribution from samples, where polynomially many samples suffice to obtain near-optimal prices.
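A hedged sketch of just the sample-based pricing step: with a single posted price and values drawn i.i.d., the empirical-revenue-maximizing price can be found by scanning the sampled values themselves (the paper's mechanism is richer, posting prices over time for non-preemptive jobs):

```python
# Sketch: learning a single posted price from samples of job values. With
# posted price p, an arriving job buys iff its value is at least p, so
# empirical revenue is p times the fraction of samples >= p. Since this
# objective only changes at sample values, scanning them is enough.

def best_posted_price(samples):
    best_price, best_rev = 0.0, 0.0
    for p in sorted(set(samples)):
        rev = p * sum(1 for v in samples if v >= p) / len(samples)
        if rev > best_rev:
            best_price, best_rev = p, rev
    return best_price, best_rev

print(best_posted_price([1.0, 2.0, 3.0, 8.0]))  # (8.0, 2.0)
```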

     
  3. We study the menu complexity of optimal and approximately-optimal auctions in the context of the "FedEx" problem, a so-called "one-and-a-half-dimensional" setting where a single bidder has both a value and a deadline for receiving an item [FGKK 16]. The menu complexity of an auction is equal to the number of distinct (allocation, price) pairs that a bidder might receive [HN 13]. We show the following when the bidder has $n$ possible deadlines: 1) Exponential menu complexity is necessary to be exactly optimal: there exist instances where the optimal mechanism has menu complexity $\geq 2^n-1$. This matches exactly the upper bound provided by Fiat et al.'s algorithm, and resolves one of their open questions [FGKK 16]. 2) Fully polynomial menu complexity is necessary and sufficient for approximation: for all instances, there exists a mechanism guaranteeing a multiplicative $(1-\epsilon)$-approximation to the optimal revenue with menu complexity $O(n^{3/2}\sqrt{\frac{\min\{n/\epsilon,\ln(v_{\max})\}}{\epsilon}}) = O(n^2/\epsilon)$, where $v_{\max}$ denotes the largest value in the support of integral distributions. 3) There exist instances where any mechanism guaranteeing a multiplicative $(1-O(1/n^2))$-approximation to the optimal revenue requires menu complexity $\Omega(n^2)$. Our main technique is the polygon approximation of concave functions [Rote 91], and our results here should be of independent interest. We further show how our techniques can be used to resolve an open question of [DW 17] on the menu complexity of optimal auctions for a budget-constrained buyer.
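The polygon-approximation technique itself is easy to demonstrate. The sketch below (our own simplification, not the paper's construction) recursively subdivides until the chord between breakpoints is within a multiplicative $(1-\epsilon)$ of a concave function at grid points:

```python
import math

# Rough sketch of polygon (piecewise-linear) approximation of a concave
# function: recursively subdivide until the chord between endpoints is
# within a multiplicative (1 - eps) of f at grid points of each segment.
# For concave f the chord always lies below f, so only the lower-side
# gap needs checking.

def polygon_approx(f, lo, hi, eps, grid=100):
    def chord_ok(a, b):
        fa, fb = f(a), f(b)
        for i in range(1, grid):
            x = a + (b - a) * i / grid
            chord = fa + (fb - fa) * (x - a) / (b - a)
            if chord < (1 - eps) * f(x):  # chord dips too far below f
                return False
        return True

    if chord_ok(lo, hi):
        return [lo, hi]
    mid = (lo + hi) / 2
    # Merge the two halves, dropping the duplicated midpoint.
    return polygon_approx(f, lo, mid, eps, grid)[:-1] + \
           polygon_approx(f, mid, hi, eps, grid)

# Approximate sqrt on [1, 16]; keeping lo > 0 matters because a
# multiplicative guarantee needs f bounded away from zero.
pts = polygon_approx(math.sqrt, 1.0, 16.0, 0.05)
print(len(pts), [round(p, 2) for p in pts])  # few breakpoints suffice
```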
  4. Most results in revenue-maximizing mechanism design hinge on “getting the price right”: selling goods to bidders at prices low enough to encourage a sale but high enough to garner nontrivial revenue. This approach is difficult to implement when the seller has little or no a priori information about bidder valuations, or when the setting is sufficiently complex, such as matching markets with heterogeneous goods. In this paper, we apply a robust approach to designing auctions for revenue. Instead of relying on prior knowledge regarding bidder valuations, we “let the market do the work” and let prices emerge from competition for scarce goods. We analyze the revenue guarantees of one of the simplest imaginable implementations of this idea: first, we enhance competition in the market by increasing demand (or alternatively, by limiting supply), and second, we run a standard second-price (Vickrey) auction. In their renowned work from 1996, Bulow and Klemperer [Bulow J, Klemperer P (1996) Auctions vs. negotiations. Amer. Econom. Rev. 86(1):180–194.] apply this method to markets with single goods. As our main result, we give the first application beyond single-parameter settings, proving that, simultaneously for many valuation distributions, this method achieves expected revenue at least as good as the optimal revenue in the original market. Our robust and simple approach provides a handle on the elusive optimal revenue in multi-item matching markets and shows when the use of welfare-maximizing Vickrey auctions is justified, even if revenue is a priority. By establishing quantitative tradeoffs, our work provides guidelines for a seller in choosing between two different revenue-extracting strategies: sophisticated pricing based on market research, or advertising to draw additional bidders.
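The single-good, i.i.d. version of this recruit-then-Vickrey idea (the classical Bulow-Klemperer result) is easy to check numerically. Below is a Monte Carlo sketch under an assumed U[0,1] value distribution, for which the Myerson-optimal reserve is 1/2; the paper's result concerns the much richer matching-market setting:

```python
import random

# Monte Carlo check, for a single good and i.i.d. U[0,1] values: a
# second-price auction with n+1 bidders earns at least the Myerson-optimal
# revenue with n bidders (second-price auction with reserve 1/2).

def second_price_revenue(n_bidders, trials=200_000):
    total = 0.0
    for _ in range(trials):
        bids = sorted(random.random() for _ in range(n_bidders))
        total += bids[-2]  # winner pays the second-highest bid
    return total / trials

def myerson_revenue(n_bidders, trials=200_000, reserve=0.5):
    # Second-price auction with the optimal reserve for U[0,1].
    total = 0.0
    for _ in range(trials):
        bids = sorted(random.random() for _ in range(n_bidders))
        if bids[-1] >= reserve:  # sold only if the top bid clears the reserve
            total += max(reserve, bids[-2] if n_bidders > 1 else 0.0)
    return total / trials

random.seed(0)
n = 3
print("Vickrey, n+1 bidders:", second_price_revenue(n + 1))
print("Myerson, n bidders:  ", myerson_revenue(n))
```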
  5. A large fraction of online advertisements is sold via repeated second-price auctions. In these auctions, the reserve price is the main tool for the auctioneer to boost revenues. In this work, we investigate the following question: how can the auctioneer optimize reserve prices by learning from previous bids while accounting for the long-term incentives and strategic behavior of the bidders? To this end, we consider a seller who repeatedly sells ex ante identical items via a second-price auction. Buyers' valuations for each item are drawn independently and identically from a distribution F that is unknown to the seller. We find that if the seller attempts to dynamically update a common reserve price based on the bidding history, this creates an incentive for buyers to shade their bids, which can hurt revenue. When there is more than one buyer, incentive compatibility can be restored by using personalized reserve prices, where the personal reserve price for each buyer is set using the historical bids of the other buyers. Such a mechanism asymptotically achieves the expected revenue obtained under the static Myerson optimal auction for F. Further, if valuation distributions differ across bidders, the loss relative to the Myerson benchmark is only quadratic in the size of such differences. We extend our results to a contextual setting where the valuations of the buyers depend on observed features of the items. When up-front fees are permitted, we show how the seller can determine such payments based on the bids of others to obtain an approximately incentive-compatible mechanism that extracts nearly all the surplus.
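A minimal sketch of the incentive idea, assuming a simplified scheme in which each buyer's personalized reserve is the empirical monopoly price of the other buyers' historical bids (the paper's actual mechanism is more refined):

```python
# Each buyer's reserve is fit only to the *other* buyers' historical bids,
# so no buyer can lower their own future reserve by shading today. Here the
# reserve is the empirical monopoly price of the others' bids.

def monopoly_price(bids):
    """Price p maximizing p * (number of bids >= p), over bid values."""
    return max(set(bids), key=lambda p: p * sum(b >= p for b in bids))

def personalized_reserves(history):
    """history[i] = list of past bids from buyer i."""
    reserves = []
    for i in range(len(history)):
        others = [b for j, bids in enumerate(history) if j != i for b in bids]
        reserves.append(monopoly_price(others))
    return reserves

history = [[0.9, 0.8, 0.7], [0.4, 0.5, 0.6], [0.2, 0.9, 0.5]]
print(personalized_reserves(history))
```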