Title: Pricing ordered items
We study the revenue guarantees and approximability of item pricing. Recent work shows that with n heterogeneous items, item pricing guarantees an O(log n) approximation to the optimal revenue achievable by any (buy-many) mechanism, even when buyers have arbitrary combinatorial valuations. However, finding good item prices is challenging – it is known that even under unit-demand valuations, it is NP-hard to find item prices that approximate the revenue of the optimal item pricing better than O(√n). Our work provides a more fine-grained analysis of the revenue guarantees and computational complexity in terms of the number of item “categories”, which may be significantly fewer than n. We assume the items are partitioned into k categories so that items within a category are totally ordered and a buyer’s value for a bundle depends only on the best item it contains from each category. We show that item pricing guarantees an O(log k) approximation to the optimal (buy-many) revenue and provide a PTAS for computing the optimal item pricing when k is constant. We also provide a matching lower bound showing that the problem is (strongly) NP-hard even when k=1. Our results naturally extend to the case where items are only partially ordered, in which case the revenue guarantees and computational complexity depend on the width of the partial ordering, i.e., the size of the largest set in which no two items are comparable.
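As a concrete illustration of the ordered-category model, here is a minimal Python sketch (all names are illustrative, not taken from the paper): a bundle's value depends only on the best item it contains from each category, and a buyer facing posted item prices purchases a utility-maximizing bundle.

```python
from itertools import chain, combinations

def bundle_value(bundle, rank, value_of_profile):
    """A bundle's value depends only on the best item per category."""
    best = {}
    for c, i in bundle:  # items are (category, index) pairs
        if c not in best or rank[c][i] > rank[c][best[c]]:
            best[c] = i
    # value_of_profile maps the tuple of best items (one per category) to a value
    return value_of_profile(tuple(sorted(best.items())))

def best_response(items, prices, rank, value_of_profile):
    """Brute-force utility-maximizing bundle under posted item prices."""
    bundles = chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))
    return max(bundles, key=lambda b: bundle_value(b, rank, value_of_profile)
                                      - sum(prices[x] for x in b))
```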
Award ID(s):
2225259 2144298 2008006
NSF-PAR ID:
10337929
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing
Page Range / eLocation ID:
722 to 735
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Online pricing has been the focus of extensive research in recent years, particularly in the context of selling an item to sequentially arriving users. However, what if a provider wants to maximize revenue by selling multiple items to multiple users in each round? This presents a complex problem, as the provider must intelligently offer the items to those users who value them the most without exceeding their highest acceptable prices. In this study, we tackle this challenge by designing online algorithms that can efficiently offer and price items while learning user valuations from accept/reject feedback. We focus on three user valuation models (fixed valuations, random experiences, and random valuations) and provide algorithms with nearly optimal revenue regret guarantees. In particular, for any market setting with N users, M items, and load L (which roughly corresponds to the maximum number of simultaneous allocations possible), our algorithms achieve regret of order O(NM log log(LT)) under the fixed valuations model, O(√(NMLT)) under the random experiences model, and O(√(NMLT)) under the random valuations model in T rounds.
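The following is a generic illustration of learning from accept/reject feedback under a fixed valuation, using a standard binary-search scheme; it is a sketch of the feedback model, not the paper's algorithm, and all names are illustrative.

```python
# The buyer's unknown fixed valuation v is assumed to lie in [lo, hi].
def post_prices(v, lo=0.0, hi=1.0, rounds=20):
    """Post a price each round; the buyer accepts iff price <= v."""
    revenue = 0.0
    for _ in range(rounds):
        price = (lo + hi) / 2
        if price <= v:      # accepted: collect revenue, raise the lower bound
            revenue += price
            lo = price
        else:               # rejected: no sale, lower the upper bound
            hi = price
    return revenue, (lo, hi)

# Example: after 20 rounds the interval pins down v to within 2**-20.
print(post_prices(v=0.73))
```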
  2.
    We study the power of selling opaque products, that is, products where a feature (such as color) is hidden from the customer until after purchase. Opaque products, which are sold with a price discount, have emerged as a powerful vehicle to increase revenue for many online retailers and service providers that offer horizontally differentiated items. In the opaque selling models we consider, all of the items are sold at a single common price alongside opaque products that may correspond to various subsets of the items. We consider two types of customers, risk-neutral ones, who assume they will receive a truly random item of the opaque product, and pessimistic ones, who assume they will receive their least favorite item of the opaque product. We benchmark opaque selling against two common selling strategies: discriminatory pricing, where one explicitly charges different prices for each item, and single pricing, where a single price is charged for all the items. We give a sharp characterization of when opaque selling outperforms discriminatory pricing; namely, this result holds for situations where all customers are pessimistic or the item valuations are supported on two points. In the latter case, we also show that opaque selling with just one opaque product guarantees at least 71.9% of the revenue from discriminatory pricing. We then provide upper bounds on the potential revenue increase from opaque selling strategies over single pricing and describe cases where the increase can be significantly more than that of discriminatory pricing. Finally, we provide pricing algorithms and conduct an extensive numerical study to assess the power of opaque selling for a variety of valuation distributions and model extensions. This paper was accepted by Gabriel Weintraub, revenue management and market analytics.
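A toy sketch of the two customer types just described (names are illustrative): for an opaque product S, a risk-neutral customer values S as a uniformly random item of S, while a pessimistic customer values S as their least favorite item of S.

```python
def risk_neutral_value(v, S):
    """Expected value of a uniformly random item from the opaque set S."""
    return sum(v[i] for i in S) / len(S)

def pessimistic_value(v, S):
    """Value of the least favorite item in the opaque set S."""
    return min(v[i] for i in S)

v = {"red": 8.0, "blue": 5.0}    # valuations for two horizontally differentiated items
S = ["red", "blue"]              # the opaque product hides which color is delivered
print(risk_neutral_value(v, S))  # 6.5
print(pessimistic_value(v, S))   # 5.0
```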
  3. Most results in revenue-maximizing mechanism design hinge on “getting the price right”—selling goods to bidders at prices low enough to encourage a sale but high enough to garner nontrivial revenue. This approach is difficult to implement when the seller has little or no a priori information about bidder valuations or when the setting is sufficiently complex, such as matching markets with heterogeneous goods. In this paper, we apply a robust approach to designing auctions for revenue. Instead of relying on prior knowledge regarding bidder valuations, we “let the market do the work” and let prices emerge from competition for scarce goods. We analyze the revenue guarantees of one of the simplest imaginable implementations of this idea: first, we enhance competition in the market by increasing demand (or alternatively, by limiting supply), and second, we run a standard second price (Vickrey) auction. In their renowned work from 1996, Bulow and Klemperer [Bulow J, Klemperer P (1996) Auctions vs. negotiations. Amer. Econom. Rev. 86(1):180–194.] apply this method to markets with single goods. As our main result, we give the first application beyond single-parameter settings, proving that, simultaneously for many valuation distributions, this method achieves expected revenue at least as good as the optimal revenue in the original market. Our robust and simple approach provides a handle on the elusive optimal revenue in multi-item matching markets and shows when the use of welfare-maximizing Vickrey auctions is justified, even if revenue is a priority. By establishing quantitative tradeoffs, our work provides guidelines for a seller in choosing between two different revenue-extracting strategies: sophisticated pricing based on market research or advertising to draw additional bidders.
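A Monte Carlo sketch of the "let the market do the work" idea in its classic single-item form: for i.i.d. U[0,1] values, a Vickrey auction with one extra bidder earns at least as much in expectation as the optimal auction on the original bidders (which, by Myerson's result, is a second-price auction with reserve 1/2). All parameters are illustrative.

```python
import random

def second_price_revenue(values, reserve=0.0):
    """Revenue of a second-price auction with a reserve price."""
    bids = sorted(values, reverse=True)
    if not bids or bids[0] < reserve:
        return 0.0                       # no sale
    runner_up = bids[1] if len(bids) > 1 else 0.0
    return max(runner_up, reserve)       # winner pays the larger of the two

def estimate(n=3, trials=100_000):
    opt = augmented = 0.0
    for _ in range(trials):
        vals = [random.random() for _ in range(n)]
        opt += second_price_revenue(vals, reserve=0.5)               # optimal, n bidders
        augmented += second_price_revenue(vals + [random.random()])  # Vickrey, n + 1
    return opt / trials, augmented / trials

print(estimate())  # the augmented Vickrey auction matches or beats the optimum
```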
  4. We study the problem of approximating maximum Nash social welfare (NSW) when allocating m indivisible items among n asymmetric agents with submodular valuations. The NSW is a well-established notion of fairness and efficiency, defined as the weighted geometric mean of agents' valuations. For special cases of the problem with symmetric agents and additive(-like) valuation functions, approximation algorithms have been designed using approaches customized for these specific settings, and they fail to extend to more general settings. Hence, no approximation algorithm with factor independent of m is known either for asymmetric agents with additive valuations or for symmetric agents beyond additive(-like) valuations. In this paper, we extend our understanding of the NSW problem to far more general settings. Our main contribution is two approximation algorithms for asymmetric agents with additive and submodular valuations respectively. Both algorithms are simple to understand and involve non-trivial modifications of a greedy repeated matchings approach. High-valued items are allocated separately by un-matching certain items and re-matching them, via processes that differ between the two algorithms. We show that these approaches achieve approximation factors of O(n) and O(n log n) for the additive and submodular cases respectively, which is independent of the number of items. For additive valuations, our algorithm outputs an allocation that also achieves the fairness property of envy-free up to one item (EF1). Furthermore, we show that the NSW problem under submodular valuations is strictly harder than all currently known settings, with a hardness-of-approximation factor of e/(e-1), even for constantly many agents. For this case, we provide a different approximation algorithm that achieves a factor of e/(e-1), hence resolving it completely.
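A minimal sketch of the objective just defined: the asymmetric NSW of an allocation is the weighted geometric mean of the agents' valuations of their bundles. Names are illustrative, and valuations are assumed strictly positive here.

```python
import math

def nash_social_welfare(bundles, valuations, weights):
    """Weighted geometric mean: (prod_i v_i(A_i)^{w_i})^(1 / sum_i w_i)."""
    total = sum(weights)
    log_nsw = sum(w * math.log(v(A))
                  for w, v, A in zip(weights, valuations, bundles)) / total
    return math.exp(log_nsw)

# Example: two symmetric agents with additive valuations over items {0, 1, 2}.
vals = [lambda A: sum({0: 3.0, 1: 1.0, 2: 2.0}[i] for i in A),
        lambda A: sum({0: 1.0, 1: 4.0, 2: 2.0}[i] for i in A)]
print(nash_social_welfare([{0}, {1, 2}], vals, weights=[1.0, 1.0]))  # sqrt(3 * 6)
```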
  5. We study the sample complexity of learning revenue-optimal multi-item auctions. We obtain the first set of positive results that go beyond the standard but unrealistic setting of item-independence. In particular, we consider settings where bidders' valuations are drawn from correlated distributions that can be captured by Markov Random Fields or Bayesian Networks -- two of the most prominent graphical models. We establish parametrized sample complexity bounds for learning an up-to-ε optimal mechanism in both models, which scale polynomially in the size of the model, i.e., the number of items and bidders, and only exponentially in the natural complexity measure of the model, namely either the largest in-degree (for Bayesian Networks) or the size of the largest hyper-edge (for Markov Random Fields). We obtain our learnability results through a novel and modular framework that involves first proving a robustness theorem. We show that, given only "approximate distributions" for bidder valuations, we can learn a mechanism whose revenue is nearly optimal simultaneously for all "true distributions" that are close to the ones we were given in Prokhorov distance. Thus, to learn a good mechanism, it suffices to learn approximate distributions. When item values are independent, learning in Prokhorov distance is immediate, hence our framework directly implies the main result of Gonczarowski and Weinberg. When item values are sampled from more general graphical models, we combine our robustness theorem with novel sample complexity results for learning Markov Random Fields or Bayesian Networks in Prokhorov distance, which may be of independent interest. Finally, in the single-item case, our robustness result can be strengthened to hold under an even weaker distribution distance, the Lévy distance.
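For reference, the Prokhorov distance used above is the standard Lévy–Prokhorov metric on probability distributions over a metric space with metric d; the definition below is textbook material rather than anything specific to the paper.

```latex
\pi(P, Q) = \inf\bigl\{\varepsilon > 0 : P(A) \le Q(A^{\varepsilon}) + \varepsilon
  \ \text{and}\ Q(A) \le P(A^{\varepsilon}) + \varepsilon
  \ \text{for all Borel sets } A \bigr\},
\quad \text{where } A^{\varepsilon} := \{x : d(x, A) < \varepsilon\}.
```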