The assignment game, introduced by Shapley and Shubik [1971], is a classic model for two-sided matching markets between buyers and sellers. In the original assignment game, it is assumed that payments lead to transferable utility and that buyers have unit-demand valuations for the items being sold. There has since been substantial work studying various extensions of the assignment game. The first main direction of extension is to imperfectly transferable utility, in which frictions, taxes, or fees impede the transfer of money between agents. The second is to more complex valuation functions, in particular gross substitutes valuations, which describe substitutable goods. Multiple efficient algorithms have been proposed for computing a competitive equilibrium, the standard solution concept in assignment games, in each of these two settings. However, these lines of work have been mostly independent, with no algorithmic results combining the two. Our main result is an efficient algorithm for computing competitive equilibria in a setting encompassing both generalizations. We assume that sellers have multiple copies of each item. A buyer $$i$$'s quasi-linear utility is given by their gross substitutes valuation for the bundle $$S$$ of items they are assigned, minus the sum of the payments $$q_{ij}(p_j)$$ over the items $$j \in S$$, where $$p_j$$ is the price of item $$j$$ and $$q_{ij}$$ is piecewise linear and strictly increasing. Our algorithm combines procedures for matroid intersection problems with augmenting forest techniques from matching theory. We also show that in a mild generalization of our model without quasi-linear utilities, computing a competitive equilibrium is NP-hard.
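To make the utility model concrete, here is a minimal Python sketch of a buyer's quasi-linear utility under piecewise-linear, strictly increasing payment rules; `value_of`, `make_q`, and all parameter values are illustrative stand-ins, not part of the paper.

```python
def buyer_utility(value_of, bundle, prices, q):
    # Quasi-linear utility: gross-substitutes value of the assigned bundle
    # minus the payments q[j](p_j) for each item j in the bundle.
    return value_of(frozenset(bundle)) - sum(q[j](prices[j]) for j in bundle)

def make_q(threshold=10.0, fee=0.2):
    # One hypothetical piecewise-linear, strictly increasing payment rule:
    # money transfers 1:1 up to `threshold`, then incurs a 20% fee above it.
    return lambda p: p if p <= threshold else threshold + (1 + fee) * (p - threshold)
```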
Non-Quasi-Linear Agents in Quasi-Linear Mechanisms (Extended Abstract)
Mechanisms with money are commonly designed under the assumption that agents are quasi-linear, meaning they have linear disutility for spending money. We study the implications when agents with non-linear (specifically, convex) disutility for payments participate in mechanisms designed for quasi-linear agents. We first show that any mechanism that is truthful for quasi-linear buyers has a simple best response function for buyers with non-linear disutility from payments, in which each bidder simply scales down her value for each potential outcome by a fixed factor, equal to her target return on investment (ROI). We call such a strategy ROI-optimal. We prove the existence of a Nash equilibrium in which agents use ROI-optimal strategies for a general class of allocation problems. Motivated by online marketplaces, we then focus on simultaneous second-price auctions for additive bidders and show that all ROI-optimal equilibria in this setting achieve constant-factor approximations to suitable welfare and revenue benchmarks.
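As a concrete reading of the ROI-optimal strategy, the sketch below scales an additive bidder's item values by one fixed factor, as the abstract describes; the input format and names are hypothetical.

```python
def roi_optimal_bids(values, factor):
    # ROI-optimal best response (per the abstract): scale the value of every
    # potential outcome down by one fixed factor tied to the target ROI.
    return {outcome: v * factor for outcome, v in values.items()}

# An additive bidder in simultaneous second-price auctions with a factor of
# 0.5 would bid 5.0 and 2.0 here instead of her true values 10.0 and 4.0.
bids = roi_optimal_bids({"item_a": 10.0, "item_b": 4.0}, factor=0.5)
```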
- Award ID(s):
- 1909538
- PAR ID:
- 10248826
- Date Published:
- Journal Name:
- Leibniz international proceedings in informatics
- Volume:
- 185
- ISSN:
- 1868-8969
- Page Range / eLocation ID:
- 84:1-84:1
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- We study linear Fisher markets with satiation. In these markets, sellers have earning limits and buyers have utility limits. Beyond applications in economics, such markets arise in the context of maximizing Nash social welfare when allocating indivisible items to agents. In contrast to markets with either earning or utility limits alone, markets with both limits have not been studied before. They turn out to have fundamentally different properties: in general, the existence of competitive equilibria is not guaranteed. We identify a natural property of markets (termed money clearing) that implies existence. We show that the set of equilibria is not always convex, answering a question posed in the literature. We design an FPTAS to compute an approximate equilibrium and prove that the problem of computing an exact equilibrium lies in the complexity class continuous local search (CLS), i.e., the intersection of polynomial local search (PLS) and polynomial parity arguments on directed graphs (PPAD). For a constant number of buyers or goods, we give a polynomial-time algorithm to compute an exact equilibrium. We show how (approximate) equilibria can be rounded and provide the first constant-factor approximation algorithm (with a factor of 2.404) for maximizing Nash social welfare when agents have capped linear (also known as budget-additive) valuations. Finally, we significantly improve the approximation hardness for additive valuations to $$\sqrt{8/7} \approx 1.069$$. Funding: J. Garg was supported by the National Science Foundation [Grant CCF-1942321 (CAREER)]. M. Hoefer was supported by Deutsche Forschungsgemeinschaft [Grants Ho 3831/5-1, Ho 3831/6-1, and Ho 3831/7-1].
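To illustrate the objective behind the 2.404-approximation, here is a small Python sketch of capped linear (budget-additive) valuations and the Nash social welfare of an allocation; the data structures are illustrative only.

```python
import math

def capped_linear_value(utils, cap, bundle):
    # Budget-additive (capped linear) valuation: additive value, capped at `cap`.
    return min(cap, sum(utils[j] for j in bundle))

def nash_social_welfare(agents, alloc):
    # Geometric mean of the agents' valuations under allocation `alloc`;
    # `agents[i]` is a pair (utils_i, cap_i), `alloc[i]` the bundle of agent i.
    vals = [capped_linear_value(u, c, alloc[i]) for i, (u, c) in agents.items()]
    return math.prod(vals) ** (1.0 / len(vals))
```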
- We explore why many recently proposed robust estimation problems are efficiently solvable, even though the underlying optimization problems are non-convex. We study the loss landscape of these robust estimation problems and identify the existence of "generalized quasi-gradients". Whenever these quasi-gradients exist, a large family of no-regret algorithms is guaranteed to approximate the global minimum; this includes the commonly used filtering algorithm. For robust mean estimation of distributions under bounded covariance, we show that any first-order stationary point of the associated optimization problem is an approximate global minimum if and only if the corruption level $$\epsilon < 1/3$$. Consequently, any optimization algorithm that approaches a stationary point yields an efficient robust estimator with breakdown point $$1/3$$. With carefully designed initialization and step size, we improve this to $$1/2$$, which is optimal. For other tasks, including linear regression and joint mean and covariance estimation, the loss landscape is more rugged: there are stationary points arbitrarily far from the global minimum. Nevertheless, we show that generalized quasi-gradients exist and construct efficient algorithms. These algorithms are simpler than previous ones in the literature, and for linear regression we improve the estimation error from $$O(\sqrt{\epsilon})$$ to the optimal rate of $$O(\epsilon)$$ for small $$\epsilon$$, assuming certified hypercontractivity. For mean estimation with near-identity covariance, we show that a simple gradient descent algorithm achieves breakdown point $$1/3$$ and iteration complexity $$\tilde{O}(d/\epsilon^2)$$.
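Since the abstract singles out the filtering algorithm as a member of the no-regret family covered by quasi-gradients, here is a minimal numpy sketch of one common filtering variant for robust mean estimation under bounded covariance; the stopping threshold and removal rule are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def filter_mean(X, eps, sigma2=1.0, max_iters=100):
    # Filtering: while the empirical covariance has a large top eigenvalue,
    # remove the points that project most heavily onto the top eigenvector,
    # then return the mean of the surviving points.
    X = np.asarray(X, dtype=float)
    for _ in range(max_iters):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        top_val, top_vec = eigvals[-1], eigvecs[:, -1]
        if top_val <= 9 * sigma2:        # illustrative stopping threshold
            break
        scores = ((X - mu) @ top_vec) ** 2
        k = max(1, int(eps * len(X)))    # drop an eps-fraction of outliers
        X = X[np.argsort(scores)[:-k]]
    return X.mean(axis=0)
```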
- We study the problem of fairly and efficiently allocating indivisible chores among agents with additive disutility functions. We consider the widely used envy-based fairness properties of EF1 and EFX in conjunction with the efficiency property of fractional Pareto-optimality (fPO). Existence (and computation) of allocations that are simultaneously EF1/EFX and fPO are challenging open problems, and we make progress on both. We show the existence of an allocation that is (i) EF1 + fPO when there are three agents, (ii) EF1 + fPO when there are at most two distinct disutility functions, and (iii) EFX + fPO for three agents with bivalued disutility functions. These results are constructive, based on strongly polynomial-time algorithms. We also investigate non-existence and show that an allocation that is EFX + fPO need not exist, even for two agents.
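For concreteness, the following sketch checks the EF1 condition for chores with additive disutilities (envy must vanish after removing some chore from the envious agent's own bundle); the input format is a hypothetical one.

```python
def is_ef1_chores(disutil, alloc):
    # disutil[i][c]: agent i's disutility for chore c; alloc[i]: i's bundle.
    n = len(alloc)
    for i in range(n):
        own = sum(disutil[i][c] for c in alloc[i])
        # Best case for agent i: drop her own most burdensome chore.
        heaviest = max((disutil[i][c] for c in alloc[i]), default=0.0)
        for j in range(n):
            if i != j and own - heaviest > sum(disutil[i][c] for c in alloc[j]):
                return False  # i still envies j after dropping one chore
    return True
```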
- In this paper, we introduce a quasi-Newton method optimized for efficiently solving quasi-linear elliptic equations and systems, with a specific focus on GPU-based computation. By approximating the Jacobian matrix with a combination of the linear Laplacian and simplified nonlinear terms, our method reduces the computational overhead typical of traditional Newton methods while handling the large, sparse matrices generated from discretized PDEs. We also provide a convergence analysis demonstrating local convergence to the exact solution under optimal choices for the regularization parameter, ensuring stability and efficiency in each iteration. Numerical experiments in two- and three-dimensional domains validate the proposed method's robustness and computational gains with a tensor-product implementation. This approach offers a promising pathway for accelerating solvers for quasi-linear elliptic equations and systems, expanding the feasibility of complex simulations in physics, engineering, and other fields leveraging advanced hardware capabilities.
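As a toy illustration of the frozen-Jacobian idea (linear Laplacian plus a simplified nonlinear term), the sketch below runs such an iteration for the 1-D model problem $$-u'' + u^3 = f$$; the grid, right-hand side, and parameter `alpha` are illustrative assumptions, and this is a minimal stand-in rather than the paper's GPU implementation.

```python
import numpy as np

n, h = 99, 1.0 / 100           # interior grid points on (0, 1)
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)          # hypothetical right-hand side

# Second-difference Laplacian with homogeneous Dirichlet conditions.
L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# Frozen approximate Jacobian: linear Laplacian plus a simplified stand-in
# (alpha * I) for the true nonlinear term 3 * diag(u^2); built only once.
alpha = 1.0
A = L + alpha * np.eye(n)

u = np.zeros(n)
for _ in range(50):
    residual = L @ u + u**3 - f    # residual of -u'' + u^3 - f
    step = np.linalg.solve(A, residual)
    u -= step
    if np.linalg.norm(step) < 1e-10:
        break
```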