Consider a principal who wants to search through a space of stochastic solutions for one that maximizes their utility. If the principal cannot conduct this search on their own, they may instead delegate the problem to an agent whose utilities are distinct and potentially misaligned. This is called delegated search, and the principal then faces a mechanism design problem: they must incentivize the agent to find and propose a solution maximizing the principal's expected utility. Following prior work in this area, we consider mechanisms without payments and aim to achieve a multiplicative approximation of the utility the principal would obtain by solving the problem without delegation. In this work, we investigate a natural and recently studied generalization of this model to multiple agents and find nearly tight bounds on the principal's approximation as the number of agents increases. As one might expect, this approximation approaches 1 as the number of agents grows, but, somewhat surprisingly, we show that this is largely not due to direct competition among agents.
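As a rough illustration of this multi-agent model (not taken from the paper), the following Monte Carlo sketch assumes each of k agents privately samples candidate solutions with independent uniform principal- and agent-utilities, the principal announces a single acceptance threshold, and each agent proposes its own favorite eligible solution. All names and parameters here are illustrative.

```python
# Minimal simulation sketch of multi-agent delegated search (illustrative only;
# the model details below are assumptions, not the paper's construction).
import random

def simulate(k_agents=3, n_solutions=20, threshold=0.8, trials=10_000):
    delegated, benchmark = 0.0, 0.0
    for _ in range(trials):
        # Each solution is a (principal_utility, agent_utility) pair.
        pools = [[(random.random(), random.random()) for _ in range(n_solutions)]
                 for _ in range(k_agents)]
        all_solutions = [s for pool in pools for s in pool]
        # Non-delegated benchmark: the principal searches everything themselves.
        benchmark += max(p for p, _ in all_solutions)
        # Delegation: each agent proposes its favorite *eligible* solution
        # (principal utility >= threshold); the principal keeps the best proposal.
        proposals = []
        for pool in pools:
            eligible = [s for s in pool if s[0] >= threshold]
            if eligible:
                proposals.append(max(eligible, key=lambda s: s[1]))
        delegated += max((p for p, _ in proposals), default=0.0)
    return delegated / benchmark

if __name__ == "__main__":
    for k in (1, 2, 4, 8):
        print(f"k={k}: empirical ratio to the non-delegated benchmark "
              f"≈ {simulate(k_agents=k):.3f}")
```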
Delegated Search Approximates Efficient Search
There are many settings in which a principal performs a task by delegating it to an agent, who searches over possible solutions and proposes one to the principal. This describes many aspects of the workflow within organizations, as well as many of the activities undertaken by regulatory bodies, who often obtain relevant information from the parties being regulated through a process of delegation. A fundamental tension underlying delegation is the fact that the agent's interests will typically differ -- potentially significantly -- from the interests of the principal, and as a result the agent may propose solutions based on their own incentives that are inefficient for the principal. A basic problem, therefore, is to design mechanisms by which the principal can constrain the set of proposals they are willing to accept from the agent, to ensure a certain level of quality for the principal from the proposed solution. In this work, we investigate how much the principal loses -- quantitatively, in terms of the objective they are trying to optimize -- when they delegate to an agent. We develop a methodology for bounding this loss of efficiency, and show that in a very general model of delegation, there is a family of mechanisms achieving a universal bound on the ratio between the quality of the solution obtained through delegation and the quality the principal could obtain in an idealized benchmark where they searched for a solution themselves. Moreover, it is possible to achieve such bounds through mechanisms with a natural threshold structure, which are thus structurally simpler than the optimal mechanisms typically considered in the literature on delegation. At the heart of our framework is an unexpected connection between delegation and the analysis of prophet inequalities, which we leverage to provide bounds on the behavior of our delegation mechanisms.
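The prophet-inequality connection mentioned above can be illustrated with the classic single-threshold rule. The sketch below is not the paper's construction; it only demonstrates the standard fact that, for independent nonnegative values, committing to the threshold T = E[max]/2 and stopping at the first value above T recovers at least half of E[max] in expectation. In the delegation reading suggested by the abstract, such a threshold would correspond to the principal accepting any proposal whose quality clears T. The instance and distributions below are hypothetical.

```python
# Toy single-threshold prophet inequality demo (not the paper's mechanism).
import random

def expected_max(dists, trials=50_000):
    # Estimate E[max] over the given value distributions by sampling.
    return sum(max(d() for d in dists) for _ in range(trials)) / trials

def single_threshold_value(dists, threshold, trials=50_000):
    # Inspect values in a fixed order and accept the first one above the threshold.
    total = 0.0
    for _ in range(trials):
        for d in dists:
            x = d()
            if x >= threshold:
                total += x
                break
    return total / trials

if __name__ == "__main__":
    # Hypothetical instance: five exponential "solution values" with varying rates.
    dists = [lambda rate=r: random.expovariate(rate) for r in (0.5, 1.0, 1.5, 2.0, 3.0)]
    e_max = expected_max(dists)
    t = e_max / 2
    achieved = single_threshold_value(dists, t)
    print(f"E[max] ≈ {e_max:.3f}, threshold T = E[max]/2 ≈ {t:.3f}")
    print(f"single-threshold value ≈ {achieved:.3f} (guarantee: ≥ {e_max / 2:.3f})")
```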
- Award ID(s): 1741441
- PAR ID: 10075257
- Date Published:
- Journal Name: Proceedings of the 2018 ACM Conference on Economics and Computation
- Page Range / eLocation ID: 287--302
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We present a study of a repeated delegated choice problem, the first to consider an online learning variant of the model of Kleinberg and Kleinberg (EC'18). In this model, a principal interacts repeatedly with an agent who holds an exogenous set of solutions, from which efficient ones must be found. Each solution can yield different utilities for the principal and the agent, so the agent may selfishly propose the solution that maximizes its own utility. To mitigate this behavior, the principal announces an eligible set, which screens out some of the solutions. The principal, however, has no advance information about the distribution of solutions or their number, and therefore dynamically announces different eligible sets in order to learn the distribution efficiently. The principal's objective is to minimize cumulative regret relative to the optimal eligible set in hindsight. We explore two dimensions of the problem setup: whether the agent behaves myopically or strategizes across rounds, and whether the solutions yield deterministic or stochastic utility. We obtain sublinear regret upper bounds in various regimes and derive corresponding lower bounds that imply the tightness of our results. Overall, we connect a well-known problem in economics to the evolving area of online learning and present a comprehensive study of the resulting problem. (A rough, hypothetical sketch of this online setting appears after this list.)
- Woodruff, D. (Ed.) This paper studies delegation in a model of discrete choice. In the delegation problem, an uninformed principal must consult an informed agent to make a decision. Both the agent and the principal have preferences over the chosen action that vary with the state of the world and may not be aligned. The principal may commit to a mechanism, which maps reports of the agent to actions. When this mechanism is deterministic, it can take the form of a menu of actions, from which the agent simply chooses upon observing the state; in this case, the principal is said to have delegated the choice of action to the agent. We consider a setting where the decision being delegated is the choice of a utility-maximizing action from a set of several options. We assume the shared portion of the agent's and principal's utilities is drawn from a distribution known to the principal, and that utility misalignment takes the form of a known bias for or against each action. We provide tight approximation analyses for simple threshold policies under three increasingly general sets of assumptions. With independently distributed utilities, we prove a 3-approximation. When the agent has an outside option that the principal cannot rule out, the constant-factor guarantee no longer holds, but we prove a (log ρ / log log ρ)-approximation, where ρ is the ratio of the maximum value to the optimal utility. We also give a weaker but tight bound that holds for correlated values, and complement our upper bounds with hardness results. One special case of our model is utility-based assortment optimization, for which our results are new. (A hedged sketch of a simple threshold policy in this setting appears after this list.)
- Goal management in autonomous agents has long been a problem of interest. Solving an agent's goal management problem requires multiple goal operations, such as goal selection, change, formulation, delegation, and monitoring. Researchers from many fields have developed solution approaches with an implicit or explicit focus on goal operations, for example scheduling the agent's goals, performing cost-benefit analysis to select and organize goals, and formulating goals in unexpected situations. However, none of these approaches explicitly addresses how the agent should respond when multiple goal operations occur simultaneously. This paper develops an algorithm for agent goal management when multiple goal operations co-occur and shows how handling such interactions can improve agent goal management in different domains.
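The two sketches below are illustrative only and are not taken from the papers above. The first gives a rough reading of the repeated delegated choice item: an epsilon-greedy rule that learns a threshold-style eligible set against a myopic agent. The agent's fixed pool of solutions, the threshold grid, and all parameters are assumptions made for the example; an eligible set is taken to be "all solutions with principal utility at least tau".

```python
# Illustrative sketch only (not the paper's algorithm): learning a threshold
# eligible set against a myopic agent with an epsilon-greedy rule.
import random

def agent_proposal(pool, tau):
    """A myopic agent proposes its own favorite solution among eligible ones."""
    eligible = [s for s in pool if s[0] >= tau]
    return max(eligible, key=lambda s: s[1]) if eligible else None

def learn_threshold(pool, rounds=5_000, eps=0.1, grid_size=20):
    grid = [i / grid_size for i in range(grid_size + 1)]   # candidate thresholds
    counts = [0] * len(grid)
    sums = [0.0] * len(grid)
    total_utility = 0.0
    for _ in range(rounds):
        if random.random() < eps:                          # explore a random threshold
            i = random.randrange(len(grid))
        else:                                              # exploit the best-looking one
            i = max(range(len(grid)),
                    key=lambda j: sums[j] / counts[j] if counts[j] else float("inf"))
        proposal = agent_proposal(pool, grid[i])
        reward = proposal[0] if proposal else 0.0          # principal's realized utility
        counts[i] += 1
        sums[i] += reward
        total_utility += reward
    best = max(range(len(grid)), key=lambda j: sums[j] / counts[j] if counts[j] else 0.0)
    return grid[best], total_utility / rounds

if __name__ == "__main__":
    pool = [(random.random(), random.random()) for _ in range(30)]
    tau, avg = learn_threshold(pool)
    print(f"learned threshold ≈ {tau:.2f}, average per-round utility ≈ {avg:.3f}")
```

The second sketch gives one hedged reading of a simple threshold policy for the discrete-choice delegation item; the exact policy class and model details in that paper may differ. It assumes each action has a shared value drawn uniformly at random and a known agent bias, the principal permits actions whose realized shared value clears a threshold, and the agent picks its favorite permitted action, with an outside option worth zero to the principal.

```python
# Hedged sketch of a threshold policy in a discrete-choice delegation setting
# (assumed model, not necessarily the paper's).
import random

def one_round(biases, tau):
    values = [random.random() for _ in biases]             # realized shared values
    permitted = [i for i, v in enumerate(values) if v >= tau]
    if not permitted:
        return 0.0, max(values)                            # agent takes the outside option
    choice = max(permitted, key=lambda i: values[i] + biases[i])
    return values[choice], max(values)                     # (delegated, first-best)

def evaluate(biases, tau, trials=50_000):
    got = best = 0.0
    for _ in range(trials):
        g, b = one_round(biases, tau)
        got += g
        best += b
    return got / best

if __name__ == "__main__":
    biases = [0.4, 0.1, -0.2, 0.3]                         # hypothetical per-action biases
    for tau in (0.0, 0.5, 0.7, 0.9):
        print(f"tau={tau:.1f}: fraction of first-best ≈ {evaluate(biases, tau):.3f}")
```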

