Simple Delegated Choice
This paper studies delegation in a model of discrete choice. In the delegation problem, an uninformed principal must consult an informed agent to make a decision. Both the agent and the principal have preferences over the chosen action that vary with the state of the world and that may not be aligned. The principal may commit to a mechanism, which maps the agent's reports to actions. When this mechanism is deterministic, it can take the form of a menu of actions from which the agent simply chooses upon observing the state; in this case, the principal is said to have delegated the choice of action to the agent. We consider a setting where the decision being delegated is the choice of a utility-maximizing action from a set of several options. We assume the shared portion of the agent's and principal's utilities is drawn from a distribution known to the principal, and that utility misalignment takes the form of a known bias for or against each action. We provide tight approximation analyses for simple threshold policies under three increasingly general sets of assumptions. With independently distributed utilities, we prove a 3-approximation. When the agent has an outside option the principal cannot rule out, the constant approximation no longer holds, but we prove a (log ρ / log log ρ)-approximation, where ρ is the ratio of the maximum value to the optimal utility. We also give a weaker but tight bound that holds for correlated values, and we complement our upper bounds with hardness results. One special case of our model is utility-based assortment optimization, for which our results are new.
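To make the setting concrete, below is a minimal Monte Carlo sketch of one plausible reading of the model: each action has a shared value drawn independently from a known distribution plus a known agent bias, a "threshold policy" is read as letting the agent select any action whose realized value is at least τ, and the agent falls back to a null outcome worth zero to both parties when no eligible action appeals to them. These modeling details, and all names in the code, are illustrative assumptions rather than the paper's formal definitions.

```python
import random

# Illustrative sketch only: the distributions, the exact meaning of a
# "threshold policy", and the zero-utility fallback are assumptions.

def simulate_threshold_policy(biases, sample_value, tau, trials=100_000, seed=0):
    """Estimate the principal's expected utility under a single threshold tau.

    biases[i]    -- agent's known bias for action i (added to the shared value)
    sample_value -- function (i, rng) -> one draw of action i's shared value
    tau          -- the agent may only select actions with value >= tau
    """
    rng = random.Random(seed)
    principal_total = first_best_total = 0.0
    for _ in range(trials):
        values = [sample_value(i, rng) for i in range(len(biases))]
        first_best_total += max(values)  # what a fully informed principal gets
        # Agent maximizes value + bias over eligible actions, with a null
        # outcome worth 0 to both parties as the fallback.
        best_agent, principal_take = 0.0, 0.0
        for i, x in enumerate(values):
            if x >= tau and x + biases[i] > best_agent:
                best_agent, principal_take = x + biases[i], x
        principal_total += principal_take
    return principal_total / trials, first_best_total / trials

if __name__ == "__main__":
    # Toy instance: three actions with Uniform[0, 1] values and mixed biases.
    biases = [0.0, 0.4, -0.2]
    delegated, first_best = simulate_threshold_policy(
        biases, lambda i, rng: rng.random(), tau=0.5
    )
    print(f"threshold policy: {delegated:.3f}, first best: {first_best:.3f}")
```

Sweeping τ and keeping the best value mimics the class of simple policies analyzed in the paper; the stated guarantees (such as the 3-approximation for independent values) are proved analytically, not by simulation.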
- Award ID(s): 2218814
- PAR ID: 10548711
- Editor(s): Woodruff, D
- Publisher / Repository: SIAM
- Date Published:
- Journal Name: LivingOnTrack
- ISSN: 1555-9467
- Format(s): Medium: X
- Location: Alexandria, Virginia
- Sponsoring Org: National Science Foundation
More Like this
- Consider a principal who wants to search through a space of stochastic solutions for one maximizing their utility. If the principal cannot conduct this search on their own, they may instead delegate this problem to an agent with distinct and potentially misaligned utilities. This is called delegated search, and the principal in such problems faces a mechanism design problem in which they must incentivize the agent to find and propose a solution maximizing the principal's expected utility. Following prior work in this area, we consider mechanisms without payments and aim to achieve a multiplicative approximation of the principal's utility when they solve the problem without delegation. In this work, we investigate a natural and recently studied generalization of this model to multiple agents and find nearly tight bounds on the principal's approximation as the number of agents increases. As one might expect, this approximation approaches 1 with increasing numbers of agents, but, somewhat surprisingly, we show that this is largely not due to direct competition among agents.
- Pennock, David M.; Segal, Ilya; Seuken, Sven (Eds.) We consider the problem of planning with participation constraints introduced in [24]. In this problem, a principal chooses actions in a Markov decision process, resulting in separate utilities for the principal and the agent. However, the agent can and will choose to end the process whenever his expected onward utility becomes negative. The principal seeks to compute and commit to a policy that maximizes her expected utility, under the constraint that the agent should always want to continue participating. We provide the first polynomial-time exact algorithm for this problem in finite-horizon settings, where previously only an additive ε-approximation algorithm was known. Our approach can also be extended to the (discounted) infinite-horizon case, for which we give an algorithm that runs in time polynomial in the size of the input and log(1/ε) and returns a policy that is optimal up to an additive error of ε. (A toy check of this participation constraint appears in the sketch after this list.)
- Guruswami, Venkatesan (Ed.) We study two recent combinatorial contract design models, which highlight different sources of complexity that may arise in contract design, where a principal delegates the execution of a costly project to others. In both settings, the principal cannot observe the choices of the agent(s), only the project's outcome (success or failure), and incentivizes the agent(s) using a contract, a payment scheme that specifies the payment to the agent(s) upon a project's success. We present results that resolve open problems and advance our understanding of the computational complexity of both settings. In the multi-agent setting, the project is delegated to a team of agents, where each agent chooses whether or not to exert effort. A success probability function maps any subset of agents who exert effort to a probability of the project's success. For the family of submodular success probability functions, Dütting et al. [2023] established a poly-time constant-factor approximation to the optimal contract and left open whether this problem admits a PTAS. We answer this question in the negative, showing that no poly-time algorithm guarantees better than a 0.7-approximation to the optimal contract. For XOS functions, they give a poly-time constant approximation with value and demand queries. We show that with value queries only, one cannot get any constant approximation. In the multi-action setting, the project is delegated to a single agent, who can take any subset of a given set of actions. Here, a success probability function maps any subset of actions to a probability of the project's success. Dütting et al. [2021a] showed a poly-time algorithm for computing an optimal contract for gross substitutes success probability functions and showed that the problem is NP-hard for submodular functions. We further strengthen this hardness result by showing that this problem does not admit any constant-factor approximation. Furthermore, for the broader class of XOS functions, we establish the hardness of obtaining an n^{-1/2+ε}-approximation for any ε > 0.
- We study a variant of the principal-agent problem in which the principal does not directly observe the outcomes; rather, she gets a signal related to the agent's action, according to a variable information structure. We provide simple necessary and sufficient conditions for implementability of an action and a utility profile by some information structure and the corresponding optimal contract, for a risk-neutral or risk-averse agent, with or without the limited liability assumption. It turns out that the set of implementable utility profiles is characterized by simple thresholds on the utilities.
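As referenced in the second item of the list above (planning with participation constraints), the following toy sketch spells out the constraint itself: for a fixed principal policy in a finite-horizon MDP with separate principal and agent rewards, it computes the agent's onward utility by backward induction and checks that it never goes negative. It only verifies the constraint for a given policy; it is not that paper's polynomial-time optimization algorithm, and every name in it is hypothetical.

```python
def agent_onward_values(horizon, states, policy, transition, agent_reward):
    """V[t][s]: agent's expected onward utility from state s at time t under
    `policy`, assuming the agent keeps participating until the horizon.

    policy[t][s]       -- action the principal's policy prescribes at (t, s)
    transition(s, a)   -- dict mapping next states to probabilities
    agent_reward(s, a) -- agent's immediate utility
    """
    V = [{s: 0.0 for s in states} for _ in range(horizon + 1)]
    for t in range(horizon - 1, -1, -1):
        for s in states:
            a = policy[t][s]
            V[t][s] = agent_reward(s, a) + sum(
                p * V[t + 1][s_next] for s_next, p in transition(s, a).items()
            )
    return V

def satisfies_participation(V):
    """The agent never prefers to quit if every onward value is nonnegative."""
    return all(v >= 0 for layer in V for v in layer.values())
```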