Title: Information Design in the Principal-Agent Problem
We study a variant of the principal-agent problem in which the principal does not directly observe the outcomes; rather, she gets a signal related to the agent's action, according to a variable information structure. We provide simple necessary and sufficient conditions for implementability of an action and a utility profile by some information structure and the corresponding optimal contract, for a risk-neutral or risk-averse agent, with or without the limited liability assumption. It turns out that the set of implementable utility profiles is characterized by simple thresholds on the utilities.
Award ID(s):
2303372
PAR ID:
10532235
Publisher / Repository:
Proceedings of the 25th ACM Conference on Economics and Computation, EC 2024
Sponsoring Org:
National Science Foundation
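As a rough, purely hypothetical illustration of the kind of threshold characterization described in the abstract above, the sketch below checks a candidate utility pair against generic bounds: a feasibility cap from the action's surplus, an individual-rationality-style floor, and a limited-liability floor. Every bound here is a placeholder assumption for illustration; the actual thresholds are the paper's result and are not reproduced.

# Hypothetical threshold check for a (principal, agent) utility pair.
# All bounds below are illustrative placeholders, not the paper's conditions.

def implementable(u_principal: float, u_agent: float, *,
                  action_surplus: float,
                  reservation_utility: float = 0.0,
                  limited_liability: bool = True,
                  liability_floor: float = 0.0) -> bool:
    """Check a utility profile against simple (assumed) threshold conditions."""
    # Feasibility: the profile cannot exceed the surplus the action generates.
    if u_principal + u_agent > action_surplus:
        return False
    # IR-style threshold: the agent must weakly prefer participating.
    if u_agent < reservation_utility:
        return False
    # Limited liability typically raises the floor on the agent's utility.
    if limited_liability and u_agent < liability_floor:
        return False
    return True

print(implementable(2.0, 0.5, action_surplus=3.0))   # True under these placeholders
print(implementable(2.5, 1.0, action_surplus=3.0))   # False: exceeds total surplus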
More Like this
  1. This paper studies delegation in a model of discrete choice. In the delegation problem, an uninformed principal must consult an informed agent to make a decision. Both the agent and the principal have preferences over the decided-upon action, which vary with the state of the world and which may not be aligned. The principal may commit to a mechanism, which maps reports of the agent to actions. When this mechanism is deterministic, it can take the form of a menu of actions, from which the agent simply chooses upon observing the state. In this case, the principal is said to have delegated the choice of action to the agent. We consider a setting where the decision being delegated is a choice of a utility-maximizing action from a set of several options. We assume the shared portion of the agent's and principal's utilities is drawn from a distribution known to the principal, and that utility misalignment takes the form of a known bias for or against each action. We provide tight approximation analyses for simple threshold policies under three increasingly general sets of assumptions. With independently distributed utilities, we prove a 3-approximation. When the agent has an outside option that the principal cannot rule out, the constant approximation fails, but we prove a (log ρ / log log ρ)-approximation, where ρ is the ratio of the maximum value to the optimal utility. We also give a weaker but tight bound that holds for correlated values, and complement our upper bounds with hardness results. One special case of our model is utility-based assortment optimization, for which our results are new.
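To make the threshold-policy idea in the abstract above concrete, here is a small Monte Carlo sketch under an assumed toy model: iid exponential shared values, known per-action biases enjoyed only by the agent, and a principal who collects the shared value of whatever the agent picks. It is not the paper's model or its approximation analysis, just an illustration of how a bias-threshold menu trades off against full delegation.

# Toy simulation of threshold delegation menus (assumed model, not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_samples = 6, 20_000
bias = np.linspace(0.0, 3.0, n_actions)          # assumed per-action biases, known to the principal

def principal_value(menu_mask):
    """Average principal utility when the agent chooses only from the masked menu."""
    x = rng.exponential(1.0, size=(n_samples, n_actions))   # iid shared values (assumed Exp(1))
    agent_util = np.where(menu_mask, x + bias, -np.inf)     # agent only considers menu actions
    choice = agent_util.argmax(axis=1)                       # agent best-responds to its bias
    return x[np.arange(n_samples), choice].mean()            # principal keeps the shared value

full_menu = np.ones(n_actions, dtype=bool)
print("full delegation   :", round(principal_value(full_menu), 3))
for t in [0.5, 1.0, 2.0, 3.0]:
    menu = bias <= t                                          # simple threshold policy on the bias
    print(f"bias threshold {t:>3}:", round(principal_value(menu), 3))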
  2. We consider a game in which one player (the principal) seeks to incentivize another player (the agent) to exert effort that is costly to the agent. Any effort exerted leads to an outcome that is a stochastic function of the effort. The amount of effort exerted by the agent is private information for the agent, and the principal observes only the outcome; thus, the agent can misreport his effort to gain higher payment. Further, the cost function of the agent is also unknown to the principal, and the agent can misreport a higher cost function to gain higher payment for the same effort. We pose the problem as one of contract design when both adverse selection and moral hazard are present. We show that if the principal and agent interact only finitely many times, it is always possible for the agent to lie due to the asymmetric information pattern and claim a higher payment than if he were unable to lie. However, if the principal and agent interact infinitely many times, then the principal can utilize the observed outcomes to update the contract in a manner that reveals the private cost function of the agent and hence leads to the agent not being able to derive any rent. The result can also be interpreted as saying that the agent is unable to keep his information private if he interacts with the principal sufficiently often.
  3. We consider information design in spatial resource competition, motivated by ride sharing platforms sharing information with drivers about rider demand. Each of N co-located agents (drivers) decides whether to move to another location with an uncertain and possibly higher resource level (rider demand), where the utility for moving increases in the resource level and decreases in the number of other agents that move. A principal who can observe the resource level wishes to share this information in a way that ensures a welfare-maximizing number of agents move. Analyzing the principal's information design problem using the Bayesian persuasion framework, we study both private signaling mechanisms, where the principal sends personalized signals to each agent, and public signaling mechanisms, where the principal sends the same information to all agents. We show: 1) For private signaling, computing the optimal mechanism using the standard approach leads to a linear program with 2^N variables, rendering the computation challenging. We instead describe a computationally efficient two-step approach to finding the optimal private signaling mechanism. First, we perform a change of variables to solve a linear program with O(N^2) variables that provides the marginal probabilities of recommending each agent move. Second, we describe an efficient sampling procedure over sets of agents consistent with these optimal marginal probabilities; the optimal private mechanism then asks the sampled set of agents to move and the rest to stay. 2) For public signaling, we first show the welfare-maximizing equilibrium given any common belief has a threshold structure. Using this, we show that the optimal public mechanism with respect to the sender-preferred equilibrium can be computed in polynomial time. 3) We support our analytical results with numerical computations that show the optimal private and public signaling mechanisms achieve substantially higher social welfare when compared with no-information and full-information benchmarks.
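The private-signaling approach above ends with a sampling step: draw a set of agents consistent with the marginal move-probabilities produced by the O(N^2)-variable linear program. One standard way to sample a set matching given marginals is systematic sampling, sketched below with made-up marginals standing in for the LP solution; it is not necessarily the paper's exact procedure, but it preserves each marginal exactly and keeps the realized set size within one of the marginals' sum.

# Marginal-preserving set sampling via systematic sampling (illustrative, with
# placeholder marginals in place of the LP solution described in the abstract).
import numpy as np

def sample_set(marginals, rng):
    """Sample a subset of agents so that agent i is included with probability marginals[i].

    Systematic sampling: shift the cumulative sums by one uniform draw and select
    agent i whenever an integer lands in its interval; the realized set size is
    always the floor or ceiling of the marginals' sum.
    """
    cum = np.concatenate(([0.0], np.cumsum(marginals)))
    u = rng.uniform()
    return np.floor(cum[1:] - u) > np.floor(cum[:-1] - u)

rng = np.random.default_rng(1)
p = np.array([0.9, 0.6, 0.5, 0.3, 0.2])       # placeholder marginals standing in for an LP solution
draws = np.array([sample_set(p, rng) for _ in range(100_000)])
print("empirical inclusion rates:", draws.mean(axis=0).round(3))   # close to p
print("realized set sizes       :", np.unique(draws.sum(axis=1)))  # only 2 or 3, since sum(p) = 2.5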
  4. Pennock, David M.; Segal, Ilya; Seuken, Sven (Eds.)
    We consider the problem of planning with participation constraints introduced in [24]. In this problem, a principal chooses actions in a Markov decision process, resulting in separate utilities for the principal and the agent. However, the agent can and will choose to end the process whenever his expected onward utility becomes negative. The principal seeks to compute and commit to a policy that maximizes her expected utility, under the constraint that the agent should always want to continue participating. We provide the first polynomial-time exact algorithm for this problem for finite-horizon settings, where previously only an additive ε-approximation algorithm was known. Our approach can also be extended to the (discounted) infinite-horizon case, for which we give an algorithm that runs in time polynomial in the size of the input and log(1/ε), and returns a policy that is optimal up to an additive error of ε. 
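As a small illustration of the participation constraint itself (not the paper's algorithm, which optimizes over principal policies), the sketch below takes a fixed policy in a randomly generated toy finite-horizon MDP, computes the agent's expected onward utility by backward induction, and checks that it never goes negative.

# Checking the participation constraint for a *fixed* policy in a toy MDP
# (illustrative only; the paper's contribution is optimizing over such policies).
import numpy as np

H, S, A = 4, 3, 2                                    # horizon, states, actions (toy sizes)
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(S), size=(S, A))           # P[s, a] = next-state distribution
r_agent = rng.normal(0.0, 1.0, size=(S, A))          # agent's per-step utility (toy values)
policy = rng.integers(0, A, size=(H, S))             # some fixed principal policy pi_t(s)

def agent_onward_utility(policy):
    """V[t, s] = agent's expected utility from step t onward under the fixed policy."""
    V = np.zeros((H + 1, S))
    for t in range(H - 1, -1, -1):
        for s in range(S):
            a = policy[t, s]
            V[t, s] = r_agent[s, a] + P[s, a] @ V[t + 1]
    return V[:H]

V = agent_onward_utility(policy)
# Participation constraint: the agent should never want to quit, i.e. onward utility
# stays nonnegative; this checks all (t, s), a conservative version of "every state
# the policy can actually reach".
print("participation constraint holds:", bool((V >= -1e-12).all()))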