Title: Too Good to Be True? Retention Rules for Noisy Agents
An agent who privately knows his type seeks to be retained by a principal. The agent signals his type with some ambient noise, but can alter this noise, perhaps at some cost. Our main finding is that in equilibrium, the principal treats extreme signals in either direction with suspicion, and retains the agent if and only if the signal falls in some intermediate bounded set. In short, she follows the maxim: “if it seems too good to be true, it probably is.” We consider extensions and applications, including non-normal signal structures, dynamics with term limits, risky portfolio management, and political risk-taking. (JEL D72, D82, G11, G41)
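As a purely hypothetical numerical illustration of the “too good to be true” logic (the parameters and the specific two-type setup below are made up, not taken from the paper): suppose the high type's signal is normal with modest variance, while the low type inflates the ambient noise and so produces a fatter-tailed signal. The posterior probability of the high type then exceeds any retention cutoff only on a bounded interval, so extreme signals in either direction lead to dismissal.

```python
import numpy as np
from scipy.stats import norm

# Illustrative (made-up) parameters: the high type's signal is N(mu_H, sig_H^2);
# the low type inflates the ambient noise, so his signal is N(mu_L, sig_L^2) with sig_L > sig_H.
mu_H, sig_H = 1.0, 1.0
mu_L, sig_L = 0.0, 3.0
prior_H = 0.5
cutoff = 0.5                      # retain iff posterior probability of the high type >= cutoff

s = np.linspace(-12, 12, 4801)    # grid of signal realizations
dens_H = norm.pdf(s, mu_H, sig_H)
dens_L = norm.pdf(s, mu_L, sig_L)
post_H = prior_H * dens_H / (prior_H * dens_H + (1 - prior_H) * dens_L)

retained = s[post_H >= cutoff]
print(f"retention region roughly [{retained.min():.2f}, {retained.max():.2f}]")
# Because the low type's inflated-noise density has fatter tails, extreme signals in
# either direction point to the low type, so the retention region is a bounded interval.
```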
Award ID(s): 1851758
PAR ID: 10503301
Author(s) / Creator(s):
Publisher / Repository: American Economic Association
Date Published:
Journal Name: American Economic Journal: Microeconomics
Volume: 15
Issue: 2
ISSN: 1945-7669
Page Range / eLocation ID: 493-535
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We consider a game in which one player (the principal) seeks to incentivize another player (the agent) to exert effort that is costly to the agent. Any effort exerted leads to an outcome that is a stochastic function of the effort. The effort exerted is private information of the agent, and the principal observes only the outcome; thus, the agent can misreport his effort to claim a higher payment. Further, the agent's cost function is also unknown to the principal, so the agent can likewise misreport a higher cost function to obtain a higher payment for the same effort. We pose the problem as one of contract design when both adverse selection and moral hazard are present. We show that if the principal and agent interact only finitely many times, the asymmetric information pattern always makes it possible for the agent to lie and claim a higher payment than if he were unable to lie. However, if the principal and agent interact infinitely many times, the principal can use the observed outcomes to update the contract in a manner that reveals the agent's private cost function, so the agent is unable to derive any rent. The result can also be interpreted as saying that the agent cannot keep his information private if he interacts with the principal sufficiently often.
  2. We study a variant of the principal-agent problem in which the principal does not directly observe the outcomes; rather, she receives a signal related to the agent's action, according to a variable information structure. We provide simple necessary and sufficient conditions for implementability of an action and a utility profile by some information structure and the corresponding optimal contract, for a risk-neutral or risk-averse agent, with or without the limited liability assumption. It turns out that the set of implementable utility profiles is characterized by simple thresholds on the utilities.
  3. Motivated by markets for “expertise,” we study a bandit model where a principal chooses between a safe and a risky arm. A strategic agent controls the risky arm and privately knows whether its type is high or low. Irrespective of type, the agent wants to maximize the duration of experimentation with the risky arm. However, only the high-type arm can generate value for the principal. Our main insight is that reputational incentives can be exceedingly strong unless both players coordinate on maximally inefficient strategies on path. We discuss implications for online content markets, term limits for politicians, and experts in organizations.
  4. Principal Component Analysis (PCA) is a standard dimensionality reduction technique, but it treats all samples uniformly, making it suboptimal for the heterogeneous data that are increasingly common in modern settings. This paper proposes a PCA variant for samples with heterogeneous noise levels, i.e., heteroscedastic noise, which naturally arise when some of the data come from higher-quality sources than others. The technique handles heteroscedasticity by incorporating it into the statistical model of probabilistic PCA. The resulting optimization problem is an interesting nonconvex problem that is related to, but seemingly not solved by, the singular value decomposition, and this paper derives an expectation maximization (EM) algorithm for it. Numerical experiments illustrate the benefits of using the proposed method to combine samples with heteroscedastic noise in a single analysis, as well as the benefits of careful initialization for the EM algorithm. Index Terms: Principal component analysis, heterogeneous data, maximum likelihood estimation, latent factors
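To make the approach in the abstract above concrete, here is a minimal EM-style (ECM) sketch of probabilistic PCA with group-wise noise variances, assuming the standard heteroscedastic model y_i = F z_i + eps_i with z_i ~ N(0, I_k) and eps_i ~ N(0, v_g I_d) for a sample i in group g. The function name, the eigendecomposition-based initialization, and the update ordering are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def heteroscedastic_ppca_em(Y, groups, k, n_iter=200):
    """EM-style sketch of probabilistic PCA with per-group noise variances.

    Y      : (n, d) data matrix, rows are (zero-mean) samples
    groups : (n,) integer label per sample; samples in a group share a noise variance
    k      : number of latent factors
    """
    n, d = Y.shape
    labels = np.unique(groups)

    # "Careful initialization": factors from the eigendecomposition of the sample covariance.
    evals, evecs = np.linalg.eigh(Y.T @ Y / n)
    F = evecs[:, -k:] * np.sqrt(np.maximum(evals[-k:], 1e-8))
    v = {g: float(np.var(Y[groups == g])) for g in labels}   # rough per-group noise variances

    for _ in range(n_iter):
        # E-step: posterior moments of the latent factors (covariance depends only on the group).
        A = np.zeros((d, k))   # accumulates sum_i y_i E[z_i]^T / v_g
        B = np.zeros((k, k))   # accumulates sum_i E[z_i z_i^T] / v_g
        Ez = np.zeros((n, k))
        Szz = {}
        for g in labels:
            idx = groups == g
            Sig = np.linalg.inv(np.eye(k) + F.T @ F / v[g])        # posterior covariance of z_i
            Ez[idx] = (Y[idx] @ F @ Sig) / v[g]                    # posterior means of z_i
            Szz[g] = Ez[idx].T @ Ez[idx] + idx.sum() * Sig         # sum of E[z_i z_i^T] over group g
            A += Y[idx].T @ Ez[idx] / v[g]
            B += Szz[g] / v[g]

        # M-step: update the factor matrix, then each group's noise variance.
        F = A @ np.linalg.inv(B)
        for g in labels:
            idx = groups == g
            resid = (np.sum(Y[idx] ** 2)
                     - 2.0 * np.sum((Y[idx] @ F) * Ez[idx])
                     + np.trace(F.T @ F @ Szz[g]))
            v[g] = resid / (idx.sum() * d)

    return F, v
```

Each pass reuses the same posterior moments for both conditional updates, so it is a generalized (ECM) step rather than an exact joint maximization; that simplification is purely for brevity.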
  5. Woodruff, D (Ed.)
    This paper studies delegation in a model of discrete choice. In the delegation problem, an uninformed principal must consult an informed agent to make a decision. Both the agent and the principal have preferences over the decided-upon action which vary based on the state of the world, and which may not be aligned. The principal may commit to a mechanism, which maps reports of the agent to actions. When this mechanism is deterministic, it can take the form of a menu of actions, from which the agent simply chooses upon observing the state. In this case, the principal is said to have delegated the choice of action to the agent. We consider a setting where the decision being delegated is a choice of a utility-maximizing action from a set of several options. We assume the shared portion of the agent's and principal's utilities is drawn from a distribution known to the principal, and that utility misalignment takes the form of a known bias for or against each action. We provide tight approximation analyses for simple threshold policies under three increasingly general sets of assumptions. With independently distributed utilities, we prove a 3-approximation. When the agent has an outside option the principal cannot rule out, the constant approximation fails, but we prove a (log ρ / log log ρ)-approximation, where ρ is the ratio of the maximum value to the optimal utility. We also give a weaker but tight bound that holds for correlated values, and complement our upper bounds with hardness results. One special case of our model is utility-based assortment optimization, for which our results are new.
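As a toy illustration of menu delegation in the spirit of the abstract above (the distributions, the bias convention, and the bias-threshold family of menus below are assumptions, not the paper's construction): the principal commits to a subset of actions, the agent picks his utility-maximizing permitted action after observing the realized shared values, and the principal's expected payoff is compared to the first-best and to full delegation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (all numbers made up): action i has shared value x_i and a known agent bias b_i;
# the principal's payoff from action i is x_i, the agent's is x_i + b_i.
n_actions, n_draws = 8, 20000
bias = rng.uniform(-1.0, 1.0, n_actions)
X = rng.exponential(1.0, size=(n_draws, n_actions))       # independent shared values per action

def principal_payoff(menu_mask):
    """Expected principal payoff when the agent chooses his favorite action in the menu."""
    agent_util = np.where(menu_mask, X + bias, -np.inf)    # actions outside the menu are unavailable
    choice = agent_util.argmax(axis=1)
    return X[np.arange(n_draws), choice].mean()

first_best = X.max(axis=1).mean()                          # principal picks her best action herself
full_delegation = principal_payoff(np.ones(n_actions, dtype=bool))

# One illustrative "threshold" family of menus: permit every action whose bias is at most tau.
best_val, best_tau = max((principal_payoff(bias <= tau), tau) for tau in np.sort(bias))
print(f"first-best {first_best:.3f} | full delegation {full_delegation:.3f} | "
      f"best bias-threshold menu {best_val:.3f} (tau = {best_tau:.2f})")
```

The comparison only shows the mechanics of evaluating a menu against the first-best; the paper's guarantees concern how close simple threshold policies come to the optimum over all menus, under the assumptions stated in the abstract.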