Prediction markets are powerful tools for eliciting and aggregating beliefs from strategic agents. However, in current prediction markets, agents may dissipate social welfare by competing to be the first to update the market. We initiate the study of the trade-off between how quickly the market aggregates information and how much that information costs. We design markets that aggregate timely information from strategic agents so as to maximize social welfare. To this end, the market must incentivize agents to invest the correct amount of effort in acquiring information: quickly enough to be useful, but not faster (and more expensively) than necessary. The market must also ensure that agents report their information truthfully and on time. We consider two settings: in the first, information is valuable only before a deadline; in the second, the value of information decreases as time passes. We demonstrate our mechanisms with both theorems and simulations.
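To make the two settings concrete, here is a minimal sketch, not the paper's actual mechanism: a proper-scoring-rule payment that is either cut off at a deadline or discounted as time passes. The Brier score and the exponential decay schedule are illustrative assumptions.

```python
import math

def brier_score(report: float, outcome: int) -> float:
    """Quadratic (Brier) proper score in [0, 1] for a probability report on a binary event."""
    return 1.0 - (outcome - report) ** 2

def deadline_payment(report: float, outcome: int, t: float, deadline: float) -> float:
    """Setting 1: information has value only before a deadline."""
    return brier_score(report, outcome) if t <= deadline else 0.0

def decaying_payment(report: float, outcome: int, t: float, rate: float = 1.0) -> float:
    """Setting 2: the value of information decays with time.
    The exponential schedule is a hypothetical choice, not the paper's."""
    return math.exp(-rate * t) * brier_score(report, outcome)
```

Under a payment of this shape, an agent weighs the cost of acquiring information faster against the discounted reward, which is exactly the trade-off the mechanisms are designed to balance.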
The Possibilities and Limitations of Private Prediction Markets
We consider the design of private prediction markets, financial markets designed to elicit predictions about uncertain events without revealing too much information about market participants' actions or beliefs. Our goal is to design market mechanisms in which participants' trades or wagers influence the market's behavior in a way that leads to accurate predictions, yet no single participant has too much influence over what others are able to observe. We study the possibilities and limitations of such mechanisms using tools from differential privacy. We begin by designing a private one-shot wagering mechanism in which bettors specify a belief about the likelihood of a future event and a corresponding monetary wager. Wagers are redistributed among bettors in a way that more highly rewards those with accurate predictions. We provide a class of wagering mechanisms that are guaranteed to satisfy truthfulness, budget balance in expectation, and other desirable properties while additionally guaranteeing ε-joint differential privacy in the bettors' reported beliefs, and we analyze the trade-off between the achievable level of privacy and the sensitivity of a bettor's payment to her own report. We then ask whether it is possible to obtain privacy in dynamic prediction markets, focusing our attention on the popular cost-function framework in which securities with payments linked to future events are bought and sold by an automated market maker. We show that under general conditions, it is impossible for such a market maker to simultaneously achieve bounded worst-case loss and ε-differential privacy without allowing the privacy guarantee to degrade extremely quickly as the number of trades grows (at least logarithmically in the number of trades), making such markets impractical in settings in which privacy is valued. We conclude by suggesting several avenues for potentially circumventing this lower bound.
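For intuition about the redistribution step, the non-private baseline resembles the weighted-score wagering mechanisms of Lambert et al., sketched below with a Brier score. This sketch shows only that baseline; the paper's contribution is a perturbed variant of such payments that additionally satisfies ε-joint differential privacy.

```python
def wswm_payouts(wagers, reports, outcome):
    """Non-private weighted-score wagering mechanism: each bettor's payout scales
    her wager by how her Brier score compares with the wager-weighted average
    score, so accurate predictions gain at the expense of inaccurate ones."""
    scores = [1.0 - (outcome - p) ** 2 for p in reports]        # Brier score in [0, 1]
    total = sum(wagers)
    weighted_avg = sum(m * s for m, s in zip(wagers, scores)) / total
    # Budget balanced: the payouts sum exactly to the total amount wagered.
    return [m * (1.0 + s - weighted_avg) for m, s in zip(wagers, scores)]

# Example: three bettors wager on an event that occurs (outcome = 1).
print(wswm_payouts([10, 10, 20], [0.9, 0.5, 0.2], 1))
# ≈ [13.75, 11.35, 14.9], which sums to the 40 wagered
```

Because each payout depends directly on every reported belief, releasing payments leaks information about reports; this is the sensitivity that the private variants must dampen with noise.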
- Award ID(s): 1850187
- PAR ID: 10223674
- Date Published:
- Journal Name: ACM Transactions on Economics and Computation
- Volume: 8
- Issue: 3
- ISSN: 2167-8375
- Page Range / eLocation ID: 1 to 24
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Modifying attitudes and behaviours related to climate change is difficult. Attempts to offer information, appeal to values and norms or enact policies have shown limited success. Here we examine whether participation in a climate prediction market can shift attitudes by having the market act as a non-partisan adjudicator and by prompting participants to put their ‘money where their mouth is’. Across two field studies, we show that betting on climate events alters: (1) participants’ concern about climate change, (2) support for remedial climate action and (3) knowledge about climate issues. While the effects were dependent on participants’ betting performance in Study 1, they were independent of betting outcomes in Study 2. Overall, our findings suggest that climate prediction markets could offer a promising path to changing people’s climate-related attitudes and behaviour.
Differential privacy is a mathematical concept that provides an information-theoretic security guarantee. While differential privacy has emerged as a de facto standard for guaranteeing privacy in data sharing, the known mechanisms to achieve it come with some serious limitations. Utility guarantees are usually provided only for a fixed, a priori specified set of queries. Moreover, there are no utility guarantees for more complex, but very common, machine learning tasks such as clustering or classification. In this paper we overcome some of these limitations. Working with metric privacy, a powerful generalization of differential privacy, we develop a polynomial-time algorithm that creates a private measure from a data set. This private measure allows us to efficiently construct private synthetic data that are accurate for a wide range of statistical analysis tools. Moreover, we prove an asymptotically sharp min-max result for private measures and synthetic data in general compact metric spaces, for any fixed privacy budget ε bounded away from zero. A key ingredient in our construction is a new superregular random walk, whose joint distribution of steps is as regular as that of independent random variables, yet which deviates from the origin logarithmically slowly.
Differential privacy has become a de facto standard for releasing data in a privacy-preserving way. Creating a differentially private algorithm is a process that often starts with a noise-free (non-private) algorithm. The designer then decides where to add noise, and how much of it to add. This can be a non-trivial process: if not done carefully, the algorithm might either violate differential privacy or have low utility. In this paper, we present DPGen, a program synthesizer that takes in non-private code (without any noise) and automatically synthesizes its differentially private version (with carefully calibrated noise). Under the hood, DPGen uses novel algorithms to automatically generate a sketch program with candidate locations for noise, and then optimizes the privacy proof and noise scales simultaneously on the sketch program. Moreover, DPGen can synthesize sophisticated mechanisms that adaptively process queries until a specified privacy budget is exhausted. When evaluated on standard benchmarks, DPGen is able to generate differentially private mechanisms that optimize simple utility functions within 120 seconds. It is also powerful enough to synthesize adaptive privacy mechanisms.
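To illustrate the kind of transformation DPGen automates, here is a hedged sketch: a noise-free counting query and a differentially private version using the standard Laplace mechanism. The function names and the target query are illustrative assumptions, not DPGen's API; DPGen's contribution is choosing noise locations and scales like these automatically.

```python
import random

def count_over_threshold(data, threshold):
    """Noise-free (non-private) query: how many records exceed the threshold."""
    return sum(1 for x in data if x > threshold)

def private_count_over_threshold(data, threshold, epsilon):
    """ε-differentially private version: adding or removing one record changes
    the count by at most 1 (sensitivity 1), so Laplace noise with scale 1/ε
    suffices. Calibrating this scale is the step the synthesizer automates."""
    true_count = count_over_threshold(data, threshold)
    # Difference of two Exp(ε) draws is Laplace(0, 1/ε).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```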
Differential Privacy (DP) is a mathematical definition that enshrines a formal guarantee that the output of a query does not depend greatly on any individual in the dataset. DP does not formalize a notion of "background information" and does not provide a guarantee about how identifying an output can be to someone who has background information about an individual. In this paper, we argue that privately fine-tuning a pre-trained machine learning model on a private dataset using differential privacy does not always yield meaningful notions of privacy. Simply offering differential privacy guarantees in terms of (ε, δ) is insufficient to ensure human notions of privacy when the original training data is correlated with the fine-tuning dataset. We emphasize that, alongside differential privacy assurances, it is essential to report measures of dataset similarity and model attackability (for which model size can be a proxy). This work is primarily a position piece and a work in progress, arguing for how DP should be used in practice and what future research is needed to better answer those questions.