Title: It Takes Two to Lie: One to Lie, and One to Listen
Trust is implicit in many online text conversations—striking up new friendships, or asking for tech support. But trust can be betrayed through deception. We study the language and dynamics of deception in the negotiation-based game Diplomacy, where seven players compete for world domination by forging and breaking alliances with each other. Our study with players from the Diplomacy community gathers 17,289 messages annotated by the sender for their intended truthfulness and by the receiver for their perceived truthfulness. Unlike existing datasets, this captures deception in long-lasting relationships, where the interlocutors strategically combine truth with lies to advance objectives. A model that uses power dynamics and conversational contexts can predict when a lie occurs nearly as well as human players.
Award ID(s):
1750615 1910147
NSF-PAR ID:
10176522
Journal Name:
Proceedings of ACL
Sponsoring Org:
National Science Foundation
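To give a concrete feel for the lie-prediction task described in the abstract above, here is a minimal sketch of a classifier that combines a "power" feature (e.g., the difference in game strength between sender and receiver) with simple conversational statistics. The feature set and toy data are illustrative assumptions, not the authors' actual model or the released dataset.

```python
# Minimal sketch of a lie-detection classifier in the spirit of the paper:
# a power-differential feature plus simple conversational features.
# The features and toy data below are illustrative assumptions only.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Each row: [sender_power - receiver_power, message_length, n_prior_messages]
X = np.array([
    [ 2, 45, 10],
    [-1, 12,  3],
    [ 3, 80, 25],
    [ 0, 20,  7],
    [-2, 60, 15],
    [ 1, 33,  9],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = sender intended a lie, 0 = truthful

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[2, 50, 12]])[0, 1])  # estimated P(lie) for a new message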
More Like this
  1. Deception is a crucial tool in the cyberdefence repertoire, enabling defenders to leverage their informational advantage to reduce the likelihood of successful attacks. One way deception can be employed is through obscuring, or masking, some of the information about how systems are configured, increasing the attacker's uncertainty about their targets. We present a novel game-theoretic model of the resulting defender-attacker interaction, where the defender chooses a subset of attributes to mask, while the attacker responds by choosing an exploit to execute. The strategies of both players have combinatorial structure with complex informational dependencies, and therefore even representing these strategies is not trivial. First, we show that the problem of computing an equilibrium of the resulting zero-sum defender-attacker game can be represented as a linear program with a combinatorial number of system configuration variables and constraints, and we develop a constraint generation approach for solving this problem. Next, we present a novel, highly scalable approach for approximately solving such games by representing the strategies of both players as neural networks. The key idea is to represent the defender's mixed strategy using a deep neural network generator, trained with an alternating gradient-descent-ascent algorithm analogous to the training of Generative Adversarial Networks. Our experiments, as well as a case study, demonstrate the efficacy of the proposed approach.
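The first step the abstract describes, computing an equilibrium of a zero-sum game as a linear program, can be illustrated on a toy payoff matrix; the real game needs constraint generation or neural strategies because the configuration space is combinatorial. The 2x2 matrix below is an illustrative assumption.

```python
# Minimal sketch: equilibrium of a small zero-sum defender-attacker game
# via linear programming. The toy payoff matrix is an illustrative
# assumption; the paper's game has combinatorially many configurations.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, 1.0],   # defender payoff for (mask-set i, exploit j)
              [0.0, 2.0]])
m, n = A.shape

# Variables: defender mixed strategy x (m entries) and game value v.
# Maximize v  <=>  minimize -v.
c = np.zeros(m + 1)
c[-1] = -1.0
# For every attacker exploit j:  v - sum_i x_i * A[i, j] <= 0
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # x sums to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]              # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("defender strategy:", res.x[:m], "game value:", res.x[-1])
```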
  2. Motivated by kidney exchange, we study the following mechanism-design problem: On a directed graph (of transplant compatibilities among patient-donor pairs), the mechanism must select a simple path (a chain of transplantations) starting at a distinguished vertex (an altruistic donor) such that the total length of this path is as large as possible (a maximum number of patients receive a kidney). However, the mechanism does not have direct access to the graph. Instead, the vertices are partitioned over multiple players (hospitals), and each player reports a subset of her vertices to the mechanism. In particular, a player may strategically omit vertices to increase how many of her vertices lie on the path returned by the mechanism. Our objective is to find mechanisms that limit incentives for such manipulation while producing long paths. Unfortunately, in worst-case instances, competing with the overall longest path is impossible while incentivizing (approximate) truthfulness, i.e., requiring that hiding nodes cannot increase a player's utility by more than a factor of 1 + o(1). We therefore adopt a semi-random model where o(n) random edges are added to worst-case instances. While it remains impossible for truthful mechanisms to compete with the overall longest path, we give a truthful mechanism that competes with a weaker but non-trivial benchmark: the length of any path whose subpaths within each player have a minimum average length. In fact, our mechanism satisfies an even stronger notion of truthfulness, which we call matching-time incentive compatibility: each player must not only report her nodes truthfully but also must not stop the returned path at any of her nodes in order to divert it to a continuation inside her own subgraph.
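As a minimal sketch of the underlying optimization (not the incentive-compatible mechanism itself), the following brute-force search finds the longest chain starting at the altruistic donor on a toy compatibility graph; the graph and vertex names are illustrative assumptions.

```python
# Minimal sketch of the objective: the longest simple path (chain of
# transplants) starting from a distinguished altruistic donor.
# Brute-force DFS, fine for toy instances; the paper's contribution is
# the truthful mechanism around this objective, not this search.
def longest_chain(graph, start):
    """graph: dict vertex -> list of compatible successor vertices."""
    best = [start]

    def dfs(path, visited):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                dfs(path, visited)
                path.pop()
                visited.remove(nxt)

    dfs([start], {start})
    return best

# Toy compatibility graph: 'altruist' can start a chain through pairs a-d.
g = {"altruist": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["d"]}
print(longest_chain(g, "altruist"))  # ['altruist', 'a', 'c', 'd'] (or via 'b')
```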
  3. Misinformation runs rampant on social media and has been tied to adverse health behaviors such as vaccine hesitancy. Crowdsourcing can be a means to detect and impede the spread of misinformation online. However, past studies have not deeply examined the individual characteristics, such as cognitive factors and biases, that predict crowdworker accuracy at identifying misinformation. In our study (n = 265), Amazon Mechanical Turk (MTurk) workers and university students assessed the truthfulness and sentiment of COVID-19-related tweets and answered several surveys on personal characteristics. Results support the viability of crowdsourcing for assessing misinformation and content stance (i.e., sentiment) related to ongoing and politically charged topics like the COVID-19 pandemic; however, alignment with experts depends on who is in the crowd. Specifically, we find that respondents with high Cognitive Reflection Test (CRT) scores, high conscientiousness, and high trust in medical scientists are more aligned with experts, while respondents with a high Need for Cognitive Closure (NFCC) and those who lean politically conservative are less aligned with experts. We also see differences between recruitment platforms: university students are on average more aligned with experts than MTurk workers, most likely due to overall differences in participant characteristics on each platform. These results offer transparency into how crowd composition affects misinformation and stance assessment and have implications for future crowd recruitment and filtering practices.
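A minimal sketch of the kind of alignment analysis described: score each respondent by agreement with expert truthfulness labels, then relate that score to an individual characteristic such as CRT. The data here is toy data for illustration, not the study's.

```python
# Minimal sketch, assuming toy data: per-respondent alignment with experts
# and its correlation with a cognitive measure (here, CRT score).
import numpy as np

expert = np.array([1, 0, 1, 1, 0])        # expert truthfulness labels
ratings = np.array([[1, 0, 1, 1, 0],      # one row per respondent
                    [1, 1, 1, 0, 0],
                    [0, 0, 1, 1, 1]])
crt = np.array([3, 1, 0])                 # CRT score per respondent

alignment = (ratings == expert).mean(axis=1)   # fraction agreeing with experts
print("per-respondent alignment:", alignment)
print("correlation(CRT, alignment):", np.corrcoef(crt, alignment)[0, 1])
```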
  4. Trust, dependability, cohesion, and capability are integral to an effective team, and these attributes matter just as much for teams of robots. When multiple teams with competing incentives are tasked, one available strategy may be to weaken, influence, or sway the attributes of other teams and limit their understanding of their full range of options. Such strategies are widely found in nature and in sporting contests, e.g., feints and misdirection. This talk focuses on one class of higher-level strategies for multi-robot teams, namely intentional misdirection using shills or confederates where needed, and the ethical considerations associated with deploying such teams. As multi-robot systems become more autonomous, distributed, networked, and numerous, and gain more capability to make critical decisions, the prospect of both intentional and unintentional misdirection must be anticipated. While the benefits are clearly apparent to the team performing the deception, ethical questions surrounding the use of misdirection or other forms of deception are quite real.
  5. The growing adoption of digital assets (including but not limited to cryptocurrencies, tokens, and even identities) calls for secure and robust digital asset custody. A common way to distribute the ownership of a digital asset is an (M, N)-threshold access structure. However, traditional access structures leave users with a painful choice. Setting M = N seems attractive, as it offers maximum resistance to share compromise, but it also causes maximum brittleness: a single lost share renders the asset permanently frozen, inducing paralysis. Lowering M improves availability but degrades security. In this paper, we introduce techniques that address this impasse by making general cryptographic access structures dynamic. The core idea is what we call Paralysis Proofs: evidence that players or shares are provably unavailable. Using Paralysis Proofs, we show how to construct a Dynamic Access Structure System (DASS), which can securely and flexibly update target access structures without a trusted third party. We present DASS constructions that combine a trust anchor (a trusted execution environment or smart contract) with a censorship-resistant channel in the form of a blockchain. We offer a formal framework for specifying DASS policies and show how to achieve critical security and usability properties (safety, liveness, and paralysis-freeness) in a DASS. To illustrate the wide range of applications, we present three use cases of DASSes for improving digital asset custody: a multi-signature scheme that can "downgrade" the threshold should players become unavailable; a hybrid scheme where a centralized custodian cannot refuse service; and a smart-contract-based scheme that supports recovery from unexpected bugs.
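A minimal sketch of the first use case, the threshold "downgrade" on a paralysis proof: in the paper this logic lives in a trust anchor (TEE or smart contract) fed by a blockchain-based liveness challenge, whereas here it is plain Python with a hypothetical policy class for illustration.

```python
# Minimal sketch of downgrading an (M, N) threshold when shares are
# proven unavailable. Hypothetical class for illustration; the paper's
# construction enforces this inside a TEE or smart contract, with
# paralysis proofs established via a blockchain challenge.
class DynamicThresholdPolicy:
    def __init__(self, m, n):
        assert 1 <= m <= n
        self.m, self.n = m, n
        self.unavailable = set()

    def apply_paralysis_proof(self, player):
        """Record a proof that `player`'s share is unavailable and
        downgrade the threshold so the asset cannot become frozen."""
        if player in self.unavailable:
            return
        self.unavailable.add(player)
        self.n -= 1
        self.m = min(self.m, self.n)  # never require more shares than remain

    def authorizes(self, signers):
        live = {s for s in signers if s not in self.unavailable}
        return len(live) >= self.m

policy = DynamicThresholdPolicy(m=3, n=3)   # brittle 3-of-3 multisig
policy.apply_paralysis_proof("carol")       # carol provably unresponsive
print(policy.authorizes({"alice", "bob"}))  # True: now effectively 2-of-2
```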