Deception is a crucial tool in the cyberdefence repertoire, enabling defenders to leverage their informational advantage to reduce the likelihood of successful attacks. One way deception can be employed is through obscuring, or masking, some of the information about how systems are configured, increasing the attacker's uncertainty about their targets. We present a novel game-theoretic model of the resulting defender-attacker interaction, where the defender chooses a subset of attributes to mask, while the attacker responds by choosing an exploit to execute. The strategies of both players have combinatorial structure with complex informational dependencies, so even representing these strategies is non-trivial. First, we show that the problem of computing an equilibrium of the resulting zero-sum defender-attacker game can be represented as a linear program with a combinatorial number of system-configuration variables and constraints, and we develop a constraint-generation approach for solving this problem. Next, we present a novel, highly scalable approach for approximately solving such games by representing the strategies of both players as neural networks. The key idea is to represent the defender's mixed strategy with a deep neural network generator, trained by an alternating gradient-descent-ascent algorithm analogous to the training of Generative Adversarial Networks. Our experiments, as well as a case study, demonstrate the efficacy of the proposed approach.
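The alternating gradient-descent-ascent dynamic described above can be illustrated on a toy zero-sum objective. The abstract's actual method uses neural-network strategies; the scalar saddle-point problem below is only a sketch of the training dynamic, with the objective and step size chosen for illustration.

```python
# Toy alternating gradient-descent-ascent (GDA) on the zero-sum
# objective f(x, y) = x^2 - y^2 + x*y.  The minimizing player (defender)
# descends in x; the maximizing player (attacker) ascends in y.  The
# unique saddle point is (0, 0).

def gda(steps=2000, lr=0.05):
    x, y = 1.0, -1.0          # arbitrary initial strategies
    for _ in range(steps):
        gx = 2 * x + y        # df/dx at the current point
        x -= lr * gx          # defender: gradient descent step
        gy = x - 2 * y        # df/dy after the defender's update
        y += lr * gy          # attacker: gradient ascent step
    return x, y

x, y = gda()
print(abs(x) < 1e-3 and abs(y) < 1e-3)  # both players converge to the saddle
```

Because this objective is strongly convex in x and strongly concave in y, plain alternating GDA converges; on harder (e.g. bilinear) games, GAN-style training typically needs averaging or extragradient corrections.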
                            It Takes Two to Lie: One to Lie, and One to Listen
                        
                    
    
            Trust is implicit in many online text conversations—striking up new friendships, or asking for tech support. But trust can be betrayed through deception. We study the language and dynamics of deception in the negotiation-based game Diplomacy, where seven players compete for world domination by forging and breaking alliances with each other. Our study with players from the Diplomacy community gathers 17,289 messages annotated by the sender for their intended truthfulness and by the receiver for their perceived truthfulness. Unlike existing datasets, this captures deception in long-lasting relationships, where the interlocutors strategically combine truth with lies to advance objectives. A model that uses power dynamics and conversational contexts can predict when a lie occurs nearly as well as human players. 
        
    
    
- PAR ID: 10176522
- Date Published:
- Journal Name: Proceedings of ACL
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In recent years, there has been growing interest in the automatic detection of deceptive behavior. This attention is justified by the wide range of applications that deception detection can have, especially in fields such as criminology. This study aims to contribute to the field of deception detection by capturing transcribed data, analyzing textual data using Natural Language Processing (NLP) techniques, and comparing the performance of conventional models using linguistic features with the performance of Large Language Models (LLMs). In addition, the significance of the applied linguistic features is examined using different feature-selection techniques. Through extensive experiments, we evaluated the effectiveness of both conventional and deep NLP models in detecting deception from speech. Applying different models to the Real-Life Trial dataset, a single-layer Bidirectional Long Short-Term Memory (BiLSTM) model tuned with early stopping outperformed the other models, achieving an accuracy of 93.57% and an F1 score of 94.48%.
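The "conventional model with linguistic features" pipeline mentioned above can be sketched minimally: extract a few hand-crafted cues from an utterance and score them with a logistic model. The features and weights below are hypothetical placeholders (not the study's actual feature set, whose weights would be learned from labeled transcripts).

```python
import math

# Illustrative linguistic-feature scorer for deception detection.
# Feature choices and weights are made-up examples, not the paper's.

HEDGES = {"maybe", "perhaps", "possibly", "guess", "think"}

def features(text):
    words = text.lower().split()
    n = max(len(words), 1)
    return [
        len(words),                                       # utterance length
        sum(w in ("i", "me", "my") for w in words) / n,   # 1st-person rate
        sum(w in HEDGES for w in words) / n,              # hedging-word rate
    ]

def deception_score(text, weights=(-0.01, 2.0, 3.0), bias=-0.5):
    # Logistic combination of the features: a probability-like score in (0, 1).
    z = bias + sum(w * f for w, f in zip(weights, features(text)))
    return 1 / (1 + math.exp(-z))

print(deception_score("I think maybe I was there") >
      deception_score("The meeting ended at noon"))
```

A BiLSTM, by contrast, learns such cues directly from word sequences instead of relying on a fixed feature list, which is why feature-selection analysis applies only to the conventional models.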
- We investigate the transient and steady-state dynamics of the Bennati-Dragulescu-Yakovenko money game in the presence of probabilistic cheaters, who can misrepresent their financial status by claiming to have no money. We derive the steady-state wealth distribution per player analytically, and we show how the presence of hidden cheaters can be inferred from the relative variance of wealth per player. In scenarios with a finite number of cheaters amidst an infinite pool of honest players, we identify a critical probability of cheating at which the total wealth owned by the cheaters experiences a second-order discontinuity. Below this point, the transition probability to lose money is larger than the probability to gain; conversely, above this point, the direction is reversed. We further establish a threshold cheating probability at which cheaters collectively possess half of the total wealth in the game. Lastly, we provide bounds on the rate at which both cheaters and honest players can gain or lose wealth, contributing to a deeper understanding of deception in asset-exchange models.
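The exchange dynamic with cheaters is easy to simulate. The sketch below assumes a simple update rule (a random "loser" pays one unit to a random "winner", and a cheating loser falsely claims to be broke with probability `p_cheat`); the paper's exact rule may differ, and all parameter values here are illustrative.

```python
import random

# Monte-Carlo sketch of the Bennati-Dragulescu-Yakovenko money game
# with probabilistic cheaters.  Players 0..n_cheaters-1 are cheaters.

def simulate(n=100, n_cheaters=10, p_cheat=0.5, steps=200_000, seed=1):
    rng = random.Random(seed)
    wealth = [10] * n                # everyone starts with 10 units
    for _ in range(steps):
        loser, winner = rng.sample(range(n), 2)
        if wealth[loser] == 0:
            continue                 # genuinely broke: nothing to transfer
        if loser < n_cheaters and rng.random() < p_cheat:
            continue                 # cheater falsely claims to be broke
        wealth[loser] -= 1           # unit transfer conserves total wealth
        wealth[winner] += 1
    return wealth

w = simulate()
print(sum(w) == 100 * 10)                   # total wealth is conserved
print(sum(w[:10]) / 10 > sum(w[10:]) / 90)  # cheaters are richer on average
```

Since cheaters skip roughly half of their payments while collecting winnings in full, they steadily drain wealth from honest players, matching the qualitative picture in the abstract.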
- Trust, dependability, cohesion, and capability are integral to an effective team. These attributes are the same for teams of robots. When multiple teams with competing incentives are tasked, a strategy, if available, may be to weaken, influence, or sway the attributes of other teams and limit their understanding of their full range of options. Such strategies are widely found in nature and in sporting contests, such as feints, misdirection, etc. This talk focuses on one class of higher-level strategies for multi-robots, i.e., to intentionally misdirect using shills or confederates where needed, and the ethical considerations associated with deploying such teams. As multi-robot systems become more autonomous, distributed, networked, numerous, and capable of making critical decisions, the prospect for intentional and unintentional misdirection must be anticipated. While benefits are clearly apparent to the team performing the deception, ethical questions surrounding the use of misdirection or other forms of deception are quite real.
- Deception has been proposed in the literature as an effective defense mechanism to address Advanced Persistent Threats (APTs). However, administering deception in a cost-effective manner requires a good understanding of the attack landscape. The attacks mounted by APT groups are highly diverse and sophisticated in nature and can render traditional signature-based intrusion detection systems useless. This necessitates the development of behavior-oriented defense mechanisms. In this paper, we develop Decepticon (Deception-based Countermeasure), a Hidden Markov Model-based framework in which indicators of compromise (IoCs) are used as the observable features to aid in detection. This framework helps select an appropriate deception script when faced with APTs or other similar malware and triggers an appropriate defensive response. The effectiveness of the model and the associated framework is demonstrated by considering ransomware as the offending APT in a networked system.
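The HMM idea above — hidden attack states, IoCs as observations — reduces to standard forward filtering. The sketch below uses two made-up states and a toy emission table; all probabilities are placeholders for illustration, not Decepticon's actual parameters.

```python
# Forward filtering for a two-state HMM where indicators of compromise
# (IoCs) are the observations.  A defender could trigger a deception
# script once the filtered probability of "compromised" crosses a threshold.

STATES = ["benign", "compromised"]
INIT   = [0.95, 0.05]
TRANS  = [[0.9, 0.1],     # P(next state | benign)
          [0.2, 0.8]]     # P(next state | compromised)
# P(observed IoC | state), columns ordered as STATES:
EMIT   = {"none":    [0.80, 0.10],
          "scan":    [0.15, 0.30],
          "encrypt": [0.05, 0.60]}

def filtered_compromise_prob(iocs):
    belief = INIT[:]
    for obs in iocs:
        # Predict: propagate the belief through the transition model.
        pred = [sum(belief[i] * TRANS[i][j] for i in range(2)) for j in range(2)]
        # Update: weight by the emission likelihood, then normalize.
        post = [pred[j] * EMIT[obs][j] for j in range(2)]
        z = sum(post)
        belief = [p / z for p in post]
    return belief[1]   # P(compromised | observations so far)

p = filtered_compromise_prob(["none", "scan", "encrypt", "encrypt"])
print(p > 0.5)  # repeated ransomware-like IoCs push belief toward "compromised"
```

The same recursion scales to more attack stages and richer IoC alphabets; in practice the transition and emission tables would be estimated from labeled attack traces rather than set by hand.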
 An official website of the United States government