Purpose: This study aimed to investigate how honest participants perceived an attacker to be during shoulder surfing scenarios that varied in terms of which Principle of Persuasion in Social Engineering (PPSE) was used, whether perceived honesty changed as scenarios progressed, and whether any changes were greater in some scenarios than others. Design/methodology/approach: Participants read one of six shoulder surfing scenarios. Five depicted an attacker using one of the PPSEs. The sixth depicted an attacker using as few PPSEs as possible and served as a control condition. Participants then rated perceived attacker honesty. Findings: The results revealed that (1) honesty ratings in each condition were equal at the beginning of the conversation, (2) participants in each condition perceived the attacker to be honest at the beginning of the conversation, (3) perceived attacker honesty declined when the attacker requested that the target perform an action that would afford shoulder surfing, (4) the decline was greater when the Distraction and Social Proof PPSEs were used, (5) participants perceived the attacker to be dishonest when such requests used the Distraction and Social Proof PPSEs, and (6) perceived attacker honesty did not change when the attacker used the target's computer. Originality/value: To the best of the authors' knowledge, this experiment is the first to investigate how persuasion tactics affect perceptions of attackers during shoulder surfing attacks. These results have important implications for shoulder surfing prevention training programs and penetration tests.
How Perceptions of Caller Honesty Vary During Vishing Attacks That Include Highly Sensitive or Seemingly Innocuous Requests
Objective: To understand how aspects of vishing calls (phishing phone calls) influence perceived visher honesty. Background: Little is understood about how targeted individuals behave during vishing attacks. According to truth-default theory, people assume others are being honest until something triggers their suspicion. We investigated whether that holds during vishing attacks. Methods: Twenty-four participants read written descriptions of eight real-world vishing calls. Half included highly sensitive requests; the remainder included seemingly innocuous requests. Participants rated visher honesty at multiple points during the conversations. Results: Participants initially perceived vishers to be honest. Honesty ratings decreased before requests occurred. Ratings decreased further in response to highly sensitive requests, but not to seemingly innocuous requests. Ratings recovered somewhat, but only after highly sensitive requests. Conclusions: The present results revealed five important insights: (1) people begin vishing conversations in the truth-default state, (2) certain aspects of vishing conversations serve as triggers, (3) other aspects of vishing conversations do not serve as triggers, (4) in certain situations, people's perceptions of visher honesty improve, and, more generally, (5) truth-default theory may be a useful tool for understanding how targeted individuals behave during vishing attacks. Application: Those developing systems that help users deal with suspected vishing attacks, or developing penetration testing plans, should consider (1) targeted individuals' truth bias, (2) the influence of visher demeanor on the likelihood of deception detection, (3) the influence of fabricated situations surrounding vishing requests on the likelihood of deception detection, and (4) targeted individuals' lack of concern about seemingly innocuous requests.
- Award ID(s):
- 1723765
- PAR ID:
- 10281803
- Date Published:
- Journal Name:
- Human Factors: The Journal of the Human Factors and Ergonomics Society
- ISSN:
- 0018-7208
- Page Range / eLocation ID:
- 001872082110128
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
We examined how robots can successfully serve as moral advisors for humans. We evaluated the effectiveness of moral advice grounded in deontological, virtue, and Confucian role ethics frameworks in encouraging humans to make honest decisions. Participants were introduced to a tempting situation where extra monetary gain could be earned by choosing to cheat (i.e., violating the norm of honesty). Prior to their decision, a robot encouraged honest choices by offering a piece of moral advice grounded in one of the three ethics frameworks. While the robot’s advice was overall not effective at discouraging dishonest choices, there was preliminary evidence indicating the relative effectiveness of moral advice drawn from deontology. We also explored how different cultural orientations (i.e., vertical and horizontal collectivism and individualism) influence honest decisions across differentially-framed moral advice. We found that individuals with a strong cultural orientation of establishing their own power and status through competition (i.e., high vertical individualism) were more likely to make dishonest choices, especially when moral advice was drawn from virtue ethics. Our findings suggest the importance of considering different ethical frameworks and cultural differences to design robots that can guide humans to comply with the norm of honesty.
-
Trust is implicit in many online text conversations—striking up new friendships, or asking for tech support. But trust can be betrayed through deception. We study the language and dynamics of deception in the negotiation-based game Diplomacy, where seven players compete for world domination by forging and breaking alliances with each other. Our study with players from the Diplomacy community gathers 17,289 messages annotated by the sender for their intended truthfulness and by the receiver for their perceived truthfulness. Unlike existing datasets, this captures deception in long-lasting relationships, where the interlocutors strategically combine truth with lies to advance objectives. A model that uses power dynamics and conversational contexts can predict when a lie occurs nearly as well as human players.
-
Meta-analyses have not shown emotions to be significant predictors of deception. Criticisms of this conclusion argued that individuals must be engaged with each other in higher stake situations for such emotions to manifest, and that these emotions must be evaluated in their verbal context (Frank and Svetieva in J Appl Res Memory Cognit 1:131–133, 10.1016/j.jarmac.2012.04.006, 2012). This study examined behavioral synchrony as a marker of engagement in higher stakes truthful and deceptive interactions, and then compared the differences in facial expressions of fear, contempt, disgust, anger, and sadness not consistent with the verbal content. Forty-eight pairs of participants were randomly assigned to interviewer and interviewee, and the interviewee was assigned to steal either a watch or a ring and to lie about the item they stole, and tell the truth about the other, under conditions of higher stakes of up to $30 rewards for successful deception, and $0 plus having to write a 15-min essay for unsuccessful deception. The interviews were coded for expression of emotions using EMFACS (Friesen and Ekman in EMFACS-7: emotional facial action coding system, 1984). Synchrony was demonstrated by the pairs of participants expressing overlapping instances of happiness (AU6 + 12). A 3 (low, moderate, high synchrony) × 2 (truth, lie) mixed-design ANOVA found that negative facial expressions of emotion were a significant predictor of deception, but only when they were not consistent with the verbal content, in the moderate and high synchrony conditions. This finding is consistent with data and theorizing showing that with higher stakes, or with higher engagement, emotions can be a predictor of deception.
-
Recent research has used virtual environments (VEs), as presented via virtual reality (VR) headsets, to study human behavior in hypothetical fire scenarios. One goal of using VEs in fire scenarios is to elicit patterns of behavior which more closely align to how individuals would react to real fire emergency situations. The present study investigated whether elicited behaviors and perceived risk varied during fire scenarios presented as VEs via two viewing conditions. These included a VR condition, where the VE was rendered as 360-degree videos presented in a VR headset, and a screen condition, where VEs were rendered as fixed-view videos via a computer monitor screen. We predicted that the selection of actions during the scenario would vary between conditions, that participants would rate fires as more dangerous if they developed more quickly and when smoke was rendered as thicker, and that participants would report greater levels of immersion in the VR condition. A total of 159 participants completed a decision-making task where they viewed videos of an incipient fire in a residential building and judged what action to take. Initial action responses to the fire scenarios varied between both viewing and smoke conditions, with those assigned to the thicker smoke and screen conditions being more likely to take protective action. Risk ratings also varied by smoke condition, with evidence of higher perceived risk for thicker smoke. Several factors of self-reported immersion (namely ‘interest’, ‘emotional attachment’, ‘focus of attention’, and ‘flow’) were associated with risk ratings, with perceived presence associated with initial actions. The present study provides evidence that enhancing immersion and perceived risk in a VE contributes to a different pattern of behaviors during simulated fire decision-making tasks. 
While our investigation only addressed the idea of presence in an environment, future research should investigate the relative contribution of interactivity and consequences within the environment to further identify how behaviors during simulated fire scenarios are affected by each of these factors.