Purpose: This study aimed to investigate how honest participants perceived an attacker to be during shoulder surfing scenarios that varied in terms of which Principle of Persuasion in Social Engineering (PPSE) was used, whether perceived honesty changed as scenarios progressed, and whether any changes were greater in some scenarios than others.

Design/methodology/approach: Participants read one of six shoulder surfing scenarios. Five depicted an attacker using one of the PPSEs. The other depicted an attacker using as few PPSEs as possible, which served as a control condition. Participants then rated perceived attacker honesty.

Findings: The results revealed that honesty ratings in each condition were equal during the beginning of the conversation; participants in each condition perceived the attacker to be honest during the beginning of the conversation; perceived attacker honesty declined when the attacker requested that the target perform an action that would afford shoulder surfing; the decline was greater when the Distraction and Social Proof PPSEs were used; participants perceived the attacker to be dishonest when such requests were made using the Distraction and Social Proof PPSEs; and perceived attacker honesty did not change when the attacker used the target's computer.

Originality/value: To the best of the authors' knowledge, this experiment is the first to investigate how persuasion tactics affect perceptions of attackers during shoulder surfing attacks. These results have important implications for shoulder surfing prevention training programs and penetration tests.
How social engineers use persuasion principles during vishing attacks
Purpose: This study aims to examine how social engineers use persuasion principles during vishing attacks.

Design/methodology/approach: In total, 86 examples of real-world vishing attacks were found in articles and videos. Each example was coded to determine which persuasion principles were present in that attack and how they were implemented, i.e. what specific elements of the attack contributed to the presence of each persuasion principle.

Findings: Authority (A), social proof (S) and distraction (D) were the most widely used persuasion principles in vishing attacks, followed by liking, similarity and deception (L). These four persuasion principles occurred in a majority of vishing attacks, while commitment, reciprocation and consistency (C) did not. Further, certain sets of persuasion principles (i.e. A, D, L and S; A, C, D, L and S; and A, D and S) were used more than others. It was noteworthy that despite their similarities, those sets of persuasion principles were implemented in different ways, and certain specific ways of implementing certain persuasion principles (e.g. vishers claiming to have authority over the victim) were quite rare.

Originality/value: To the best of the authors' knowledge, this study is the first to investigate how social engineers use persuasion principles during vishing attacks. As such, it provides important insight into how social engineers implement vishing attacks and lays a critical foundation for future research investigating the psychological aspects of vishing attacks. The present results have important implications for vishing countermeasures and education.
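As an illustration of the coding approach described in the abstract, the sketch below tallies how often individual persuasion principles and exact sets of principles appear across a collection of coded attack examples. The example records and counts are invented placeholders, not the study's corpus or coding scheme.

```python
from collections import Counter

# Abbreviations follow the PPSEs named in the abstract:
# A = authority, S = social proof, D = distraction,
# L = liking/similarity/deception, C = commitment/reciprocation/consistency.
PRINCIPLES = {"A", "S", "D", "L", "C"}

# Hypothetical coded examples: each vishing attack is recorded as the set of
# principles a coder judged to be present (placeholder data only).
coded_attacks = [
    {"A", "D", "L", "S"},
    {"A", "C", "D", "L", "S"},
    {"A", "D", "S"},
    {"A", "D", "L", "S"},
    {"A", "S"},
]

# How often each individual principle appears across the coded attacks.
principle_counts = Counter(p for attack in coded_attacks for p in attack)

# How often each exact set of principles appears (frozenset makes sets hashable).
set_counts = Counter(frozenset(attack) for attack in coded_attacks)

print("Per-principle frequency:", dict(principle_counts))
for principle_set, n in set_counts.most_common():
    print(f"{sorted(principle_set)}: {n} attack(s)")
```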
- Award ID(s): 1723765
- PAR ID: 10281805
- Date Published:
- Journal Name: Information & Computer Security
- Volume: ahead-of-print
- Issue: ahead-of-print
- ISSN: 2056-4961
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
This is one of the first accounts of the security analysis of consumer immersive Virtual Reality (VR) systems. This work breaks new ground, coins new terms, and constructs proof-of-concept implementations of attacks related to immersive VR. Our work used the two most widely adopted immersive VR systems, the HTC Vive and the Oculus Rift. More specifically, we were able to create attacks that can potentially disorient users, turn their Head Mounted Display (HMD) camera on without their knowledge, overlay images in their field of vision, and modify VR environmental factors that force them into hitting physical objects and walls. Finally, through a human-participant deception study, we illustrate that VR systems can be exploited to control immersed users and move them to a location in physical space without their knowledge. We term this the Human Joystick Attack. We conclude our work with future research directions and ways to enhance the security of these systems.
Intrusion detection systems (IDS) are one of the effective ways of detecting malicious traffic in computer networks. Though IDS identify malicious activities in a network, it can be difficult for them to detect distributed or coordinated attacks because they have only a single vantage point. To combat this problem, cooperative intrusion detection systems were proposed. In such systems, nodes exchange attack features or signatures with a view to detecting an attack that has previously been detected by one of the other nodes in the system. Exchanging attack features is necessary because zero-day attacks (attacks without a known signature) experienced in different locations are not the same. Although this approach enhances the ability of a single IDS to respond to attacks that have been previously identified by cooperating nodes, malicious activities such as fake data injection, data manipulation or deletion, as well as data consistency, are problems threatening it. In this paper, we propose a solution that leverages blockchain's distributed technology, tamper-proof ability and data immutability to detect and prevent malicious activities and solve the data consistency problems facing cooperative intrusion detection. Focusing on the extraction, storage and distribution stages of cooperative intrusion detection, we develop a blockchain-based solution that securely extracts features or signatures, adds an extra verification step, makes the storage of these signatures and features distributed, and secures data sharing. A performance evaluation of the system with respect to its response time and resistance to feature/signature injection is presented. The results show that the proposed solution protects stored attack features or signatures against malicious data injection, manipulation or deletion, and has low latency.
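The tamper-evidence idea behind the blockchain-based storage stage can be illustrated with a minimal hash-chained log of attack signatures. This is a toy sketch under assumed record fields and function names, not the paper's implementation (which involves distributed nodes, an extra verification step and secured sharing).

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (signature record + previous hash) deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_signature(chain: list, signature: dict) -> None:
    """Append an attack signature as a new block linked to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev_hash, "signature": signature}
    block["hash"] = block_hash({"prev_hash": prev_hash, "signature": signature})
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    """Detect tampering: recompute each hash and check the links between blocks."""
    prev_hash = "0" * 64
    for block in chain:
        expected = block_hash({"prev_hash": block["prev_hash"], "signature": block["signature"]})
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

chain: list = []
append_signature(chain, {"node": "ids-1", "rule": "alert tcp any any -> any 445"})
append_signature(chain, {"node": "ids-2", "rule": "alert udp any any -> any 53"})
print(chain_is_valid(chain))            # True
chain[0]["signature"]["rule"] = "noop"  # simulate malicious modification of a stored signature
print(chain_is_valid(chain))            # False: tampering is detected
```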
Social Virtual Reality Learning Environments (VRLE) offer a new medium for flexible and immersive learning with geo-distributed users. Ensuring user safety in VRLE application domains such as education, flight simulation and military training is of utmost importance. Specifically, there is a need to study the impact of "immersion attacks" (e.g., chaperone attack, occlusion) and other types of attacks/faults (e.g., unauthorized access, network congestion) that may cause user safety issues (i.e., inducing cybersickness). In this paper, we present a novel framework to quantify the security and privacy issues triggered via immersion attacks and other types of attacks/faults. By using a real-world social VRLE, viz. vSocial, and creating a novel attack-fault tree model, we show that such attacks can induce undesirable levels of cybersickness. Next, we convert these attack-fault trees into stochastic timed automata (STA) representations to perform statistical model checking for a given attacker profile. Using this model checking approach, we determine the most vulnerable threat scenarios that can trigger high-occurrence cases of cybersickness for VRLE users. Lastly, we show the effectiveness of our attack-fault tree modeling by incorporating suitable design principles such as hardening, diversity, redundancy and the principle of least privilege to ensure user safety in a VRLE session.
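To make the attack-fault tree idea concrete, the sketch below composes hypothetical basic events (attacks and faults) through AND/OR gates and propagates occurrence probabilities up to a top event such as induced cybersickness. The tree shape, event names and probabilities are invented, and this static calculation merely stands in for the paper's stochastic timed automata and statistical model checking approach.

```python
from math import prod

def event(p: float) -> dict:
    """A basic event (attack or fault) with an assumed occurrence probability."""
    return {"type": "event", "p": p}

def gate(kind: str, *children: dict) -> dict:
    """An AND/OR gate combining child events or sub-gates."""
    return {"type": kind, "children": list(children)}

def probability(node: dict) -> float:
    """Propagate probabilities up the tree, assuming independent children."""
    if node["type"] == "event":
        return node["p"]
    child_ps = [probability(c) for c in node["children"]]
    if node["type"] == "AND":
        return prod(child_ps)
    # OR gate: at least one child occurs.
    return 1.0 - prod(1.0 - p for p in child_ps)

# Hypothetical attack-fault tree for the top event "cybersickness induced".
tree = gate(
    "OR",
    gate("AND", event(0.3), event(0.5)),   # e.g., chaperone attack AND occlusion
    event(0.1),                            # e.g., network congestion fault
    gate("AND", event(0.2), event(0.4)),   # e.g., unauthorized access AND image overlay
)

print(f"P(top event) = {probability(tree):.3f}")
```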
Software repositories, used for wide-scale open software distribution, are a significant vector for security attacks. Software signing provides authenticity, mitigating many such attacks. Developer-managed signing keys pose usability challenges, but certificate-based systems introduce privacy problems. This work, Speranza, uses certificates to verify software authenticity but still provides anonymity to signers using zero-knowledge identity co-commitments. In Speranza, a signer uses an automated certificate authority (CA) to create a private identity-bound signature and proof of authorization. Verifiers check that a signer was authorized to publish a package without learning the signer's identity. The package repository privately records each package's authorized signers, but publishes only commitments to identities in a public map. Then, when issuing certificates, the CA issues the certificate to a distinct commitment to the same identity. The signer then creates a zero-knowledge proof that these are identity co-commitments. We implemented a proof-of-concept for Speranza. We find that costs to maintainers (signing) and end users (verifying) are small (sub-millisecond), even for a repository with millions of packages. Techniques inspired by recent key transparency systems reduce the bandwidth for serving authorization policies to 2 KiB. Server costs in this system are negligible. Our evaluation finds that Speranza is practical on the scale of the largest software repositories. We also emphasize practicality and deployability in this project. By building on existing technology and employing relatively simple and well-established cryptographic techniques, Speranza can be deployed for wide-scale use with only a few hundred lines of code and minimal changes to existing infrastructure. Speranza is a practical way to bring privacy and authenticity together for more trustworthy open-source software.
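The co-commitment idea can be sketched with Pedersen commitments: the repository's public map and the CA-issued certificate each carry a different-looking commitment, yet both bind the same identity. The group parameters below are tiny, insecure toy values, the helper names are invented, and the zero-knowledge equality proof that Speranza actually relies on is omitted; this only illustrates the hiding and binding properties.

```python
import hashlib
import secrets

# Toy group parameters (insecure, illustration only): p = 2q + 1 with q prime,
# g and h generators of the order-q subgroup. In practice p must be large and h
# must be derived so that log_g(h) is unknown (e.g., by hashing to the group).
P, Q = 2039, 1019
G, H = 4, 9

def commit(identity: str, r: int) -> int:
    """Pedersen commitment C = g^m * h^r mod p, where m encodes the identity."""
    m = int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % Q
    return (pow(G, m, P) * pow(H, r, P)) % P

def open_commitment(c: int, identity: str, r: int) -> bool:
    """Check an opening (identity, r) against a commitment; Speranza instead proves
    equality of the committed identities in zero knowledge, without revealing openings."""
    return c == commit(identity, r)

identity = "maintainer@example.org"      # hypothetical signer identity
r_repo = secrets.randbelow(Q)            # randomness for the repository's map entry
r_cert = secrets.randbelow(Q)            # fresh randomness for the CA's certificate

c_repo = commit(identity, r_repo)
c_cert = commit(identity, r_cert)

print(c_repo != c_cert)                            # True (almost surely): unlinkable to observers
print(open_commitment(c_repo, identity, r_repo))   # True: both commitments bind the same identity
print(open_commitment(c_cert, identity, r_cert))   # True
```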