Title: Warning users about cyber threats through sounds
Abstract: This paper reports a formative evaluation of auditory representations of cyber security threat indicators and cues, referred to as sonifications, to warn users about cyber threats. Most Internet browsers provide visual cues and textual warnings to help users identify when they are at risk. Although these warning mechanisms are generally effective in informing users, there are situations and circumstances in which they fail to draw the user’s attention: (1) security warnings and features (e.g., blocking malicious Websites) might overwhelm a typical Internet user, who may then overlook or ignore visual and textual warnings and, as a result, be targeted; (2) these visual cues are inaccessible to certain users, such as those with visual impairments. This work is motivated by our previous work on sonification of security warnings for users who are visually impaired. To investigate the usefulness of sonification in general security settings, this work uses real Websites instead of simulated Web applications, with sighted participants. The study targets sonification for three different types of security threats: (1) phishing, (2) malware downloading, and (3) form filling. The results show that, on average, 58% of the participants were able to correctly remember what the sonification conveyed. Additionally, about 73% of the participants were able to correctly identify the threat that the sonification represented while performing tasks on real Websites. Furthermore, the paper introduces “CyberWarner”, a sonification sandbox that can be installed on the Google Chrome browser to enable auditory representations of certain security threats and cues that are designed based on several URL heuristics.
Article highlights:
It is feasible to develop sonified cyber security threat indicators that users intuitively understand with minimal experience and training.
Users are more cautious about malicious activities in general; however, when navigating real Websites, they are less informed. This might be due to the appearance of the Websites being navigated or to being overwhelmed while performing tasks.
Participants’ qualitative responses indicate that even when they did not remember what the sonification conveyed, it captured their attention and prompted them to take safe actions in response.
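The paper does not include CyberWarner’s source code, but the idea of a browser add-on that applies URL heuristics and responds with a sonification can be sketched. The TypeScript content-script sketch below is illustrative only: the specific heuristics, threat categories, thresholds, and sound file names are assumptions, not the authors’ implementation.

```typescript
// Hypothetical content-script sketch: flag suspicious pages and play a warning sound.
// The heuristics and file names below are illustrative assumptions, not CyberWarner's rules.

type ThreatType = "phishing" | "malware-download" | "form-filling";

// A few common URL heuristics from the phishing literature (simplified).
function checkPhishingHeuristics(url: URL): boolean {
  const ipHost = /^\d{1,3}(\.\d{1,3}){3}$/.test(url.hostname); // raw IP instead of a domain name
  const manySubdomains = url.hostname.split(".").length > 4;   // e.g. paypal.com.evil.example.net
  const hasAtSign = url.href.includes("@");                    // user-info trick in the URL
  return ipHost || manySubdomains || hasAtSign;
}

function detectThreat(url: URL, doc: Document): ThreatType | null {
  if (checkPhishingHeuristics(url)) return "phishing";
  // Password field on a page served over plain HTTP -> risky form filling.
  if (url.protocol === "http:" && doc.querySelector('input[type="password"]')) {
    return "form-filling";
  }
  // Detection of risky downloads is omitted in this sketch.
  return null;
}

// Map each threat to a distinct sonification bundled with the extension (placeholder paths).
const sonifications: Record<ThreatType, string> = {
  "phishing": "sounds/phishing.ogg",
  "malware-download": "sounds/malware.ogg",
  "form-filling": "sounds/form.ogg",
};

function warn(threat: ThreatType): void {
  const audio = new Audio(chrome.runtime.getURL(sonifications[threat]));
  audio.play();
}

const threat = detectThreat(new URL(window.location.href), document);
if (threat !== null) {
  warn(threat);
}
```

A real extension would additionally need a manifest declaring the bundled audio as web-accessible resources and permissions for the pages it inspects.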
Award ID(s):
1347521 1564293
PAR ID:
10254362
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
Springer Science + Business Media
Date Published:
Journal Name:
SN Applied Sciences
Volume:
3
Issue:
7
ISSN:
2523-3963
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The Internet enables users to access vast resources, but it can also expose them to harmful cyber-attacks. It is imperative that users be informed about a security incident in a timely manner in order to make proper decisions. Visualization of security threats and warnings is one of the effective ways to inform users. However, visual cues are not always accessible to all users, in particular those with visual impairments. This late-breaking-work paper hypothesizes that the use of proper sounds in conjunction with visual cues can better represent security alerts to all users. Toward validating this hypothesis, we first describe a methodology, referred to as sonification, to effectively design and develop auditory cyber-security threat indicators to warn users about cyber-attacks. Next, we present a case study, along with the results, of various types of usability testing conducted with a number of Internet users who are visually impaired. The presented concept can be viewed as a general framework for the creation and evaluation of human factor interactions with sounds in a cyber-space domain. The paper concludes with a discussion of future steps to enhance this work.
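The abstract above describes designing auditory threat indicators but not the specific sound design. As a rough illustration of how distinct, recognizable cues might be rendered in a browser, here is a minimal Web Audio sketch in TypeScript; the threat names, frequencies, and durations are assumptions made for the example, not the sonifications evaluated in the study.

```typescript
// Illustrative sketch: render a distinct tone pattern per threat type with the Web Audio API.
// Frequency/duration choices are assumptions, not the sound designs from the paper.

const ctx = new AudioContext(); // browsers may require a user gesture before audio starts

// Each threat gets its own sequence of (frequency in Hz, duration in seconds) pairs.
const patterns: Record<string, Array<[number, number]>> = {
  "phishing": [[880, 0.15], [660, 0.15], [880, 0.15]],   // rapid alternating tones
  "malware-download": [[220, 0.4], [180, 0.4]],          // low descending tones
  "form-filling": [[523, 0.2], [523, 0.2]],              // two short mid-range beeps
};

function playPattern(threat: string): void {
  const pattern = patterns[threat];
  if (!pattern) return;
  let start = ctx.currentTime;
  for (const [freq, dur] of pattern) {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = freq;
    osc.connect(gain);
    gain.connect(ctx.destination);
    gain.gain.setValueAtTime(0.2, start); // keep the level comfortable
    osc.start(start);
    osc.stop(start + dur);
    start += dur + 0.05;                  // small gap between tones
  }
}

playPattern("phishing");
```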
  2. Improving end-users’ awareness of cybersecurity warnings (e.g., phishing and malware alerts) remains a longstanding problem in usable security. Prior work suggests two key weaknesses with existing warnings: they are primarily communicated via saturated communication channels (e.g., visual, auditory, and vibrotactile); and they are communicated rationally, not viscerally. We hypothesized that wrist-based affective haptics should address both of these weaknesses in a form factor that is practically deployable: i.e., as a replaceable wristband compatible with modern smartwatches like the Apple Watch. To that end, we designed and implemented Spidey Sense, a wristband that produces customizable squeezing sensations to alert users to urgent cybersecurity warnings. To evaluate Spidey Sense, we applied a three-phased ‘Gen-Rank-Verify’ study methodology with 48 participants. We found evidence that, relative to vibrotactile alerts, Spidey Sense was considered more appropriate for the task of alerting people to cybersecurity warnings.
  3. Navigating unfamiliar websites is challenging for users with visual impairments. Although many websites offer visual cues to facilitate access to pages/features that most websites are expected to have (e.g., log in at the top right), such visual shortcuts are not accessible to users with visual impairments. Moreover, although such pages serve the same functionality across websites (e.g., to log in, to sign up), the location, wording, and navigation path of links to these pages vary from one website to another. Such inconsistencies are challenging for users with visual impairments, especially for users of screen readers, who often need to listen linearly to the content of pages to figure out how to access certain website features. To study how to improve access to main website features, we iteratively designed and tested a command-based approach to these features via a browser extension powered by machine learning and human input. The browser extension gives users a way to access high-level website features (e.g., log in, find stores, contact) via keyboard commands. We tested the browser extension in a lab setting with 15 Internet users, including 9 users with visual impairments and 6 without. Our study showed that commands for main website features can greatly improve the experience of users with visual impairments. People without visual impairments also found command-based access helpful when visiting unfamiliar, cluttered, or infrequently visited websites, suggesting that this approach can support users with visual impairments while also benefiting other user groups (i.e., universal design). Our study also reveals concerns about the handling of unsupported commands and the availability and trustworthiness of human input. We discuss how websites, browsers, and assistive technologies could incorporate a command-based paradigm to enhance web accessibility and provide more consistency on the web, benefiting users with varied abilities when navigating unfamiliar or complex websites.
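The abstract above does not detail the extension’s implementation. A minimal sketch of the command-based idea follows: a keyboard shortcut is mapped to a high-level feature (log in, sign up, contact, search), and the script moves focus to a matching link. The key bindings and the keyword-based lookup are placeholders; in the study, feature locations come from machine learning plus human input.

```typescript
// Sketch of command-based access to common website features.
// The per-site mapping is hard-coded here for illustration; the studied extension
// derives it from machine learning and human input.

type Feature = "login" | "signup" | "contact" | "search";

// Hypothetical bindings: Alt+key triggers the command.
const bindings: Record<string, Feature> = {
  l: "login",
  u: "signup",
  c: "contact",
  s: "search",
};

// Placeholder lookup based on link/button text.
function findFeatureElement(feature: Feature): HTMLElement | null {
  const keywords: Record<Feature, string[]> = {
    login: ["log in", "login", "sign in"],
    signup: ["sign up", "register", "create account"],
    contact: ["contact", "contact us"],
    search: ["search"],
  };
  const candidates = Array.from(document.querySelectorAll<HTMLElement>("a, button"));
  return candidates.find(el =>
    keywords[feature].some(k => el.textContent?.toLowerCase().includes(k))
  ) ?? null;
}

document.addEventListener("keydown", (e) => {
  if (!e.altKey) return;
  const feature = bindings[e.key.toLowerCase()];
  if (!feature) return;
  const el = findFeatureElement(feature);
  if (el) {
    el.focus();          // move keyboard/screen-reader focus to the feature
    el.scrollIntoView();
  }
  // Unsupported commands would need explicit feedback, a concern the study raises.
});
```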
  4. Social touch is a common method of communication between individuals, but touch cues alone provide only a glimpse of the entire interaction. Visual and auditory cues are also present in these interactions, and increase the expressiveness and recognition of the conveyed information. However, most mediated touch interactions have focused on providing only haptic cues to the user. Our research addresses this gap by adding visual cues to a mediated social touch interaction through an array of LEDs attached to a wearable device. This device consists of an array of voice-coil actuators that present normal force to the user’s forearm to recreate the sensation of social touch gestures. We conducted a human subject study (N = 20) to determine the relative importance of the touch and visual cues. Our results demonstrate that visual cues, particularly color and pattern, significantly enhance perceived realism, as well as alter perceived touch intensity, valence, and dominance of the mediated social touch. These results illustrate the importance of closely integrating multisensory cues to create more expressive and realistic virtual interactions. 
  5. In virtual environments, many social cues (e.g., gestures, eye contact, and proximity) are currently conveyed visually or auditorily. Indicating social cues in other modalities, such as haptic cues to complement visual or audio signals, will help increase VR’s accessibility and take advantage of the platform’s inherent flexibility. However, accessibility implementations in social VR are often siloed by single sensory modalities. To broaden the accessibility of social virtual reality beyond replacing one sensory modality with another, we identified a subset of social cues and built tools to enhance them, allowing users to switch between modalities to choose how these cues are represented. Because consumer VR uses primarily visual and auditory stimuli, we started with social cues that were not accessible to blind and low vision (BLV) and d/Deaf and hard of hearing (DHH) people, and expanded how they could be represented to accommodate a range of needs. We describe how these tools were designed around the principle of social cue switching, and a standard distribution method to amplify reach.
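The abstract above describes “social cue switching” at a conceptual level. The sketch below illustrates one way the underlying routing could look: each detected social cue is dispatched to whichever modality the user has selected for it. The cue names, modalities, and renderer bodies are assumptions for illustration, not the authors’ tools.

```typescript
// Illustrative sketch of social cue switching: each cue can be rendered in more than
// one modality, and the user chooses the mapping. All names here are assumptions.

type Modality = "visual" | "audio" | "haptic" | "text";
type SocialCue = "eye-contact" | "gesture" | "proximity";

// User preferences, e.g. a BLV user routing visual cues to audio and haptics.
const preferences: Record<SocialCue, Modality> = {
  "eye-contact": "audio",
  "gesture": "haptic",
  "proximity": "audio",
};

// One renderer per modality; bodies are placeholders for engine-specific output.
const renderers: Record<Modality, (cue: SocialCue, from: string) => void> = {
  visual: (cue, from) => console.log(`[visual] highlight ${from} for ${cue}`),
  audio: (cue, from) => console.log(`[audio] play earcon for ${cue} from ${from}`),
  haptic: (cue, from) => console.log(`[haptic] pulse controller for ${cue} from ${from}`),
  text: (cue, from) => console.log(`[text] caption: ${from} -> ${cue}`),
};

// When the VR runtime detects a cue, dispatch it to the modality the user selected.
function dispatchCue(cue: SocialCue, fromAvatar: string): void {
  renderers[preferences[cue]](cue, fromAvatar);
}

dispatchCue("eye-contact", "Avatar-7");
```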