
Title: Warning users about cyber threats through sounds
Abstract: This paper reports a formative evaluation of auditory representations of cyber security threat indicators and cues, referred to as sonifications, to warn users about cyber threats. Most Internet browsers provide visual cues and textual warnings to help users identify when they are at risk. Although these warning mechanisms are generally effective in informing users, there are situations in which they fail to draw the user’s attention: (1) security warnings and features (e.g., blocking malicious Websites) can overwhelm a typical Internet user, who may then overlook or ignore visual and textual warnings and, as a result, be targeted; (2) visual cues are inaccessible to certain users, such as those with visual impairments. This work is motivated by our previous work on sonifying security warnings for users who are visually impaired. To investigate the usefulness of sonification in general security settings, this study uses real Websites, rather than simulated Web applications, with sighted participants. The study targets sonifications for three types of security threat: (1) phishing, (2) malware downloading, and (3) form filling. The results show that, on average, 58% of the participants were able to correctly remember what the sonification conveyed. Additionally, about 73% of the participants were able to correctly identify the threat that the sonification represented while performing tasks on real Websites. Furthermore, the paper introduces “CyberWarner”, a sonification sandbox that can be installed in the Google Chrome browser to enable auditory representations of certain security threats and cues, designed based on several URL heuristics.
Article highlights: It is feasible to develop sonified cyber security threat indicators that users intuitively understand with minimal experience and training.
Users are more cautious about malicious activities in general; however, when navigating real Websites they are less well informed, possibly because of the appearance of the Websites they navigate or because they are absorbed in performing tasks. Participants’ qualitative responses indicate that even when they did not remember what a sonification conveyed, the sonification still captured their attention and led them to take safe actions in response.
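The abstract does not enumerate the URL heuristics behind CyberWarner. As an illustration only, the sketch below flags URLs using common heuristics of the kind such a tool might sound an alert for (raw IP hosts, '@' tricks, direct executable downloads, deep subdomain nesting); the function name, the rules, and the returned threat labels are all hypothetical.

```python
from urllib.parse import urlparse
import re

# Hypothetical URL heuristics of the kind a sonification sandbox could use
# to decide when to play a warning sound; the paper's actual rules are not
# specified in this abstract.
def url_warning(url: str):
    host = urlparse(url).hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return "phishing"          # raw IP address instead of a domain name
    if "@" in url.split("//", 1)[-1].split("/", 1)[0]:
        return "phishing"          # '@' in the authority hides the real host
    if url.lower().endswith((".exe", ".scr", ".msi")):
        return "malware-download"  # link points straight at an executable
    if host.count(".") >= 4:
        return "phishing"          # unusually deep subdomain nesting
    return None                    # no heuristic fired: stay silent
```

In a real browser extension such checks would run in JavaScript against each requested URL, with a distinct sound cued per returned threat label.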
Authors:
Award ID(s):
1347521 1564293
Publication Date:
NSF-PAR ID:
10281767
Journal Name:
SN Applied Sciences
Volume:
3
Issue:
7
ISSN:
2523-3963
Sponsoring Org:
National Science Foundation
More Like this
  1. The Internet enables users to access vast resources, but it can also expose users to harmful cyber-attacks. It is imperative that users be informed about a security incident in a timely manner in order to make proper decisions. Visualization of security threats and warnings is one of the effective ways to inform users. However, visual cues are not always accessible to all users, and in particular, those with visual impairments. This late-breaking-work paper hypothesizes that the use of proper sounds in conjunction with visual cues can better represent security alerts to all users. Toward our research goal to validate this hypothesis, we first describe a methodology, referred to as sonification, to effectively design and develop auditory cyber-security threat indicators to warn users about cyber-attacks. Next, we present a case study, along with the results, of various types of usability testing conducted on a number of Internet users who are visually impaired. The presented concept can be viewed as a general framework for the creation and evaluation of human factor interactions with sounds in a cyber-space domain. The paper concludes with a discussion of future steps to enhance this work.
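The abstract above describes a methodology for designing auditory threat indicators but does not specify the sound designs themselves. The sketch below merely illustrates one way a sonification mapping could assign each threat class a distinct pitch and pulse pattern; every frequency and pulse count here is invented for illustration.

```python
import math

# Illustrative sonification mapping (not the paper's actual designs): each
# threat class gets its own pitch and pulse count so that users can tell
# warnings apart by ear alone.
THREAT_SOUNDS = {
    "phishing":         {"freq_hz": 880.0, "pulses": 3},  # high and urgent
    "malware-download": {"freq_hz": 440.0, "pulses": 2},
    "form-filling":     {"freq_hz": 220.0, "pulses": 1},  # low and gentle
}

def synth_pulse(freq_hz: float, n_samples: int = 4410, rate: int = 44100):
    """Return one sine pulse as a list of float samples in [-1, 1]."""
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n_samples)]

def threat_to_samples(threat: str):
    """Render a threat's warning as raw audio samples (pulse/gap pattern)."""
    spec = THREAT_SOUNDS[threat]
    pulse = synth_pulse(spec["freq_hz"])
    silence = [0.0] * len(pulse)
    samples = []
    for _ in range(spec["pulses"]):
        samples += pulse + silence   # the pulse count encodes the class
    return samples
```

The sample lists could be written to a WAV file or fed to an audio API; the key design point is that pitch and rhythm, not words, carry the threat identity.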
  2. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic aspects (route planning) of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources that can be used in other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With recent growth and advances of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver’s forward field of view, allowing them to gather information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. Specifically, an impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals who have greater levels of spatial knowledge acquisition are more capable of navigating based on their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map acts as a social function in which individuals are able to navigate for others, provide verbal directions and sketch direction maps (Hill, 1987).
Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues) to assess the plausibility and justification for investment in generating larger-FOV AR HUDs with potentially multiple focal planes. Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but at a higher rate of perceived workload. However, in terms of perceived workload, results suggest there is no significant difference between interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes under both conditions is statistically similar, indicating participants using both interface designs remembered the same proportion of on-route scenes.
Deviance analysis showed that only maneuver direction had an influence on landmark knowledge testing performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants is similar under both conditions. Finally, participants exhibited poorer performance in the route knowledge task than in the landmark knowledge task (independent of HUD interface design). Conclusions: This study described a driving simulator experiment that evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have similar impacts on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective. This study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
  3. The introduction of advanced technologies has made driving a more automated activity. However, most vehicles are not designed with cybersecurity considerations and hence are susceptible to cyberattacks. When such incidents happen, it is critical for drivers to respond properly. The goal of this study was to observe drivers’ responses to unexpected vehicle cyberattacks while driving in a simulated environment and to gain deeper insights into their perceptions of vehicle cybersecurity. Ten participants completed the experiment, and the results showed that they perceived and responded differently to each vehicle cyberattack. Participants correctly identified the cybersecurity issue and acted accordingly when the issue caused a noticeable visual and auditory response. Participants preferred to be clearly informed about what happened and what to do through a combination of visual, tactile, and auditory warnings. The lack of knowledge of vehicle cybersecurity was evident among participants.
  4. Improving end-users’ awareness of cybersecurity warnings (e.g., phishing and malware alerts) remains a longstanding problem in usable security. Prior work suggests two key weaknesses with existing warnings: they are primarily communicated via saturated communication channels (e.g., visual, auditory, and vibrotactile); and, they are communicated rationally, not viscerally. We hypothesized that wrist-based affective haptics should address both of these weaknesses in a form-factor that is practically deployable: i.e., as a replaceable wristband compatible with modern smartwatches like the Apple Watch. To that end, we designed and implemented Spidey Sense, a wristband that produces customizable squeezing sensations to alert users to urgent cybersecurity warnings. To evaluate Spidey Sense, we applied a three-phased ‘Gen-Rank-Verify’ study methodology with 48 participants. We found evidence that, relative to vibrotactile alerts, Spidey Sense was considered more appropriate for the task of alerting people to cybersecurity warnings.
  5. Malicious software (malware) poses a significant threat to the security of our networks and users. In the ever-evolving malware landscape, Excel 4.0 Office macros (XL4) have recently become an important attack vector. These macros are often hidden within apparently legitimate documents and under several layers of obfuscation. As such, they are difficult to analyze using static analysis techniques. Moreover, the analysis in a dynamic analysis environment (a sandbox) is challenging because the macros execute correctly only under specific environmental conditions that are not always easy to create. This paper presents SYMBEXCEL, a novel solution that leverages symbolic execution to deobfuscate and analyze Excel 4.0 macros automatically. Our approach proceeds in three stages: (1) the malicious document is parsed and loaded in memory; (2) our symbolic execution engine executes the XL4 formulas; and (3) our engine concretizes any symbolic values encountered during the symbolic exploration, thereby evaluating the execution of each macro under a broad range of (meaningful) environment configurations. SYMBEXCEL significantly outperforms existing deobfuscation tools, allowing us to reliably extract Indicators of Compromise (IoCs) and other critical forensics information. Our experiments demonstrate the effectiveness of our approach, especially in deobfuscating novel malicious documents that make heavy use of environment variables and are often not identified by commercial anti-virus software.
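As a rough illustration of the concretization idea in stage (3), and not SYMBEXCEL's actual engine, the sketch below runs a formula under every candidate value of its symbolic environment variables, so that a macro whose payload is guarded by an environment check is still observed on every branch. The variable names and candidate values are invented for the example.

```python
from itertools import product

# Toy concretization loop: when an obfuscated formula reads an unknown
# environment value, try each candidate concrete value so that every
# guarded branch of the macro is exercised.
def explore(formula, symbolic_vars):
    """formula: callable taking a dict of concrete environment values.
    symbolic_vars: {name: [candidate values]}. Returns all observed results."""
    names = list(symbolic_vars)
    results = set()
    for combo in product(*(symbolic_vars[n] for n in names)):
        env = dict(zip(names, combo))
        results.add(formula(env))      # one concrete run per environment
    return results

# Hypothetical macro that only reveals its payload under one OS value.
payloads = explore(
    lambda env: "payload" if env["OS"] == "Windows" else "benign",
    {"OS": ["Windows", "Mac"]},
)
```

A real engine works on symbolic expressions and constraint solving rather than brute-force enumeration, but the effect is the same: the payload path is reached even when the sandbox's own environment would not trigger it.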