Title: “Every Website Is a Puzzle!”: Facilitating Access to Common Website Features for People with Visual Impairments
Navigating unfamiliar websites is challenging for users with visual impairments. Although many websites offer visual cues to facilitate access to pages and features that most websites are expected to have (e.g., log in at the top right), such visual shortcuts are not accessible to users with visual impairments. Moreover, although such pages serve the same functionality across websites (e.g., to log in, to sign up), the location, wording, and navigation path of links to these pages vary from one website to another. Such inconsistencies are challenging for users with visual impairments, especially for screen reader users, who often need to listen linearly through page content to figure out how to access certain website features. To study how to improve access to main website features, we iteratively designed and tested a command-based approach to main website features via a browser extension powered by machine learning and human input. The browser extension gives users a way to access high-level website features (e.g., log in, find stores, contact) via keyboard commands. We tested the browser extension in a lab setting with 15 Internet users, including 9 with visual impairments and 6 without. Our study showed that commands for main website features can greatly improve the experience of users with visual impairments. People without visual impairments also found command-based access helpful when visiting unfamiliar, cluttered, or infrequently visited websites, suggesting that this approach can support users with visual impairments while also benefiting other user groups (i.e., universal design). Our study also reveals concerns about the handling of unsupported commands and the availability and trustworthiness of human input. We discuss how websites, browsers, and assistive technologies could incorporate a command-based paradigm to enhance web accessibility and provide more consistency on the web, benefiting users with varied abilities when navigating unfamiliar or complex websites.
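The extension itself is not published in this record, but the core idea of mapping a high-level command (e.g., "log in") to a candidate link on the current page can be sketched as follows. This is a minimal, hypothetical illustration: the command names, synonym lists, and scoring heuristic are assumptions, whereas the actual system described in the abstract combines machine learning with human input.

```python
# Hypothetical sketch only: maps a high-level command (e.g., "log in") to a
# candidate link on a page with a hand-written keyword-synonym heuristic.
# The command names, synonym lists, and scoring below are assumptions; the
# system described in the paper combines machine learning with human input.
from bs4 import BeautifulSoup

COMMAND_SYNONYMS = {
    "log in": ["log in", "login", "sign in", "signin", "my account"],
    "sign up": ["sign up", "signup", "register", "create account"],
    "contact": ["contact", "contact us", "support"],
    "find stores": ["store locator", "find a store", "our locations"],
}

def find_link_for_command(html: str, command: str):
    """Return the href of the anchor whose visible text best matches the command."""
    soup = BeautifulSoup(html, "html.parser")
    synonyms = COMMAND_SYNONYMS.get(command, [command])
    best_href, best_score = None, 0
    for anchor in soup.find_all("a", href=True):
        text = " ".join(anchor.get_text(" ", strip=True).lower().split())
        # Score by the length of the longest synonym contained in the link text.
        score = max((len(s) for s in synonyms if s in text), default=0)
        if score > best_score:
            best_href, best_score = anchor["href"], score
    return best_href

if __name__ == "__main__":
    page = '<a href="/account/signin">Sign in</a> <a href="/stores">Store locator</a>'
    print(find_link_for_command(page, "log in"))       # /account/signin
    print(find_link_for_command(page, "find stores"))  # /stores
```

In a real extension, such a lookup would be bound to keyboard shortcuts in a content script and the result announced to the screen reader.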
Award ID(s): 2028387
NSF-PAR ID: 10428944
Author(s) / Creator(s): ; ; ;
Date Published:
Journal Name: ACM Transactions on Accessible Computing
Volume: 15
Issue: 3
ISSN: 1936-7228
Page Range / eLocation ID: 1 to 35
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    This paper reports a formative evaluation of auditory representations of cyber security threat indicators and cues, referred to as sonifications, to warn users about cyber threats. Most Internet browsers provide visual cues and textual warnings to help users identify when they are at risk. Although these alerting mechanisms are effective at informing users, there are situations in which they fail to draw the user’s attention: (1) security warnings and features (e.g., blocking malicious Websites) may overwhelm a typical Internet user, who may then overlook or ignore visual and textual warnings and, as a result, be targeted; (2) these visual cues are inaccessible to certain users, such as those with visual impairments. This work is motivated by our previous work on the use of sonification of security warnings for users who are visually impaired. To investigate the usefulness of sonification in general security settings, this work uses real Websites, rather than simulated Web applications, with sighted participants. The study targets sonification for three different types of security threats: (1) phishing, (2) malware downloading, and (3) form filling. The results show that, on average, 58% of the participants were able to correctly remember what the sonification conveyed. Additionally, about 73% of the participants were able to correctly identify the threat that the sonification represented while performing tasks on real Websites. Furthermore, the paper introduces “CyberWarner”, a sonification sandbox that can be installed in the Google Chrome browser to enable auditory representations of certain security threats and cues, designed based on several URL heuristics.

    Article highlights

    It is feasible to develop sonified cyber security threat indicators that users intuitively understand with minimal experience and training.

    Users are cautious about malicious activities in general; however, when navigating real Websites they are less well informed. This may be due to the appearance of the Websites being navigated or to users being overwhelmed while performing tasks.

    Participants’ qualitative responses indicate that even when they did not remember what the sonification conveyed, it still captured their attention and prompted them to take safe actions in response.
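    The URL heuristics behind CyberWarner are only summarized above; as a rough, hypothetical illustration of that kind of rule-based check, the sketch below maps a URL to one of the three threat types before any sound would be played. The specific rules, thresholds, and labels are assumptions, not the published design.

```python
# Hypothetical sketch of URL heuristics of the kind CyberWarner is described as
# using to decide when to sonify a warning; the rules, thresholds, and labels
# below are illustrative assumptions rather than the tool's actual design.
from urllib.parse import urlparse

RISKY_EXTENSIONS = (".exe", ".scr", ".msi", ".vbs")

def classify_url(url: str):
    """Return a coarse threat label for a URL, or None if no heuristic fires."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    path = parsed.path.lower()
    if host.replace(".", "").isdigit():        # raw IP address instead of a domain name
        return "phishing"
    if host.count("-") >= 3 or "@" in url:      # oddly formed host or embedded credentials
        return "phishing"
    if path.endswith(RISKY_EXTENSIONS):         # direct download of an executable file
        return "malware-downloading"
    if parsed.scheme != "https" and "login" in path:
        return "form-filling"                   # credential form on an insecure page
    return None

if __name__ == "__main__":
    for url in ["http://192.0.2.7/index.html",
                "https://cdn.example.com/setup.exe",
                "http://example.com/login"]:
        print(url, "->", classify_url(url))  # phishing, malware-downloading, form-filling
```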

  2. Web data items such as shopping products, classifieds, and job listings are indispensable components of most e-commerce websites. The information on these data items is typically distributed over two or more webpages, e.g., a ‘Query-Results’ page showing summaries of the items and ‘Details’ pages containing full information about the items. While this organization of data mitigates information overload and visual clutter for sighted users, it increases the interaction overhead and effort for blind users, as back-and-forth navigation between webpages using screen reader assistive technology is tedious and cumbersome. Existing usability-enhancing solutions are unable to provide adequate support in this regard, as they predominantly focus on enabling efficient content access within a single webpage and as such are not tailored for content distributed across multiple webpages. As an initial step towards addressing this issue, we developed AutoDesc, a browser extension that leverages a custom extraction model to automatically detect and pull out additional item descriptions from the ‘Details’ pages and then proactively inject the extracted information into the ‘Query-Results’ page, thereby reducing the amount of back-and-forth screen reader navigation between the two webpages. In a study with 16 blind users, we observed that, within the same time duration, participants were able to peruse significantly more data items on average with AutoDesc than with their preferred screen readers alone or with a state-of-the-art solution.
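    AutoDesc's extraction model is only summarized above. As a rough, hypothetical stand-in, the sketch below shows how a Details page might be fetched and a short description pulled out of it; the selectors, length threshold, and fallback rule are assumptions, and the real extension runs its custom model inside the browser and injects the result into the Query-Results page.

```python
# Hypothetical sketch of the "fetch a Details page and pull out extra item
# description text" step that AutoDesc is described as automating. The CSS
# selectors, length threshold, and truncation are assumptions; the actual
# extension uses a custom extraction model and runs inside the browser.
import requests
from bs4 import BeautifulSoup

def extract_description(details_url: str, max_chars: int = 300) -> str:
    """Fetch a Details page and return a short description of the item."""
    html = requests.get(details_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Prefer an explicit description block if one exists; otherwise fall back
    # to the first reasonably long paragraph on the page.
    node = soup.select_one("#description, .product-description")
    if node is None:
        node = next((p for p in soup.find_all("p")
                     if len(p.get_text(strip=True)) > 80), None)
    text = node.get_text(" ", strip=True) if node else ""
    return text[:max_chars]

# The extension would then insert the returned text next to the matching item
# summary on the Query-Results page, so screen-reader users hear it in place
# without navigating to the Details page.
```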
  3. To counteract the ads and third-party tracking ubiquitous on the web, users turn to blocking tools---ad-blocking and tracking-protection browser extensions and built-in features. Unfortunately, blocking tools can cause non-ad, non-tracking elements of a website to degrade or fail, a phenomenon termed breakage. Examples include missing images, non-functional buttons, and pages failing to load. While the literature frequently discusses breakage, prior work has not systematically mapped and disambiguated the spectrum of user experiences subsumed under "breakage," nor sought to understand how users experience, prioritize, and attempt to fix breakage. We fill these gaps. First, through qualitative analysis of 18,932 extension-store reviews and GitHub issue reports for ten popular blocking tools, we developed novel taxonomies of 38 specific types of breakage and 15 associated mitigation strategies. To understand subjective experiences of breakage, we then conducted a 95-participant survey. Nearly all participants had experienced various types of breakage, and they employed an array of strategies of variable effectiveness in response to specific types of breakage in specific contexts. Unfortunately, participants rarely notified anyone who could fix the root causes. We discuss how our taxonomies and results can improve the comprehensiveness and prioritization of ongoing attempts to automatically detect and fix breakage. 
  4. Website privacy policies sometimes provide users the option to opt out of certain collections and uses of their personal data. Unfortunately, many privacy policies bury these instructions deep in their text, and few web users have the time or skill necessary to discover them. We describe a method for the automated detection of opt-out choices in privacy policy text and their presentation to users through a web browser extension. We describe the creation of two corpora of opt-out choices, which enable the training of classifiers to identify opt-outs in privacy policies. Our overall approach for extracting and classifying opt-out choices combines heuristics to identify commonly found opt-out hyperlinks with supervised machine learning to automatically identify less conspicuous instances. Our approach achieves a precision of 0.93 and a recall of 0.9. We introduce Opt-Out Easy, a web browser extension designed to present available opt-out choices to users as they browse the web. We evaluate the usability of our browser extension with a user study. We also present results of a large-scale analysis of opt-outs found in the text of thousands of the most popular websites.
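    The combination of heuristics and supervised learning described above can be illustrated with a toy sketch; the regular expression, the tiny training set, and the choice of a TF-IDF plus logistic-regression classifier are assumptions for illustration and are not the corpora or models used in the paper.

```python
# Hypothetical sketch of combining a high-precision heuristic with a supervised
# classifier to flag opt-out links. The regular expression, toy training data,
# and model choice are illustrative assumptions, not the paper's corpora or classifier.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

OPT_OUT_PATTERN = re.compile(r"\bopt[\s-]?out\b|\bdo not sell\b", re.IGNORECASE)

def heuristic_opt_out(anchor_text: str) -> bool:
    """Rule for conspicuous opt-out links (e.g., 'opt out', 'do not sell')."""
    return bool(OPT_OUT_PATTERN.search(anchor_text))

# Supervised fallback for less conspicuous phrasing, trained on toy examples.
train_texts = [
    "opt out of interest-based advertising",
    "do not sell my personal information",
    "manage your advertising preferences",
    "unsubscribe from marketing emails",
    "read our cookie policy",
    "terms of service",
    "careers at our company",
    "contact customer support",
]
train_labels = [1, 1, 1, 1, 0, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def is_opt_out(anchor_text: str) -> bool:
    """Flag a link as an opt-out choice if either the rule or the model says so."""
    return heuristic_opt_out(anchor_text) or bool(model.predict([anchor_text])[0])

print(is_opt_out("Do Not Sell My Personal Information"))  # True via the heuristic
```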
  5. Abstract Objective. Brain–computer interfaces (BCIs) show promise as a direct line of communication between the brain and the outside world that could benefit those with impaired motor function. But the commands available for BCI operation are often limited by the ability of the decoder to differentiate between the many distinct motor or cognitive tasks that can be visualized or attempted. Simple binary command signals (e.g., right hand at rest versus movement) are therefore used due to their ability to produce large observable differences in neural recordings. At the same time, frequent command switching can impose greater demands on the subject’s focus and takes time to learn. Here, we attempt to decode the degree of effort in a specific movement task to produce a graded and more flexible command signal. Approach. Fourteen healthy human subjects (nine male, five female) responded to visual cues by squeezing a hand dynamometer to different levels of predetermined force, guided by continuous visual feedback, while the electroencephalogram (EEG) and grip force were monitored. Movement-related EEG features were extracted and modeled to predict exerted force. Main results. We found that event-related desynchronization (ERD) of the 8–30 Hz mu-beta sensorimotor rhythm of the EEG is separable for different degrees of motor effort. Upon four-fold cross-validation, linear classifiers were found to predict grip force from an ERD vector with mean accuracies across subjects of 53% and 55% for the dominant and non-dominant hand, respectively. ERD amplitude increased with target force but appeared to pass through a trough that hinted at non-monotonic behavior. Significance. Our results suggest that modeling and interactive feedback based on the intended level of motor effort is feasible. The observed ERD trends suggest that different mechanisms may govern intermediate versus low and high degrees of motor effort. This may have utility in rehabilitative protocols for motor impairments.
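    The analysis pipeline summarized above (band-limited 8–30 Hz power as an ERD feature, fed to a linear classifier under four-fold cross-validation) can be sketched on synthetic data as follows; the sampling rate, channel count, filter settings, and use of linear discriminant analysis are assumptions for illustration rather than the paper's exact methods.

```python
# Hypothetical sketch on synthetic data: 8-30 Hz band power as an ERD-style
# feature per channel, classified with a linear model under four-fold
# cross-validation. Sampling rate, channel count, filter order, and the use of
# LDA are assumptions for illustration, not the paper's exact pipeline.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250  # assumed sampling rate in Hz

def mu_beta_log_power(epoch: np.ndarray) -> np.ndarray:
    """Log band power in 8-30 Hz for one (channels x samples) epoch."""
    b, a = butter(4, [8 / (FS / 2), 30 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, epoch, axis=1)
    return np.log(np.var(filtered, axis=1))

# Synthetic stand-in for epoched EEG: 80 trials, 8 channels, 2 s per trial,
# labelled with two coarse grip-force levels.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((80, 8, 2 * FS))
labels = rng.integers(0, 2, size=80)

features = np.array([mu_beta_log_power(e) for e in epochs])
scores = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=4)
print("mean 4-fold accuracy:", scores.mean())  # near chance on random data
```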