Title: Understanding Emerging Obfuscation Technologies in Visual Description Services for Blind and Low Vision People
Blind and low vision people use visual description services (VDS) to gain visual interpretation and build access in a world that privileges sight. Despite their many benefits, VDS carry harmful privacy and security implications. As a result, researchers are suggesting, exploring, and building obfuscation systems that detect and obscure private or sensitive materials. However, as obfuscation depends largely on sight to interpret outcomes, it is unknown whether Blind and low vision people would find such approaches useful. Our work centers the perspectives and opinions of Blind and low vision people on the potential of obfuscation to address privacy concerns in VDS. Reporting on interviews with 20 Blind and low vision people who use VDS, our findings reveal that popular research trends in obfuscation fail to capture the needs of Blind and low vision people. While obfuscation might help users gain more control, tensions around obfuscation misrecognition and confirmation are prominent. We turn to the framework of interdependence to unpack and understand obfuscation in VDS, enabling us to complicate privacy concerns, uncover the labor of Blind and low vision people, and emphasize the importance of safeguards. We provide design directions to move the trajectory of obfuscation research forward.
Award ID(s):
1763297 1552503
PAR ID:
10417760
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
6
Issue:
CSCW2
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 33
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. People who are blind share their images and videos with companies that provide visual assistance technologies (VATs) to gain access to information about their surroundings. A challenge is that people who are blind cannot independently validate the content of the images and videos before they share them, and their visual data commonly contains private content. We examine privacy concerns for blind people who share personal visual data with VAT companies that provide descriptions authored by humans or artificial intelligence (AI). We first interviewed 18 people who are blind about their perceptions of privacy when using both types of VATs. Then we asked the participants to rate 21 types of image content according to their level of privacy concern if the information was shared knowingly versus unknowingly with human- or AI-powered VATs. Finally, we analyzed what information VAT companies communicate to users about their collection and processing of users' personal visual data through their privacy policies. Our findings have implications for the development of VATs that safeguard blind users' visual privacy, and our methods may be useful for other camera-based technology companies and their users.
  2. Walking in environments with stairs and curbs is potentially dangerous for people with low vision. We sought to understand the challenges low vision people face and the strategies and tools they use when navigating such surface-level changes. Using contextual inquiry, we interviewed and observed 14 low vision participants as they completed navigation tasks in two buildings and along two city blocks, walking indoors and outdoors and across four staircases. We found that surface-level changes were a source of uncertainty, and even fear, for all participants. Aside from the white cane, which many participants did not want to use, participants used no technology during the study. Participants mostly relied on their vision, which was exhausting and sometimes deceptive. Our findings highlight the need for systems that support surface-level changes and other depth-perception tasks; such systems should treat low vision people's experiences as distinct from those of blind people, account for their sensitivity to different lighting conditions, and leverage visual enhancements.
  3. How do life experiences impact cortical function? In people who are born blind, the “visual” cortices are recruited for nonvisual tasks such as Braille reading and sound localization (e.g., Collignon et al., 2011; Sadato et al., 1996). The mechanisms of this recruitment are not known. Do visual cortices have a latent capacity to respond to nonvisual information that is equal throughout the lifespan? Alternatively, is there a sensitive period of heightened plasticity that makes visual cortex repurposing possible during childhood? To gain insight into these questions, we leveraged naturalistic auditory stimuli to quantify and compare cross-modal responses in congenitally blind (CB, n=22), adult-onset blind (vision loss after 18 years of age; AB, n=14), and sighted (n=22) individuals. Participants listened to auditory excerpts from movies, a spoken narrative, and matched meaningless auditory stimuli (i.e., shuffled sentences, backwards speech) during fMRI scanning. These rich naturalistic stimuli made it possible to simultaneously engage a broad range of cognitive domains. We correlated the voxel-wise timecourses of different participants within each group. For all groups, all stimulus conditions induced synchrony in auditory cortex, and only the narrative stimuli synchronized responses in higher-cognitive fronto-parietal and temporal regions. Inter-subject synchrony in visual cortices was high in the CB group for the movie and narrative stimuli but not for the meaningless auditory controls. In contrast, visual cortex synchrony was equally low among AB and sighted blindfolded participants. Even many years of blindness in adulthood fail to enable responses to naturalistic auditory information in the visual cortices of people who had sight as children. These findings suggest that cross-modal responses in the visual cortex of people born blind reflect the plasticity of developing visual cortex during a sensitive period.
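  The analysis described above correlates each participant's voxel-wise timecourse with those of the other participants in the group. The abstract does not specify the exact pipeline, but a minimal leave-one-out inter-subject correlation sketch in Python (NumPy only; the array shapes, variable names, and random data are illustrative assumptions, not the authors' code) looks like this:

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out inter-subject correlation (ISC).

    data: array of shape (n_subjects, n_voxels, n_timepoints),
    one preprocessed BOLD timecourse per voxel per subject.
    Returns an (n_subjects, n_voxels) array of Pearson r values:
    each subject's timecourse correlated, voxel by voxel, with
    the mean timecourse of the remaining subjects.
    """
    n_subjects, n_voxels, _ = data.shape
    isc = np.zeros((n_subjects, n_voxels))
    for s in range(n_subjects):
        # Mean timecourse of every subject except subject s
        others = np.delete(data, s, axis=0).mean(axis=0)
        for v in range(n_voxels):
            isc[s, v] = np.corrcoef(data[s, v], others[v])[0, 1]
    return isc

# Hypothetical usage: 22 subjects, 1000 voxels, 300 timepoints
rng = np.random.default_rng(0)
bold = rng.standard_normal((22, 1000, 300))
print(intersubject_correlation(bold).shape)  # (22, 1000)
```

  High within-group ISC in a region (here, visual cortex for the CB group) indicates that the shared stimulus, rather than idiosyncratic noise, is driving that region's responses.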
  4. As social VR applications grow in popularity, blind and low vision users encounter continued accessibility barriers. Yet social VR, which enables multiple people to engage in the same virtual space, presents a unique opportunity to allow other people to support a user's access needs. To explore this opportunity, we designed a framework based on physical sighted guidance that enables a guide to support a blind or low vision user with navigation and visual interpretation. A user can virtually hold on to their guide and move with them, while the guide describes the environment. We studied the use of our framework with 16 blind and low vision participants and found that they had a wide range of preferences. For example, participants wanted to use their guide to support social interactions and to establish a human connection with a human-appearing guide. We also highlight opportunities for novel guidance abilities in VR, such as dynamically altering an inaccessible environment. Through this work, we open a new design space for a versatile approach to making VR fully accessible.
  5. People use videos to learn new recipes, exercises, and crafts. Such videos remain difficult for blind and low vision (BLV) people to follow because they rely on visual comparison. Our observations of visual rehabilitation therapists (VRTs) guiding BLV people through how-to videos revealed that VRTs provide both proactive and responsive support, including detailed descriptions, non-visual workarounds, and progress feedback. We propose Vid2Coach, a system that transforms how-to videos into wearable camera-based assistants that provide accessible instructions and mixed-initiative feedback. From the video, Vid2Coach generates accessible instructions by augmenting narrated instructions with demonstration details and completion criteria for each step. It then uses retrieval-augmented generation to extract relevant non-visual workarounds from BLV-specific resources. Vid2Coach then monitors user progress with a camera embedded in commercial smart glasses to provide context-aware instructions, proactive feedback, and answers to user questions. BLV participants (N=8) using Vid2Coach completed cooking tasks with 58.5% fewer errors than when using their typical workflow and wanted to use Vid2Coach in their daily lives. Vid2Coach demonstrates an opportunity for AI visual assistance that strengthens rather than replaces non-visual expertise.
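  The workaround-retrieval step described above follows the standard retrieval-augmented generation pattern: embed a corpus of BLV-specific tips, then rank them by similarity to the current recipe step. The following Python sketch illustrates the idea (the corpus snippets, model choice, and function names are illustrative assumptions, not Vid2Coach's actual implementation):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical corpus of non-visual cooking workarounds drawn from BLV resources.
workarounds = [
    "Use a talking thermometer to check doneness instead of judging color.",
    "Level dry ingredients with a straight edge to measure without sight.",
    "Listen for the sizzle to quiet down to tell when onions have softened.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = model.encode(workarounds, normalize_embeddings=True)

def retrieve_workarounds(step_text, k=2):
    """Return the k workarounds most relevant to a recipe step,
    ranked by cosine similarity of sentence embeddings."""
    query_emb = model.encode([step_text], normalize_embeddings=True)
    scores = (corpus_emb @ query_emb.T).ravel()  # cosine: embeddings are unit-norm
    return [workarounds[i] for i in np.argsort(-scores)[:k]]

print(retrieve_workarounds("Saute the onions until golden brown."))
```

  Retrieved snippets would then be folded into the step's spoken instruction, so the assistant can surface non-visual strategies alongside the video's own narration.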