“I stepped into a puddle”: Non-Visual Texting in Nomadic Contexts
Despite growing interest in accessible texting for people who are blind and visually impaired (BVI), little is known about the practice of texting on the move, especially while using assistive technologies. To address this gap, we conducted an interview-based study with 20 BVI adults who text while travelling. Our findings revealed that participants engage in texting outside their homes in four recurrent situations: walking to a destination, waiting for public transportation, riding in a vehicle, or approaching a point of interest. Moreover, to safely send a text, participants expressed the need to receive a range of information about their surroundings, including the distance to their destination, upcoming obstacles, traffic jams, and weather conditions. Based on these findings, we examine three modes of situational feedback cues to integrate with messaging applications: text-based, sound effects, and tactile. Our work discusses design directions to enhance the texting experience in nomadic contexts for people who are blind and visually impaired.
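The record does not include an implementation, so the sketch below is purely illustrative: it models the situational information participants asked for (distance to destination, upcoming obstacles, traffic, weather) and shows one hypothetical way to render a cue in each of the three feedback modes the abstract names (text-based, sound effects, tactile). All field names, audio asset names, and vibration patterns are assumptions, not details from the paper.

```python
# Illustrative sketch only -- not the paper's system.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class CueMode(Enum):
    TEXT = auto()      # status line spoken by a screen reader or shown as text
    SOUND = auto()     # short non-speech audio icon
    TACTILE = auto()   # vibration pattern on the phone or a wearable


@dataclass
class SituationalContext:
    # Fields mirror the information participants asked for; names are invented.
    distance_to_destination_m: float
    upcoming_obstacle: Optional[str] = None   # e.g. "curb", "puddle"
    traffic_delay_min: int = 0
    weather: str = "clear"


def render_cue(ctx: SituationalContext, mode: CueMode):
    """Return one feedback cue about the current surroundings in the given mode."""
    if mode is CueMode.TEXT:
        parts = [f"{ctx.distance_to_destination_m:.0f} m to destination"]
        if ctx.upcoming_obstacle:
            parts.append(f"{ctx.upcoming_obstacle} ahead")
        if ctx.traffic_delay_min:
            parts.append(f"traffic delay {ctx.traffic_delay_min} min")
        parts.append(f"weather: {ctx.weather}")
        return "; ".join(parts)
    if mode is CueMode.SOUND:
        # Hypothetical audio assets; the most urgent condition wins.
        return "obstacle_chime.wav" if ctx.upcoming_obstacle else "progress_tick.wav"
    # TACTILE: (vibrate_ms, pause_ms) pairs; a longer double burst flags obstacles.
    return [(400, 100), (400, 100)] if ctx.upcoming_obstacle else [(150, 50)]


# Usage: a text-based cue while walking past a puddle, 80 m from the destination.
print(render_cue(SituationalContext(80, upcoming_obstacle="puddle"), CueMode.TEXT))
```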
- Award ID(s):
- 1909845
- PAR ID:
- 10469288
- Published In:
- W4A '23: Proceedings of the 20th International Web for All Conference, April 2023, pp. 32–43
- Publisher / Repository:
- ACM
- Date Published:
- April 2023
- ISBN:
- 9798400707483
- Page Range / eLocation ID:
- 32 to 43
- Format(s):
- Medium: X
- Location:
- Austin TX USA
- Sponsoring Org:
- National Science Foundation
More Like this
- Texting relies on screen-centric prompts designed for sighted users, which still pose significant barriers to people who are blind and visually impaired (BVI). Can we re-imagine texting untethered from a visual display? In an interview study, 20 BVI adults shared situations surrounding their texting practices, recurrent topics of conversation, and challenges. Informed by these insights, we introduce TextFlow: a mixed-initiative, context-aware system that generates entirely auditory message options relevant to the user's location, activity, and time of day. Users can browse and select suggested aural messages using finger taps supported by an off-the-shelf finger-worn device, without having to hold or attend to a mobile screen. In an evaluative study, 10 BVI participants successfully interacted with TextFlow to browse and send messages in screen-free mode. The experiential response of the users shed light on the importance of bypassing the phone and accessing rapidly controllable messages at their fingertips while preserving privacy and accuracy with respect to speech or screen-based input. We discuss how non-visual access to proactive, contextual messaging can support blind users in a variety of daily scenarios. (An illustrative sketch of this kind of context-aware suggestion appears after this list.)
- We present a multimodal deep learning framework that can generate summary text supporting the main idea of an information graphic for presentation to a person who is blind or visually impaired. The framework utilizes the visual, textual, positional, and size characteristics extracted from the image to create the summary. Different and complementary neural architectures are optimized for each task using crowdsourced training data. From our quantitative experiments and results, we explain the reasoning behind our framework and show the effectiveness of our models. Our qualitative results showcase text generated by our framework and show that Mechanical Turk participants favor it over other automatic and human-generated summaries. We describe the design and results of an experiment to evaluate the utility of our system for people who have visual impairments in the context of understanding Twitter tweets containing line graphs. (An illustrative sketch of multimodal feature fusion appears after this list.)
- Many images on the Web, including photographs and artistic images, feature spatial relationships between objects that are inaccessible to someone who is blind or visually impaired, even when a text description is provided. While some tools exist to manually create accessible image descriptions, this work is time-consuming and requires specialized tools. We introduce an approach that automatically creates spatially registered image labels based on how a sighted person naturally interacts with the image. Our system collects behavioral data from sighted viewers of an image, specifically eye gaze data and spoken descriptions, and uses them to generate a spatially indexed accessible image that can then be explored using an audio-based touch screen application. We describe our approach to assigning text labels to locations in an image based on eye gaze. We then report on two formative studies with blind users testing EyeDescribe. Our approach resulted in correct labels for all objects in our image set. Participants were able to better recall the location of objects when given both object labels and spatial locations. This approach provides a new method for creating accessible images with minimal required effort. (A simplified sketch of gaze-based label registration appears after this list.)
- The College Board's AP Computer Science Principles (CSP) content has become a major new course for introducing K-12 students to the discipline. The course was designed for many reasons, but one major goal was to broaden participation. While significant work toward equity has been completed by many research groups, we know of no systematic analysis of CSP content created by major vendors with respect to accessibility for students with disabilities, especially those who are blind or visually impaired. In this experience report, we discuss two major actions by our team to make CSP more accessible. First, with the help of accessibility experts and teachers, we modified the entire Code.org CSP course to make it accessible. Second, we conducted a one-week professional development workshop in the summer of 2018 for teachers of blind or visually impaired students to help them prepare to teach CSP or support those who do. We report here on lessons learned that are useful to teachers who have blind or visually impaired students in their classes, to AP CSP curriculum providers, and to the College Board.
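For the TextFlow abstract above, the following minimal sketch is not the authors' code: the rule table, gesture names, and message strings are invented to illustrate the general shape of a context-aware message suggester paired with tap-based browsing of aural options.

```python
# Hypothetical sketch of context-aware suggestion plus tap browsing.
from datetime import datetime

# Invented rule table; a deployed system would learn or personalize these options.
SUGGESTIONS = {
    ("bus_stop", "waiting", "morning"): [
        "On my way, just waiting for the bus.",
        "Running a few minutes late.",
    ],
    ("sidewalk", "walking", "evening"): [
        "Almost there, see you soon.",
        "Heading home now.",
    ],
}


def suggest_messages(location: str, activity: str, now: datetime):
    """Pick candidate messages from the current location, activity, and time of day."""
    period = "morning" if now.hour < 12 else "evening"
    return SUGGESTIONS.get((location, activity, period), ["I'll text you shortly."])


class TapBrowser:
    """Cycle through aural suggestions with single taps; a double tap selects."""

    def __init__(self, options):
        self.options = options
        self.index = 0

    def single_tap(self):
        # Advance to the next suggestion; a real system would speak it aloud here.
        self.index = (self.index + 1) % len(self.options)
        return self.options[self.index]

    def double_tap(self):
        # Confirm the current suggestion for sending.
        return self.options[self.index]
```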
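For the multimodal summarization abstract above, the paper names visual, textual, positional, and size features but the listing does not give its architectures; the sketch below is a hedged PyTorch-style illustration of feature fusion only, with all dimensions and layers assumed rather than taken from the paper.

```python
# Assumed dimensions and layers; shows feature fusion, not the paper's models.
import torch
import torch.nn as nn


class MultimodalFusion(nn.Module):
    def __init__(self, visual_dim=512, text_dim=300, pos_dim=4, size_dim=2, hidden=256):
        super().__init__()
        self.visual = nn.Linear(visual_dim, hidden)
        self.text = nn.Linear(text_dim, hidden)
        self.layout = nn.Linear(pos_dim + size_dim, hidden)  # position + size together
        self.joint = nn.Sequential(nn.ReLU(), nn.Linear(3 * hidden, hidden))

    def forward(self, visual_feats, text_feats, pos_feats, size_feats):
        fused = torch.cat(
            [
                self.visual(visual_feats),
                self.text(text_feats),
                self.layout(torch.cat([pos_feats, size_feats], dim=-1)),
            ],
            dim=-1,
        )
        return self.joint(fused)  # joint embedding handed to a text decoder


# Example: one information graphic encoded as fixed-length feature vectors.
model = MultimodalFusion()
z = model(torch.randn(1, 512), torch.randn(1, 300), torch.randn(1, 4), torch.randn(1, 2))
```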
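For the eye-gaze labeling abstract above, here is a simplified sketch, not the EyeDescribe implementation: the nearest-in-time pairing rule and the data shapes are assumptions about how spoken labels could be registered to gaze locations.

```python
# Toy illustration of gaze-based label registration (assumed pairing rule).
from dataclasses import dataclass


@dataclass
class Fixation:
    t: float   # seconds from the start of viewing
    x: float   # image coordinates (pixels)
    y: float


@dataclass
class SpokenLabel:
    t: float   # time the word was uttered
    text: str  # e.g. "dog", "bench"


def register_labels(fixations, labels):
    """Attach each spoken label to the temporally nearest fixation location."""
    indexed = []
    for label in labels:
        nearest = min(fixations, key=lambda f: abs(f.t - label.t))
        indexed.append({"text": label.text, "x": nearest.x, "y": nearest.y})
    return indexed


# Usage: two labels spoken while the viewer looked at two regions of a photo.
points = [Fixation(1.0, 120, 80), Fixation(3.2, 430, 260)]
words = [SpokenLabel(1.1, "dog"), SpokenLabel(3.3, "bench")]
print(register_labels(points, words))
```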