Like many parents, visually impaired parents (VIPs) read books with their children. However, research on accessible reading technologies predominantly focuses on blind adults reading alone or sighted adults reading with blind children, such that the motivations, strategies, and needs of blind parents reading with their sighted children are still largely undocumented. To address this gap, we interviewed 13 VIPs with young children. We found that VIPs (1) sought familial intimacy through reading with their child, often prioritizing intimacy over their own access needs, (2) took on many types of access labor to read with their children, and (3) desired novel assistive technologies (ATs) for reading that prioritize intimacy while reducing access labor. We contribute the notion of Intimate AT, along with a demonstrative design space, which together constitute a new design paradigm that draws attention to intimacy as a facet of both independently and collaboratively accessible ATs.
Navigable Space and Traversable Edges Differentially Influence Reorientation in Sighted and Blind Mice
Reorientation enables navigators to regain their bearings after becoming lost. Disoriented individuals primarily reorient themselves using the geometry of a layout, even when other informative cues, such as landmarks, are present. Yet the specific strategies that animals use to determine geometry are unclear. Moreover, because vision allows subjects to rapidly form precise representations of objects and background, it is unknown whether it has a deterministic role in the use of geometry. In this study, we tested sighted and congenitally blind mice (Ns = 8-11) in various settings in which global shape parameters were manipulated. Results indicated that the navigational affordances of the context (the traversable space) promote sampling of boundaries, which determines the effective use of geometric strategies in both sighted and blind mice. However, blind animals can also effectively reorient themselves using 3D edges by extensively patrolling the borders, even when the traversable space is not limited by these boundaries.
- Award ID(s): 1924732
- PAR ID: 10368040
- Publisher / Repository: SAGE Publications
- Date Published:
- Journal Name: Psychological Science
- Volume: 33
- Issue: 6
- ISSN: 0956-7976
- Page Range / eLocation ID: p. 925-947
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Evaluating the quality of accessible image captions with human raters is difficult: a visually impaired user may not know how comprehensive a caption is, whereas a sighted assistant may not know what information the user will need from it. To explore how image captioners and caption consumers assess caption content, we conducted a series of collaborative captioning sessions in which six pairs, each consisting of a blind person and their sighted partner, worked together to discuss, create, and evaluate image captions. By making captioning a collaborative task, we were able to observe captioning strategies, to elicit questions and answers about image captions, and to explore blind users' caption preferences. Our findings provide insight into the process of creating good captions and serve as a case study for cross-ability collaboration between blind and sighted people.
Given that most cues exchanged during a social interaction are nonverbal (e.g., facial expressions, hand gestures, body language), individuals who are blind are at a social disadvantage compared to their sighted peers. Very little work has explored sensory augmentation in the context of social assistive aids for individuals who are blind. The purpose of this study is to explore two questions related to visual-to-vibrotactile mapping of facial action units (the building blocks of facial expressions): (1) How well can individuals who are blind recognize tactile facial action units compared to those who are sighted? (2) How well can individuals who are blind recognize emotions from tactile facial action units compared to those who are sighted? These questions are explored in a pilot study using absolute identification tasks in which participants learn and recognize vibrotactile stimulations presented through the Haptic Chair, a custom vibrotactile display embedded in the back of a chair. Study results show that individuals who are blind recognize tactile facial action units as accurately as those who are sighted. These results hint at the potential for tactile facial action units to augment and expand access to social interactions for individuals who are blind.
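The abstract does not include code, but the core idea of a visual-to-vibrotactile mapping can be sketched compactly: detected facial action units (AUs) trigger pulse patterns on a body-worn motor array. In the Python sketch below, the 3x3 motor grid, the `drive_motor` callback, and the specific AU-to-motor assignments are all hypothetical assumptions; only the FACS action unit names and their meanings are standard.

```python
# A minimal sketch, assuming a hypothetical 3x3 vibrotactile motor grid
# (indices 0-8, row-major, 0 = top-left). The AU-to-motor patterns below
# are illustrative assumptions, not the Haptic Chair's actual encoding.

# Each pattern is a list of (motor_index, intensity 0-1, duration_s) pulses.
AU_TO_PATTERN = {
    "AU4":  [(1, 0.9, 0.5)],                  # brow lowerer -> top-center motor
    "AU6":  [(0, 0.8, 0.3), (2, 0.8, 0.3)],   # cheek raiser -> upper corners
    "AU12": [(6, 1.0, 0.3), (8, 1.0, 0.3)],   # lip corner puller -> lower corners
}

def render_expression(active_aus, drive_motor):
    """Play the vibrotactile pattern for each detected action unit.

    drive_motor(index, intensity, duration_s) stands in for whatever
    hardware API the real display exposes.
    """
    for au in active_aus:
        for motor, intensity, duration in AU_TO_PATTERN.get(au, []):
            drive_motor(motor, intensity, duration)

# A smile is roughly AU6 + AU12 in FACS terms.
render_expression(
    ["AU6", "AU12"],
    drive_motor=lambda m, i, d: print(f"motor {m}: intensity {i:.1f} for {d}s"),
)
```

In a real display, `drive_motor` would wrap the hardware driver, and the patterns would need tuning so that simultaneously active AUs remain distinguishable.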
We present an experimental investigation of spatial audio feedback using smartphones to support direction localization in pointing tasks for people with visual impairments (PVIs). We do this using a mobile game based on a bow-and-arrow metaphor. Our game provides a combination of spatial and non-spatial (sound beacon) audio to help the user locate the direction of the target. Our experiments with sighted, sighted-blindfolded, and visually impaired users show that (a) the efficacy of spatial audio is relatively higher for PVIs than for blindfolded sighted users during the initial reaction time for direction localization, (b) the general behavior of PVIs and blindfolded individuals is statistically similar, and (c) the lack of spatial audio significantly reduces localization performance even in sighted blindfolded users. Based on our findings, we discuss the system and interaction design implications for making future mobile-based spatial interactions accessible to PVIs.
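To make the rendering concrete, the sketch below shows one simple way a game like this could pan a beacon sound by the angular offset between the phone's heading and the target's bearing. This is a minimal sketch under assumed conventions (equal-power panning, a 90-degree mapping range), not the study's actual audio pipeline.

```python
import math

def stereo_cue(device_heading_deg, target_bearing_deg):
    """Return (left_gain, right_gain) from a simple equal-power pan.

    Angles are compass bearings in degrees; a positive offset means the
    target lies to the user's right. Illustrative only: the real system's
    spatialization is not specified in the abstract.
    """
    # Signed angular offset, normalized to (-180, 180].
    offset = (target_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    # Map +/-90 degrees onto a pan position in [-1, 1], clamped beyond that.
    pan = max(-1.0, min(1.0, offset / 90.0))
    # Equal-power pan: gains trace a quarter circle, so total power is constant.
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

# Target 30 degrees to the right of where the phone points:
left, right = stereo_cue(0.0, 30.0)
print(f"left={left:.2f}, right={right:.2f}")  # right channel louder
```

By contrast, a non-spatial beacon would play the same sound without this angular panning, leaving cues such as loudness or repetition rate to convey progress.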
Many images on the Web, including photographs and artistic images, feature spatial relationships between objects that are inaccessible to someone who is blind or visually impaired, even when a text description is provided. While some tools exist to manually create accessible image descriptions, this work is time-consuming and requires specialized tools. We introduce an approach that automatically creates spatially registered image labels based on how a sighted person naturally interacts with the image. Our system, EyeDescribe, collects behavioral data from sighted viewers of an image, specifically eye gaze data and spoken descriptions, and uses them to generate a spatially indexed accessible image that can then be explored using an audio-based touch screen application. We describe our approach to assigning text labels to locations in an image based on eye gaze. We then report on two formative studies in which blind users tested EyeDescribe. Our approach produced correct labels for all objects in our image set, and participants were better able to recall the locations of objects when given both object labels and spatial locations. This approach provides a new method for creating accessible images with minimal required effort.
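Although the abstract does not detail the registration algorithm, one plausible reading is that labels are anchored where the viewer was fixating while speaking. The Python sketch below implements that idea under assumed data formats; the function name, tuple layouts, and nearest-fixation rule are illustrative assumptions, not EyeDescribe's published method.

```python
# A minimal sketch (not EyeDescribe's implementation) of one way to spatially
# register spoken labels: attach each transcribed word to the gaze fixation
# whose time window is nearest to the word's onset. The fixation and word
# record formats are assumptions.

def register_labels(fixations, words):
    """fixations: list of (t_start, t_end, x, y) tuples (seconds, pixels).
    words: list of (t_onset, text) tuples from a timestamped transcript.
    Returns (x, y, text) labels anchored at fixation locations."""
    labels = []
    for t, text in words:
        def time_gap(fix):
            t0, t1, _, _ = fix
            # Zero if the word falls inside the fixation, else distance to it.
            return 0.0 if t0 <= t <= t1 else min(abs(t - t0), abs(t - t1))
        # Pick the fixation temporally closest to the spoken word.
        t0, t1, x, y = min(fixations, key=time_gap)
        labels.append((x, y, text))
    return labels

# Two fixations; "dog" spoken during the first, "ball" during the second:
fixations = [(0.0, 0.8, 120, 240), (1.0, 1.9, 400, 260)]
words = [(0.4, "dog"), (1.5, "ball")]
print(register_labels(fixations, words))
# -> [(120, 240, 'dog'), (400, 260, 'ball')]
```

The resulting (x, y, text) labels are the kind of spatial index an audio-based touch screen explorer needs: touching near a labeled location can speak the associated text.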