

Title: Understanding Pattern Recognition Through Sound with Considerations for Developing Accessible Technologies
This work explores whether audio feedback style and user ability influence user techniques, performance, and preference in the interpretation of node graph data among sighted individuals and those who are blind or visually impaired. The study used a posttest-only basic randomized design comparing two treatments, in which participants listened to short audio clips describing a sequence of transitions occurring in a node graph. The results show that participants tend to use certain techniques, and have corresponding preferences, based on their ability. A correlation was also found between equivalently high performance across feedback designs and the lack of an overall feedback design preference. These results imply that universal technologies should avoid design constraints that allow only one optimal usage technique, especially if that technique depends on a user's ability.
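The abstract describes its stimuli only at a high level. As a rough, hypothetical sketch of how a node-graph transition sequence could be rendered as a short audio clip (the pitch mapping, note lengths, and node names below are assumptions, not the study's materials), using only the Python standard library:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def tone(freq_hz, duration_s, amplitude=0.4):
    """Generate one sine tone as a list of signed 16-bit samples."""
    n = int(SAMPLE_RATE * duration_s)
    return [int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
            for i in range(n)]

# Hypothetical mapping: each node gets its own pitch, so a transition
# A -> B is heard as A's pitch followed by B's.
NODE_PITCH = {"A": 262.0, "B": 330.0, "C": 392.0, "D": 523.0}  # C4, E4, G4, C5

def sonify(path, note_s=0.25, gap_s=0.05):
    """Render a node sequence (e.g. ['A', 'C', 'B']) as raw samples."""
    samples = []
    for node in path:
        samples += tone(NODE_PITCH[node], note_s)
        samples += [0] * int(SAMPLE_RATE * gap_s)  # brief silence between notes
    return samples

if __name__ == "__main__":
    clip = sonify(["A", "B", "D", "C"])
    with wave.open("transitions.wav", "wb") as f:
        f.setnchannels(1)             # mono
        f.setsampwidth(2)             # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack("<%dh" % len(clip), *clip))
```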
Award ID(s):
1828010
NSF-PAR ID:
10277359
Author(s) / Creator(s):
Date Published:
Journal Name:
Lecture Notes in Computer Science
Volume:
12426 LNCS
ISSN:
1611-3349
Page Range / eLocation ID:
208-219
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this work, we investigate the influence of different visualizations on a manipulation task in virtual reality (VR). Without the haptic feedback of the real world, grasping in VR can result in intersections with virtual objects. Because people are highly sensitive to perceived collisions, it may look more appealing to avoid intersections and visualize non-colliding hand motions. However, correcting the position of the hand or fingers introduces a visual-proprioceptive discrepancy and must be applied with caution. Furthermore, the lack of haptic feedback in the virtual world can result in slower actions, as a user may not know exactly when a grasp has occurred. This reduced performance could be remediated with adequate visual feedback. In this study, we analyze the performance, level of ownership, and user preference of eight visual feedback techniques for virtual grasping. Three techniques show the tracked hand (with or without grasping feedback), even if it intersects with the grasped object. Another three display a hand that never intersects the object, called the outer hand, simulating the look of a real-world interaction. One visualization is a compromise between the two groups, showing both a primary outer hand and a secondary tracked hand. Finally, in the last visualization the hand disappears during the grasping activity. In an experiment, users performed a pick-and-place task with each feedback technique, using high-fidelity marker-based hand tracking to control the virtual hands in real time. We found that the tracked hand visualizations resulted in better performance; however, the outer hand visualizations were preferred. We also found indications that ownership is higher with the outer hand visualizations.
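The abstract does not specify how the non-intersecting outer hand is computed. As a minimal sketch of the general idea, assuming the grasped object can be approximated by a sphere, the rendered position can be projected back to the object surface whenever the tracked position penetrates it:

```python
import math

def outer_hand_position(tracked, center, radius):
    """Return the position to render for the hand.

    If the tracked position penetrates a sphere-shaped object, snap the
    rendered ('outer') hand to the nearest point on the surface;
    otherwise render the tracked position unchanged.
    """
    dx, dy, dz = (tracked[i] - center[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist >= radius or dist == 0.0:
        return tracked  # no intersection (or degenerate case)
    scale = radius / dist
    return tuple(center[i] + (tracked[i] - center[i]) * scale for i in range(3))

# Example: a hand tracked 2 cm inside a 5 cm sphere is rendered on its surface.
print(outer_hand_position((0.03, 0.0, 0.0), (0.0, 0.0, 0.0), 0.05))
# -> approximately (0.05, 0.0, 0.0)
```

The gap between the tracked and rendered positions is exactly the visual-proprioceptive discrepancy the abstract warns about.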
  2. In this work, we investigate the influence that audio and visual feedback have on a manipulation task in virtual reality (VR). Without the tactile feedback of a controller, grasping virtual objects with one's hands can result in slower interactions because it may be unclear to the user that a grasp has occurred. Providing alternative feedback, such as visual or audio cues, may lead to faster and more precise interactions, but it might also affect user preference and the perceived ownership of the virtual hands. In this study, we test four feedback conditions for virtual grasping: three provide feedback when a grasp or release occurs (visual, audio, or both), and one provides no feedback for these events. We analyze the effect of each feedback condition on interaction performance, measure its effect on the perceived ownership of the virtual hands, and gauge user preference. In an experiment, users performed a pick-and-place task under each feedback condition. We found that audio feedback for grasping is preferred over visual feedback even though it appears to decrease grasping performance, and that there were little to no differences in ownership between our conditions.
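As a sketch of how the four feedback conditions might be dispatched (the condition names and engine callbacks here are assumptions for illustration, not the study's implementation):

```python
from enum import Flag, auto

class Feedback(Flag):
    NONE = 0
    VISUAL = auto()
    AUDIO = auto()
    BOTH = VISUAL | AUDIO

def on_grasp_event(condition, event, play_sound, tint_hand):
    """Fire the cues for a 'grasp' or 'release' event under one condition.

    play_sound and tint_hand stand in for engine callbacks.
    """
    if Feedback.AUDIO in condition:
        play_sound("grasp.wav" if event == "grasp" else "release.wav")
    if Feedback.VISUAL in condition:
        # e.g. briefly tint the virtual hand to confirm the state change
        tint_hand("green" if event == "grasp" else "white")

# Example wiring with stub callbacks:
on_grasp_event(Feedback.BOTH, "grasp",
               play_sound=lambda clip: print("play", clip),
               tint_hand=lambda color: print("tint", color))
```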
  3. Many of the everyday decisions a user makes rely on the suggestions of online recommendation systems. These systems amass implicit (e.g., location, purchase history, browsing history) and explicit (e.g., reviews, ratings) feedback from multiple users, produce a general consensus, and provide suggestions based on that consensus. However, due to privacy concerns, users are uncomfortable with implicit data collection, thus requiring recommendation systems to be overly dependent on explicit feedback. Unfortunately, users do not frequently provide explicit feedback. This hampers the ability of recommendation systems to provide high-quality suggestions. We introduce Heimdall, the first privacy-respecting implicit preference collection framework, which enables recommendation systems to extract user preferences from their activities in a privacy-respecting manner. The key insight is to enable recommendation systems to run a collector on a user's device and precisely control the information a collector transmits to the recommendation system backend. Heimdall introduces immutable blobs as a mechanism to guarantee this property. We implemented Heimdall on the Android platform and wrote three example collectors to enhance recommendation systems with implicit feedback. Our performance results suggest that the overhead of immutable blobs is minimal, and a user study of 166 participants indicates that privacy concerns are significantly lower when collectors record only specific information, a property that Heimdall enables.
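Heimdall itself is an Android framework; purely to illustrate the control property described (the collector can transmit only a fixed, inspectable set of fields), here is a toy Python sketch with a hypothetical field whitelist:

```python
from dataclasses import dataclass

# Fields the backend is allowed to receive; everything else the collector
# observes on-device stays on-device. (This whitelist is illustrative.)
ALLOWED_FIELDS = {"item_id", "dwell_seconds", "liked"}

@dataclass(frozen=True)  # frozen: fields cannot be reassigned after creation
class PreferenceBlob:
    item_id: str
    dwell_seconds: float
    liked: bool

def seal(raw_activity):
    """Project raw on-device activity down to the allowed fields only."""
    return PreferenceBlob(**{k: raw_activity[k] for k in ALLOWED_FIELDS})

raw = {"item_id": "movie-42", "dwell_seconds": 93.5, "liked": True,
       "gps": (40.7, -74.0)}  # sensitive field that is never transmitted
print(seal(raw))  # only the whitelisted fields survive
```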
  4. In multiple watershed planning and design problems, such as conservation planning, quantitative estimates of the costs and environmental benefits of proposed conservation decisions may not be the only criteria that influence stakeholders' preferences for those decisions. Their preferences may also be influenced by the conservation decision itself: specifically, the type of practice, where it is being proposed, existing biases, and previous experiences with the practice. While human-in-the-loop search techniques, such as Interactive Genetic Algorithms (IGA), provide opportunities for stakeholders to incorporate their preferences into the design of alternatives, examining user-preferred conservation design alternatives for patterns in Decision Space can provide insight into which local decisions have higher or lower agreement among stakeholders. In this paper, we explore and compare spatial patterns in conservation decisions (specifically involving cover crops and filter strips) within design alternatives generated by IGA and by a noninteractive GA. Methods for comparing patterns include nonvisual as well as visualization approaches, including a novel visual analytics technique. Results for the study site show that user-preferred designs generated by all participants had a strong bias for cover crops in a majority (50%–83%) of the subbasins. Further, exploration with heat map visualizations indicates that IGA-based search yielded very different spatial patterns of user-preferred decisions in subbasins compared with decisions within design alternatives generated without the human in the loop. Finally, the proposed coincident-nodes, multiedge graph visualization was helpful for visualizing disagreement among participants in local subbasin-scale decisions and for visualizing spatial patterns in local subbasin-scale costs and benefits.
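As a compact, hypothetical illustration of the human-in-the-loop search described (the scoring stubs, design encoding, and parameters below are not the study's model):

```python
import random

def model_score(design):
    """Stand-in for the quantitative cost/benefit model."""
    return sum(design)

def user_rating(design):
    """Human-in-the-loop step: in a real IGA a stakeholder rates the design.
    This stub simulates a bias toward cover crops in upstream subbasins."""
    return sum(design[: len(design) // 2])

def iga_generation(population, weight=0.5):
    """One IGA generation: blended fitness, then crossover and mutation."""
    scored = sorted(population,
                    key=lambda d: (1 - weight) * model_score(d)
                                  + weight * user_rating(d),
                    reverse=True)
    parents = scored[: len(scored) // 2]   # keep the preferred half
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]          # one-point crossover
        i = random.randrange(len(child))
        child[i] = 1 - child[i]            # flip one local decision
        children.append(child)
    return children

# Designs are binary vectors: one cover-crop decision per subbasin.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
for _ in range(5):
    population = iga_generation(population)
```

Setting weight to 0 recovers a noninteractive GA like the one the paper compares against.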
  5. For people with visual impairments, photography is essential for identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on a user's photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object's center location based on its proximity to the user's hand. To evaluate our approach, we conducted a lab-based user study in which participants with visual impairments (N=9) used our feedback to train and test their object recognizers in vanilla and cluttered environments. We found that very few photos failed to include the object (2% in the vanilla environment and 8% in the cluttered one), and recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they knew it could be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces.
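The abstract does not detail the feedback mapping. One plausible sketch, assuming normalized image coordinates (0..1, y increasing downward) and illustrative thresholds, maps the estimated object center to a directional cue:

```python
def guidance(center_x, center_y, threshold=0.15):
    """Map an estimated object center in the camera frame to a spoken cue.

    Coordinates are normalized to 0..1 with y increasing downward; the
    threshold and phrasing are assumptions, not the study's values.
    """
    dx, dy = center_x - 0.5, center_y - 0.5
    if abs(dx) <= threshold and abs(dy) <= threshold:
        return "hold still"                              # roughly centered
    horiz = "move right" if dx > 0 else "move left"      # pan toward the object
    vert = "move down" if dy > 0 else "move up"
    return horiz if abs(dx) > abs(dy) else vert

print(guidance(0.8, 0.5))  # object on the right of the frame -> "move right"
```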