Title: Understanding Pattern Recognition Through Sound with Considerations for Developing Accessible Technologies
This work explores whether audio feedback style and user ability influence user techniques, performance, and preference in the interpretation of node graph data among sighted individuals and those who are blind or visually impaired. This study utilized a posttest-only basic randomized design comparing two treatments, in which participants listened to short audio clips describing a sequence of transitions occurring in a node graph. The results found that participants tend to use certain techniques, and have corresponding preferences, based on their ability. A correlation was also found between equivalently high performance across feedback designs and a lack of overall feedback design preference. These results imply that universal technologies should avoid design constraints that allow for only one optimal usage technique, especially if that technique depends on a user’s ability.
Canales, Ryan; Jörg, Sophie
(Motion, Interaction and Games)
In this work, we investigate the influence that audio and visual feedback have on a manipulation task in virtual reality (VR). Without the tactile feedback of a controller, grasping virtual objects using one’s hands can result in slower interactions because it may be unclear to the user that a grasp has occurred. Providing alternative feedback, such as visual or audio cues, may lead to faster and more precise interactions, but might also affect user preference and perceived ownership of the virtual hands. In this study, we test four feedback conditions for virtual grasping. Three of the conditions provide feedback for when a grasp or release occurs, either visual, audio, or both, and one provides no feedback for these occurrences. We analyze the effect each feedback condition has on interaction performance, measure their effect on the perceived ownership of the virtual hands, and gauge user preference. In an experiment, users perform a pick-and-place task with each feedback condition. We found that audio feedback for grasping is preferred over visual feedback even though it seems to decrease grasping performance, and that there were little to no differences in ownership between our conditions.
Background: Visual disability is a growing problem for many middle-aged and older adults. Conventional mobility aids, such as white canes and guide dogs, have notable limitations that have led to increasing interest in electronic travel aids (ETAs). Despite remarkable progress, current ETAs lack empirical evidence and realistic testing environments and often focus on the substitution or augmentation of a single sense. Objective: This study aims to (1) establish a novel virtual reality (VR) environment to test the efficacy of ETAs in complex urban environments for a simulated visual impairment (VI) and (2) evaluate the impact of haptic and audio feedback, individually and combined, on navigation performance, movement behavior, and perception. Through this study, we aim to address gaps to advance the pragmatic development of assistive technologies (ATs) for persons with VI. Methods: The VR platform was designed to resemble a subway station environment with the most common challenges faced by persons with VI during navigation. This environment was used to test our multisensory, AT-integrated VR platform among 72 healthy participants performing an obstacle avoidance task while experiencing symptoms of VI. Each participant performed the task 4 times: once with haptic feedback, once with audio feedback, once with both feedback types, and once without any feedback. Data analysis encompassed metrics such as completion time, head and body orientation, and trajectory length and smoothness. To evaluate the effectiveness and interaction of the 2 feedback modalities, we conducted a 2-way repeated measures ANOVA on continuous metrics and a Scheirer-Ray-Hare test on discrete ones. We also conducted a descriptive statistical analysis of participants’ answers to a questionnaire, assessing their experience and preference for feedback modalities.
Results: Haptic feedback significantly reduced collisions (P=.05) and the variability of the pitch angle of the head (P=.02). Audio feedback improved trajectory smoothness (P=.006) and mitigated the increase in trajectory length caused by haptic feedback alone (P=.04). Participants reported a high level of engagement during the experiment (52/72, 72%) and found it interesting (42/72, 58%). However, when it came to feedback preferences, fewer than half of the participants (29/72, 40%) favored combined feedback modalities, indicating that a majority preferred dedicated single modalities over combined ones. Conclusions: AT is crucial for individuals with VI; however, it often lacks user-centered design principles. Research should prioritize consumer-oriented methodologies, testing devices in a staged manner with progression toward more realistic, ecologically valid settings to ensure safety. Our multisensory, AT-integrated VR system takes a holistic approach, offering a first step toward enhancing users’ spatial awareness and promoting safer mobility, and it holds potential for applications in medical treatment, training, and rehabilitation. Technological advancements can further refine such devices, significantly improving independence and quality of life for those with VI.
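The Scheirer-Ray-Hare test used above for the discrete metrics is a rank-based counterpart of the two-way ANOVA: rank all observations together, compute the usual two-way sums of squares on the ranks, and divide each by the total mean square to get chi-square-distributed H statistics. A minimal illustrative sketch follows; it is not the study's analysis code, and the balanced-design assumption and factor names are ours:

```python
from itertools import product
from statistics import mean

def rank_with_ties(values):
    # 1-based ranks, with tied values assigned the average of their ranks
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def scheirer_ray_hare(data):
    """data: dict mapping (levelA, levelB) -> list of observations.
    Returns H statistics for factor A, factor B, and the interaction;
    each is compared to a chi-square with that effect's df."""
    flat = [(a, b, v) for (a, b), vs in data.items() for v in vs]
    ranks = rank_with_ties([v for _, _, v in flat])
    n = len(ranks)
    grand = mean(ranks)
    ss_total = sum((r - grand) ** 2 for r in ranks)
    ms_total = ss_total / (n - 1)  # the common denominator of every H

    def ss_groups(key):
        # between-group sum of squares of the ranks, grouped by `key`
        levels = {key(a, b) for a, b, _ in flat}
        ss = 0.0
        for lev in levels:
            grp = [r for (a, b, _), r in zip(flat, ranks) if key(a, b) == lev]
            ss += len(grp) * (mean(grp) - grand) ** 2
        return ss

    ss_a = ss_groups(lambda a, b: a)
    ss_b = ss_groups(lambda a, b: b)
    ss_ab = ss_groups(lambda a, b: (a, b)) - ss_a - ss_b
    return {"H_A": ss_a / ms_total, "H_B": ss_b / ms_total,
            "H_AB": ss_ab / ms_total}
```

For example, with factors like haptic (on/off) and audio (on/off) and a handful of collision counts per cell, a large H_A relative to its chi-square critical value would indicate a haptic main effect on the ranked data.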
Rahmati, Amir; Fernandes, Earlence; Eykholt, Kevin; Chen, Xinheng; Prakash, Atul
(Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys))
Many of the everyday decisions a user makes rely on the suggestions of online recommendation systems. These systems amass implicit (e.g., location, purchase history, browsing history) and explicit (e.g., reviews, ratings) feedback from multiple users, produce a general consensus, and provide suggestions based on that consensus. However, due to privacy concerns, users are uncomfortable with implicit data collection, thus requiring recommendation systems to be overly dependent on explicit feedback. Unfortunately, users do not frequently provide explicit feedback. This hampers the ability of recommendation systems to provide high-quality suggestions. We introduce Heimdall, the first privacy-respecting implicit preference collection framework that enables recommendation systems to extract user preferences from their activities in a privacy-respecting manner. The key insight is to enable recommendation systems to run a collector on a user’s device and precisely control the information a collector transmits to the recommendation system backend. Heimdall introduces immutable blobs as a mechanism to guarantee this property. We implemented Heimdall on the Android platform and wrote three example collectors to enhance recommendation systems with implicit feedback. Our performance results suggest that the overhead of immutable blobs is minimal, and a user study of 166 participants indicates that privacy concerns are significantly lower when collectors record only specific information, a property that Heimdall enables.
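The immutable-blob idea, where a collector declares up front exactly which fields it may report and the record cannot be altered afterward, can be illustrated with a short sketch. This is not Heimdall's actual API; the class, schema, and field names are invented for illustration:

```python
from types import MappingProxyType

class ImmutableBlob:
    """Write-once record: the collector commits to a fixed schema, and
    nothing outside that schema can be attached, before or after creation."""
    def __init__(self, allowed_fields, values):
        unknown = set(values) - set(allowed_fields)
        if unknown:
            raise ValueError(f"fields not in declared schema: {unknown}")
        # bypass our own __setattr__ guard exactly once, at construction
        object.__setattr__(self, "_data", MappingProxyType(dict(values)))

    def __setattr__(self, name, value):
        raise AttributeError("blob is immutable once created")

    def to_payload(self):
        # the only data the backend ever receives: a copy of the schema'd fields
        return dict(self._data)

# A hypothetical music-preference collector with a two-field schema:
MUSIC_SCHEMA = ("genre", "play_count")
blob = ImmutableBlob(MUSIC_SCHEMA, {"genre": "jazz", "play_count": 12})
```

The design point is that the restriction is enforced at the data-structure level rather than by collector discretion: a collector cannot smuggle extra fields (say, GPS coordinates) into a blob whose schema never declared them.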
Lee, Kyungjun; Hong, Jonggi; Pimento, Simone; Jarjue, Ebrima; Kacorri, Hernisa
(The 21st International ACM SIGACCESS Conference on Computers and Accessibility)
For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on users’ photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object center location based on its proximity to the user’s hand. To evaluate our approach, we conducted a user study in the lab, where participants with visual impairments (N=9) used our feedback to train and test their object recognizer in vanilla and cluttered environments. We found that very few photos did not include the object (2% in the vanilla and 8% in the cluttered) and the recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they knew it could be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces.
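The feedback described above amounts to mapping an estimated object center to directional guidance in the camera frame. A minimal sketch of one plausible mapping is below; the function, thresholds, and cue wording are our assumptions, not the paper's implementation:

```python
def frame_guidance(cx, cy, width, height, center_tol=0.1):
    """Map a detected object center (pixels) to a spoken direction cue and a
    haptic intensity that grows as the object drifts from the frame center.
    Convention (illustrative): the cue names the direction to pan the camera."""
    # normalized offsets in [-1, 1]; 0 means perfectly centered
    dx = (cx - width / 2) / (width / 2)
    dy = (cy - height / 2) / (height / 2)
    if abs(dx) <= center_tol and abs(dy) <= center_tol:
        return "centered", 0.0
    horiz = "left" if dx < -center_tol else "right" if dx > center_tol else ""
    vert = "up" if dy < -center_tol else "down" if dy > center_tol else ""
    cue = " ".join(w for w in ("move", vert, horiz) if w)
    # stronger vibration the farther off-center the object sits
    intensity = min(1.0, max(abs(dx), abs(dy)))
    return cue, intensity
```

For a 640x480 frame, an object centered at (600, 240) would yield a "move right" cue with high intensity, while one near (320, 240) would report "centered" with no vibration.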
Peng, Bei; MacGlashan, James; Loftin, Robert; Littman, Michael L.; Roberts, David L.; Taylor, Matthew E.
(AAMAS)
As robots become pervasive in human environments, it is important to enable users to effectively convey new skills without programming. Most existing work on Interactive Reinforcement Learning focuses on interpreting and incorporating non-expert human feedback to speed up learning; we aim to design a better representation of the learning agent that is able to elicit more natural and effective communication between the human trainer and the learner, while treating human feedback as discrete communication that depends probabilistically on the trainer’s target policy. This work entails a user study where participants train a virtual agent to accomplish tasks by giving reward and/or punishment in a variety of simulated environments. We present results from 60 participants to show how a learner can ground natural language commands and adapt its action execution speed to learn more efficiently from human trainers. The agent’s action execution speed can be successfully modulated to encourage more explicit feedback from a human trainer in areas of the state space where there is high uncertainty. Our results show that our novel adaptive speed agent dominates different fixed speed agents on several measures of performance. Additionally, we investigate the impact of instructions on user performance and user preference in training conditions.
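Modulating an agent's action execution speed by its uncertainty, as described above, can be sketched simply: slow down where the action distribution has high entropy so the trainer has more time to deliver reward or punishment, and speed up where the policy is confident. This is an illustrative reconstruction under our own scaling assumptions, not the authors' agent:

```python
import math

def action_entropy(probs):
    """Shannon entropy (in nats) of a discrete action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def execution_delay(probs, min_delay=0.1, max_delay=1.0):
    """Map normalized entropy to a per-action delay in seconds.
    Uniform (maximally uncertain) distribution -> max_delay;
    near-deterministic distribution -> close to min_delay."""
    n = len(probs)
    max_h = math.log(n) if n > 1 else 1.0  # entropy of the uniform distribution
    frac = action_entropy(probs) / max_h   # 0 = certain, 1 = maximally uncertain
    return min_delay + frac * (max_delay - min_delay)
```

With four actions, a uniform distribution gives the full 1.0 s delay, inviting explicit feedback, while a 97%-confident distribution executes after roughly 0.2 s. The min/max delay values here are arbitrary placeholders.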
Darmawaskita, Nicole, and McDaniel, Troy. Understanding Pattern Recognition Through Sound with Considerations for Developing Accessible Technologies. Lecture Notes in Computer Science, vol. 12426 LNCS. doi:10.1007/978-3-030-60149-2_17. Retrieved from https://par.nsf.gov/biblio/10277359.