As technology advances, accessibility is receiving serious attention. Many users with visual disabilities rely on, for example, Microsoft's Seeing AI application (app), which uses artificial intelligence to help recognize objects, people, text, and more via a smartphone's built-in camera. Because users may rely on the app to recognize personally identifiable information, user privacy should be treated carefully and considered a top priority. Yet little is known about user privacy issues among users with visual disabilities, so this study aims to address the knowledge gap by administering a questionnaire to Seeing AI users with visual disabilities. The study found that participants with visual disabilities lacked knowledge about user privacy policies. We recommend offering adequate educational training so that people with visual disabilities can be well informed about user privacy policies, ultimately promoting safe online behavior that protects them from digital privacy and security problems.
Usability Assessment of Voice-Enabled Technologies for Users with Visual Disabilities
Voice-enabled technologies such as VoiceOver (a screen reader) and the Seeing AI app (image recognition) have revolutionized daily tasks for people with visual disabilities, fostering greater independence and information access. However, a gap remains in understanding the user experience (UX) of these technologies. This study investigated how people with visual disabilities interacted with VoiceOver and the Seeing AI app. A convenience sample of eight participants with visual disabilities was directly observed while using these technologies. The study used the System Usability Scale (SUS) to assess perceived usability and analyzed the findings with descriptive statistics. Results indicated a poorer UX with VoiceOver than with the Seeing AI app, with challenges identified in graphical user interfaces (GUIs) and in voice and gesture commands. Relevant recommendations were made to enhance usability. The study emphasizes the need for more intuitive GUIs and optimized voice and gesture interactions for users with visual disabilities.
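The SUS mentioned above yields a 0-100 score from ten 1-5 Likert items via the standard Brooke (1996) scoring rule. The study's raw responses are not shown here, so the sketch below illustrates only the published formula:

```python
# Standard System Usability Scale (SUS) scoring (Brooke, 1996) -- a generic
# sketch, not code or data from the study. `responses` holds the ten 1-5
# Likert ratings in questionnaire order.

def sus_score(responses):
    """Convert ten 1-5 SUS item ratings into a 0-100 usability score."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded and
        # contribute (rating - 1); even-numbered items are negatively
        # worded and contribute (5 - rating).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# A neutral respondent (all 3s) lands exactly at the midpoint.
print(sus_score([3] * 10))  # 50.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is why SUS results are often reported against that benchmark rather than as raw percentages.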
- Award ID(s):
- 1831969
- PAR ID:
- 10532595
- Publisher / Repository:
- SAGE Publications
- Date Published:
- Journal Name:
- Proceedings of the Human Factors and Ergonomics Society Annual Meeting
- Volume:
- 68
- Issue:
- 1
- ISSN:
- 1071-1813
- Format(s):
- Medium: X
- Size(s):
- p. 1852-1854
- Sponsoring Org:
- National Science Foundation
More Like this
-
This study investigates how individuals with visual disabilities and their sighted counterparts perceive user experiences with smart speakers. A sample of 79 participants, including 41 with visual disabilities and 38 sighted individuals, used Amazon Echo 4th Gen smart speakers. After participants used the smart speakers for one week in their daily lives, exit interviews were administered and analyzed, yielding themes of accessibility, effectiveness, enjoyment, efficiency, and privacy. Findings revealed that the voice user interfaces of smart speakers significantly enhanced accessibility and user satisfaction for those with visual disabilities, while the voice assistant Alexa contributed to fostering emotional connections. Sighted participants, while benefiting from the smart speaker's multifunctionality and efficiency, faced challenges with initial setup and advanced features. Individuals with visual disabilities raised privacy concerns. This study underscores the need for inclusive design improvements to address the diverse needs of all users. To improve user experience, future enhancements should focus on refining voice command accuracy, integrating predictive features, optimizing onboarding processes, and strengthening privacy controls.
-
Li, Yang; Hilliges, Otmar (Ed.) We summarize our past five years of work on designing, building, and studying Sugilite, an interactive task learning agent that can learn new tasks and relevant associated concepts interactively from the user’s natural language instructions and demonstrations leveraging the graphical user interfaces (GUIs) of third-party mobile apps. Through its multi-modal and mixed-initiative approaches for Human-AI interaction, Sugilite made important contributions in improving the usability, applicability, generalizability, flexibility, robustness, and shareability of interactive task learning agents. Sugilite also represents a new human-AI interaction paradigm for interactive task learning, where it uses existing app GUIs as a medium for users to communicate their intents with an AI agent instead of the interfaces for users to interact with the underlying computing services. In this chapter, we describe the Sugilite system, explain the design and implementation of its key features, and show a prototype in the form of a conversational assistant on Android.
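The core idea in the abstract above, learning a reusable task from a GUI demonstration, can be illustrated with a toy sketch: a recorded action sequence is generalized by replacing one demonstrated value with a parameter slot. All data structures and names here are illustrative, not Sugilite's actual representation:

```python
# Toy sketch of demonstration-based GUI task learning in the spirit of
# Sugilite. A demonstration is a list of (action, target) pairs; one
# demonstrated literal is generalized into a named parameter slot that
# later invocations can rebind. Purely illustrative.

def generalize(demonstration, literal, parameter):
    """Replace a demonstrated literal value with a named parameter slot."""
    return [
        (action, parameter if target == literal else target)
        for action, target in demonstration
    ]

def execute(script, bindings):
    """'Run' the script by filling parameter slots from user bindings."""
    return [(action, bindings.get(target, target)) for action, target in script]

# Demonstrated once: ordering a cappuccino in a hypothetical coffee app.
demo = [("tap", "Order"), ("tap", "cappuccino"), ("tap", "Checkout")]
script = generalize(demo, "cappuccino", "<drink>")

# Reused later with a new value for the generalized slot.
print(execute(script, {"<drink>": "latte"}))
# [('tap', 'Order'), ('tap', 'latte'), ('tap', 'Checkout')]
```

The real system's contribution lies in doing this generalization interactively, using natural language and the semantics of third-party app GUIs rather than a fixed literal-to-slot substitution.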
-
Background: Reminiscence, a therapy that uses materials such as old photos and videos to evoke long-term memories, can improve the emotional well-being and life satisfaction of older adults, including those who are cognitively intact. However, providing personalized reminiscence therapy can be challenging for caregivers and family members. Objective: This study aimed to achieve three objectives: (1) design and develop the GoodTimes app, an interactive multimodal photo album that uses artificial intelligence (AI) to engage users in personalized conversations and storytelling about their pictures, encompassing family, friends, and special moments; (2) examine the app’s functionalities in various scenarios through use-case studies and assess its usability and user experience through a user study; and (3) investigate the app’s potential as a supplementary tool for reminiscence therapy among cognitively intact older adults, aiming to enhance their psychological well-being by facilitating the recollection of past experiences. Methods: We used state-of-the-art AI technologies, including image recognition, natural language processing, knowledge graphs, logic, and machine learning, to develop GoodTimes. First, we constructed a comprehensive knowledge graph that models the information required for effective communication, including photos, people, locations, time, and stories related to the photos. Next, we developed a voice assistant that interacts with users by leveraging the knowledge graph and machine learning techniques. Then, we created various use cases to examine the functions of the system in different scenarios. Finally, to evaluate GoodTimes’ usability, we conducted a study with older adults (N=13; age range 58-84 years, mean 65.8 years) from January to March 2023. Results: The use-case tests demonstrated the performance of GoodTimes in handling a variety of scenarios, highlighting its versatility and adaptability.
In the user study, feedback from our participants was highly positive, with 92% (12/13) reporting a positive experience conversing with GoodTimes. All participants mentioned that the app invoked pleasant memories and aided in recollecting loved ones, resulting in a sense of happiness for the majority (11/13, 85%). Additionally, a significant majority found GoodTimes to be helpful (11/13, 85%) and user-friendly (12/13, 92%). Most participants (9/13, 69%) expressed a desire to use the app frequently, although some (4/13, 31%) indicated a need for technical support to navigate the system effectively. Conclusions: Our AI-based interactive photo album, GoodTimes, engaged users in browsing their photos and conversing about them. Preliminary evidence supports GoodTimes’ usability and its benefits for cognitively intact older adults. Future work is needed to explore its potential positive effects among older adults with cognitive impairment.
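The Methods section above describes a knowledge graph linking photos to people, locations, times, and stories. The paper's actual schema is not reproduced here, so the sketch below shows one common way such a graph is represented, as subject-predicate-object triples with a per-entity index; all entity and relation names are hypothetical:

```python
# Hypothetical photo knowledge graph as subject-predicate-object triples.
# The GoodTimes schema is not published here, so every name below is
# illustrative, not taken from the system.

from collections import defaultdict

class PhotoGraph:
    def __init__(self):
        self.triples = set()
        self.index = defaultdict(set)  # subject -> {(predicate, object)}

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))
        self.index[subject].add((predicate, obj))

    def about(self, subject):
        """Everything known about one entity, e.g. to seed a conversation."""
        return sorted(self.index[subject])

g = PhotoGraph()
g.add("photo42", "shows_person", "Grandma Rose")
g.add("photo42", "taken_at", "Lake Tahoe")
g.add("photo42", "taken_in", "1987")
g.add("photo42", "has_story", "Our first family camping trip")

for predicate, obj in g.about("photo42"):
    print(predicate, "->", obj)
```

A voice assistant built on such a graph can answer "who is in this photo?" by filtering `about("photo42")` for the relevant predicate, which is the kind of retrieval the abstract's conversational storytelling implies.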