Title: Fairness Issues in AI Systems that Augment Sensory Abilities
Systems that augment sensory abilities are increasingly employing AI and machine learning (ML) approaches, with applications ranging from object recognition and scene description tools for blind users to sound awareness tools for d/Deaf users. However, unlike many other AI-enabled technologies, these systems provide information that is already available to non-disabled people. In this paper, we discuss unique AI fairness challenges that arise in this context, including accessibility issues with data and models, ethical implications in deciding what sensory information to convey to the user, and privacy concerns both for the primary user and for others.
Award ID(s):
1704527
PAR ID:
10244751
Author(s) / Creator(s):
Date Published:
Journal Name:
Workshop on AI Fairness for People with Disabilities
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Automated sound recognition tools can be a useful complement to d/Deaf and hard of hearing (DHH) people's typical communication and environmental awareness strategies. Pre-trained sound recognition models, however, may not meet the diverse needs of individual DHH users. While approaches from human-centered machine learning can enable non-expert users to build their own automated systems, end-user ML solutions that augment human sensory abilities present a unique challenge for users who have sensory disabilities: how can a DHH user, who has difficulty hearing a sound themselves, effectively record samples to train an ML system to recognize that sound? To better understand how DHH users can drive personalization of their own assistive sound recognition tools, we conducted a three-part study with 14 DHH participants: (1) an initial interview and demo of a personalizable sound recognizer, (2) a week-long field study of in situ recording, and (3) a follow-up interview and ideation session. Our results highlight a positive subjective experience when recording and interpreting training data in situ, but we uncover several key pitfalls unique to DHH users---such as inhibited judgement of representative samples due to limited audiological experience. We share implications of these results for the design of recording interfaces and human-in-the-loop systems that can support DHH users to build sound recognizers for their personal needs.
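    The personalization loop this abstract describes---a user records a few samples of a sound, then the system learns to recognize it---can be illustrated with a minimal sketch. This is not the paper's system; the feature (zero-crossing rate), the nearest-centroid classifier, and all names are hypothetical stand-ins for a real end-user ML pipeline.

    ```python
    import math

    def zero_crossing_rate(signal):
        """Fraction of adjacent sample pairs that change sign -- a crude pitch proxy."""
        crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0))
        return crossings / (len(signal) - 1)

    class PersonalSoundRecognizer:
        """Nearest-centroid classifier over one scalar audio feature,
        trained from a handful of user-recorded samples per sound label."""

        def __init__(self):
            self.centroids = {}

        def train(self, label, recordings):
            # Average the feature over the user's few in-situ recordings.
            self.centroids[label] = (
                sum(map(zero_crossing_rate, recordings)) / len(recordings)
            )

        def predict(self, signal):
            # Assign the label whose training centroid is closest.
            z = zero_crossing_rate(signal)
            return min(self.centroids, key=lambda lbl: abs(self.centroids[lbl] - z))
    ```

    A real tool would use richer spectral features and a model robust to background noise, but the few-shot, user-driven training flow is the same: a DHH user's challenge lies in judging whether the recordings fed to `train` are representative.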
  2. Generative AI, particularly Large Language Models (LLMs), has revolutionized human-computer interaction by enabling the generation of nuanced, human-like text. This presents new opportunities, especially in enhancing explainability for AI systems like recommender systems, a crucial factor for fostering user trust and engagement. LLM-powered AI-Chatbots can be leveraged to provide personalized explanations for recommendations. Although users often find these chatbot explanations helpful, they may not fully comprehend the content. Our research focuses on assessing how well users comprehend these explanations and identifying gaps in understanding. We also explore the key behavioral differences between users who effectively understand AI-generated explanations and those who do not. We designed a three-phase user study with 17 participants to explore these dynamics. The findings indicate that the clarity and usefulness of the explanations are contingent on the user asking relevant follow-up questions and having a motivation to learn. Comprehension also varies significantly based on users’ educational backgrounds. 
  3. Remote Patient Monitoring (RPM) devices transmit patients' medical indicators (e.g., blood pressure) from the patient's home testing equipment to their healthcare providers, in order to monitor chronic conditions such as hypertension. AI systems have the potential to enhance access to timely medical advice based on the data that RPM devices produce. In this paper, we report on three studies investigating how the severity of users' medical condition (normal vs. high blood pressure), security risk (low vs. modest vs. high risk), and medical advice source (human doctor vs. AI) influence user perceptions of advisor trustworthiness and willingness to disclose RPM-acquired information. We found that trust mediated the relationship between the advice source and users' willingness to disclose health information: users trust doctors more than AI and are more willing to disclose their RPM-acquired health information to a more trusted advice source. However, we unexpectedly discovered that conditional on trust, users disclose RPM-acquired information more readily to AI than to doctors. We observed that the advice source did not influence perceptions of security and privacy risks. We conclude by discussing how our findings can support the design of RPM applications. 
  4. As technology advances, accessibility is receiving serious attention. Many users with visual disabilities rely on tools such as Microsoft's Seeing AI application (app), which is powered by artificial intelligence. The app helps people with visual disabilities recognize objects, people, text, and more via a smartphone's built-in camera. Because users may use the app to recognize personally identifiable information, user privacy should be treated as a top priority. Yet little is known about user privacy issues among users with visual disabilities, so this study addresses the knowledge gap by conducting a questionnaire study with Seeing AI users with visual disabilities. This study found that participants with visual disabilities lacked knowledge of user privacy policies. We recommend offering adequate educational training so that users with visual disabilities can be well informed of user privacy policies, ultimately promoting safe online behavior that protects them from digital privacy and security problems.
  5. This paper introduces an innovative approach to recommender systems through the development of an explainable architecture that leverages large language models (LLMs) and prompt engineering to provide natural language explanations. Traditional recommender systems often fall short in offering personalized, transparent explanations, particularly for users with varying levels of digital literacy. Focusing on the Advisor Recommender System, our proposed system integrates the conversational capabilities of modern AI to deliver clear, context-aware explanations for its recommendations. This research addresses key questions regarding the incorporation of LLMs into social recommender systems, the impact of natural language explanations on user perception, and the specific informational needs users prioritize in such interactions. A pilot study with 11 participants reveals insights into the system’s usability and the effectiveness of explanation clarity. Our study contributes to the broader human-AI interaction literature by outlining a novel system architecture, identifying user interaction patterns, and suggesting directions for future enhancements to improve decision-making processes in AI-driven recommendations. 
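    The prompt-engineering step this abstract mentions---turning recommendation signals into a natural-language explanation request for an LLM---can be sketched as a plain template builder. This is a hypothetical illustration, not the paper's architecture; the function name, fields, and wording are assumptions, and the resulting string would be sent to whatever LLM the system uses.

    ```python
    def build_explanation_prompt(user_profile, recommended_item, reasons):
        """Assemble a prompt asking an LLM to explain a recommendation
        in plain, jargon-free language tailored to the user's background."""
        reason_lines = "\n".join(f"- {r}" for r in reasons)
        return (
            "You are explaining a recommendation to a user.\n"
            f"User background: {user_profile}\n"
            f"Recommended item: {recommended_item}\n"
            f"Signals behind the recommendation:\n{reason_lines}\n"
            "In 2-3 plain-language sentences, explain why this item fits "
            "this user. Avoid technical jargon."
        )
    ```

    Keeping the signals explicit in the prompt, rather than asking the model to guess, is one way to ground the generated explanation in what the recommender actually computed.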