

Title: Privacy Concerns for Visual Assistance Technologies
People who are blind share their images and videos with companies that provide visual assistance technologies (VATs) to gain access to information about their surroundings. A challenge is that people who are blind cannot independently validate the content of the images and videos before they share them, and their visual data commonly contains private content. We examine privacy concerns for blind people who share personal visual data with VAT companies that provide descriptions authored by humans or artificial intelligence (AI). We first interviewed 18 people who are blind about their perceptions of privacy when using both types of VATs. Then we asked the participants to rate 21 types of image content according to their level of privacy concern if the information was shared knowingly versus unknowingly with human- or AI-powered VATs. Finally, we analyzed what information VAT companies communicate to users about their collection and processing of users’ personal visual data through their privacy policies. Our findings have implications for the development of VATs that safeguard blind users’ visual privacy, and our methods may be useful for other camera-based technology companies and their users.
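The study's rating task compares privacy concern across a 2×2 design: information shared knowingly versus unknowingly, with human- versus AI-powered VATs. A minimal sketch of how such ratings could be aggregated per condition; the condition labels follow the abstract, but all rating values below are hypothetical, not data from the paper:

```python
from statistics import mean

# Hypothetical 1-5 privacy-concern ratings for one image-content type,
# keyed by (sharing awareness, description source). Values are
# illustrative only, not the study's data.
ratings = {
    ("knowingly", "human"): [2, 3, 2],
    ("knowingly", "ai"): [1, 2, 2],
    ("unknowingly", "human"): [4, 5, 4],
    ("unknowingly", "ai"): [3, 4, 4],
}

# Mean concern per condition, to compare knowing vs. unknowing disclosure.
means = {cond: mean(vals) for cond, vals in ratings.items()}
for cond, m in sorted(means.items()):
    print(cond, round(m, 2))
```

Repeating this aggregation for each of the 21 content types would yield the kind of per-condition comparison the abstract describes.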
Award ID(s):
2125925
PAR ID:
10350252
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Transactions on Accessible Computing
Volume:
15
Issue:
2
ISSN:
1936-7228
Page Range / eLocation ID:
1 to 43
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In the last decade, there has been a surge in development and mainstream adoption of Artificial Intelligence (AI) systems that can generate textual image descriptions from images. However, only a few of these, such as Microsoft’s SeeingAI, are specifically tailored to the needs of people who are blind screen reader users, and none of these have been brought to bear on the particular challenges faced by parents who desire image descriptions of children’s picture books. Such images have distinct qualities, but there exists no research to explore the current state of the art and opportunities to improve image-to-text AI systems for this problem domain. We conducted a content analysis of the image descriptions generated for a sample of 20 images selected from 17 recently published children’s picture books, using five AI systems: asticaVision, BLIP, SeeingAI, TapTapSee, and VertexAI. We found that descriptions varied widely in their accuracy and completeness, with only 13% meeting both criteria. Overall, our findings suggest a need for AI image-to-text generation systems that are trained on the types, contents, styles, and layouts characteristic of children’s picture book images, towards increased accessibility for blind parents.
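The content analysis above codes each generated description on two criteria, accuracy and completeness, and reports the share meeting both. A sketch of that tally logic; the coded records below are made-up examples, not the study's actual codes:

```python
# Hypothetical coding results: each description is labeled for
# accuracy and completeness. Records are illustrative only.
descriptions = [
    {"system": "BLIP", "accurate": True, "complete": False},
    {"system": "SeeingAI", "accurate": True, "complete": True},
    {"system": "TapTapSee", "accurate": False, "complete": False},
    {"system": "VertexAI", "accurate": True, "complete": True},
]

# Count descriptions meeting both criteria, then compute the share.
both = sum(d["accurate"] and d["complete"] for d in descriptions)
share = both / len(descriptions)
print(f"{share:.0%} meet both criteria")
```

With the study's 20 images and five systems (100 descriptions), the same computation over the real codes would yield the reported 13%.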
  2. As technology advances, accessibility is also being taken seriously. Many users with visual disabilities take advantage of, for example, Microsoft's Seeing AI application (app), which is equipped with artificial intelligence. The app helps people with visual disabilities recognize objects, people, text, and more via a smartphone's built-in camera. As users may use the app to recognize personally identifiable information, user privacy should be treated carefully and considered a top priority. Yet, little is known about user privacy issues among users with visual disabilities, so this study aims to address the knowledge gap by administering a questionnaire to Seeing AI users with visual disabilities. This study found that those with visual disabilities lacked knowledge about user privacy policies. It is recommended to offer adequate educational training so that those with visual disabilities can be well informed of user privacy policies, ultimately promoting safe online behavior that protects them from digital privacy and security problems.
  3.
    Systems that augment sensory abilities are increasingly employing AI and machine learning (ML) approaches, with applications ranging from object recognition and scene description tools for blind users to sound awareness tools for d/Deaf users. However, unlike many other AI-enabled technologies, these systems provide information that is already available to non-disabled people. In this paper, we discuss unique AI fairness challenges that arise in this context, including accessibility issues with data and models, ethical implications in deciding what sensory information to convey to the user, and privacy concerns both for the primary user and for others.
  4. The prevalence of smartphones in our society warrants more research on understanding the characteristics of users and their information privacy behaviors when using mobile apps. This paper investigates the antecedents and consequences of “power use” (i.e., the competence and desire to use technology to its fullest) in the context of informational privacy. In a study with 380 Android users, we examined how gender and users’ education level influence power use, and how power use affects users’ intention to install apps and share information with them versus their actual privacy behaviors (i.e., based on the number of apps installed and the total number of “dangerous permission” requests granted to those apps). Our findings revealed an inconsistency in the effect of power use on users’ information privacy behaviors: While the intention to install apps and to share information with them increased with power use, the actual number of installed apps and dangerous permissions ultimately granted decreased with power use. In other words, although the self-reported intentions suggested the opposite, people who scored higher on the power use scale seemed to be more prudent about their informational privacy than people who scored lower on the power use scale. We discuss the implications of this inconsistency and make recommendations for reconciling smartphone users’ informational privacy intentions and behaviors.
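The behavioral measures above are counts: apps installed and "dangerous" permission grants. A minimal sketch of how such counts could be derived from per-app permission lists; the permission names are a small subset of Android's real dangerous permissions, but the package names and installed-app data are hypothetical:

```python
# Small subset of Android's "dangerous"-level permissions (real names);
# the installed-app data below is entirely hypothetical.
DANGEROUS = {
    "android.permission.CAMERA",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
}

# Granted permissions per installed app for one hypothetical user.
apps = {
    "com.example.maps": ["android.permission.ACCESS_FINE_LOCATION",
                         "android.permission.INTERNET"],
    "com.example.chat": ["android.permission.CAMERA",
                         "android.permission.RECORD_AUDIO",
                         "android.permission.READ_CONTACTS"],
}

# The two behavioral measures: apps installed and dangerous grants.
n_apps = len(apps)
n_dangerous = sum(1 for perms in apps.values()
                  for p in perms if p in DANGEROUS)
print(n_apps, n_dangerous)
```

Note that "normal"-level permissions such as INTERNET are excluded from the dangerous count, matching the study's focus on the permissions Android gates behind runtime consent.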
  5. This project illuminates what data youth believe online advertisers and social media companies collect about them. We situate these findings within the context of current advertising regulations and compare youth beliefs with what data social media companies report collecting based on their privacy policies. Through interviews with 21 youth ages 10-17 in the United States, we learn that participants are largely aware of how their interactions on a website or app are used to inform personalized content. However, certain types of information, like geolocation or how long data is retained, are less clear to them. We also learn what school and family factors influence youth to adopt apps and websites. This work has implications for design and policy related to companies' personal data collection and targeted advertising, especially for youth.