

Title: Do Users Act Equitably? Understanding User Bias Through a Large In-person Study
Inequitable software is a common problem. Bias may be introduced by developers or even by software users. As a society, it is crucial that we understand and identify the causes and implications of software bias, whether it originates with users or with the software itself. To address the problem of inequitable software, it is essential that we inform and motivate the next generation of software developers regarding bias and its adverse impacts. However, research shows that there is a lack of easily adoptable, ethics-focused educational material to support this effort. We therefore created an easily adoptable, self-contained experiential activity designed to foster student interest in software ethics, with a specific emphasis on AI/ML bias. In this activity, participants select fictitious teammates based solely on their appearance; the participant then experiences bias, directed either at themselves or at a teammate, from the activity's fictitious AI. The created lab was then used in this study involving 173 real-world users (ages 18-51+) to better understand user bias. The primary findings of our study include: I) participants from minority ethnic groups have stronger feelings about being impacted by inequitable software/AI; II) participants with a higher interest in AI/ML place a higher priority on unbiased software; III) users do not act in an equitable manner, as avatars with 'dark' skin color are less likely to be selected; and IV) participants from different demographic groups exhibit similar biased behavior. The created experiential lab activity may be executed using only a browser and an internet connection, and is publicly available on our project website: https://all.rit.edu
Award ID(s):
2145010
PAR ID:
10459882
Journal Name:
2023 IEEE/ACM 45th International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Studies indicate that much of the software created today is not accessible to all users, suggesting that developers do not see the need to devote sufficient resources to creating accessible software. Compounding this problem, there is a lack of robust, easily adoptable educational accessibility material available to instructors for inclusion in their curricula. To address these issues, we have created five Accessibility Learning Labs (ALL) using an experiential learning structure. The labs are designed to educate and create awareness of accessibility needs in computing. They enable easy classroom integration by providing instructors with complete educational materials, including lecture slides, activities, and quizzes. The labs are hosted on our servers and require only a browser to use. To demonstrate the benefit of our material and the potential of our experiential lab format with empathy-creating material, we conducted a study involving 276 students in ten sections of an introductory computing course. Our findings include: (I) the proposed experiential learning format and labs are effective in motivating and educating students about the importance of accessibility; (II) the labs are effective in informing students about foundational accessibility topics; and (III) empathy-creating material is a beneficial component of computing accessibility education, supporting students in placing a higher value on the importance of creating accessible software. The created labs and project materials are publicly available on the project website: http://all.rit.edu
  2. The rapid adoption of generative AI in software development has impacted the industry, yet its effects on developers with visual impairments remain largely unexplored. To address this gap, we used an Activity Theory framework to examine how developers with visual impairments interact with AI coding assistants. For this purpose, we conducted a study where developers who are visually impaired completed a series of programming tasks using a generative AI coding assistant. We uncovered that, while participants found the AI assistant beneficial and reported significant advantages, they also highlighted accessibility challenges. Specifically, the AI coding assistant often exacerbated existing accessibility barriers and introduced new challenges. For example, it overwhelmed users with an excessive number of suggestions, leading developers who are visually impaired to express a desire for "AI timeouts." Additionally, the generative AI coding assistant made it more difficult for developers to switch contexts between the AI-generated content and their own code. Despite these challenges, participants were optimistic about the potential of AI coding assistants to transform the coding experience for developers with visual impairments. Our findings emphasize the need to apply activity-centered design principles to generative AI assistants, ensuring they better align with user behaviors and address specific accessibility needs. This approach can enable the assistants to provide more intuitive, inclusive, and effective experiences, while also contributing to the broader goal of enhancing accessibility in software development.
  3. The recent surge in artificial intelligence (AI) developments has been met with increased attention toward incorporating ethical engagement in machine learning discourse and development. This attention is noticeable within engineering education, where comprehensive ethics curricula are typically absent from engineering programs that train future engineers to develop AI technologies [1]. Artificial intelligence technologies operate as black boxes, presenting both developers and users with a certain level of obscurity concerning their decision-making processes and a diminished potential for negotiating with their outputs [2]. The implementation of collaborative and reflective learning has the potential to engage students with facets of ethical awareness that accompany algorithmic decision making, such as bias, security, transparency, and other ethical and moral dilemmas. However, there are few studies that examine how students learn AI ethics in electrical and computer engineering courses. This paper explores the integration of STEMtelling, a pedagogical storytelling method/sensibility, into an undergraduate machine learning course. STEMtelling is a novel approach that invites participants (STEMtellers) to center their own interests and experiences through writing and sharing engineering stories (STEMtells) that are connected to course objectives. Employing a case study approach grounded in activity theory, we explore how students learn the ethical awareness that is intrinsic to being an engineer. During the STEMtelling process, STEMtellers blur the boundaries between social and technical knowledge to place themselves at the center of knowledge production. In this WIP, we discuss algorithmic awareness as one of the themes identified as a practice in developing ethical awareness of AI through STEMtelling.
Findings from this study will be incorporated into the development of STEMtelling and will address the challenges of integrating ethics and the social perception of AI into machine learning courses.
  4. This tutorial will introduce our Accessibility Learning Labs (ALL). The objectives of this collaborative project with the National Technical Institute for the Deaf (NTID) are both to inform participants about foundational topics in accessibility and to demonstrate the importance of creating accessible software. The labs enable easy classroom inclusion by providing instructors with all necessary materials, including lecture and activity slides and videos. Each lab addresses an accessibility issue and contains: I) relevant background information on the examined issue; II) an example web-based application containing the accessibility problem; III) a process to emulate this accessibility problem; IV) details about how to repair the problem from a technical perspective; and V) incidents from people who encountered this accessibility issue and how it has impacted their lives. The labs may be easily integrated into a wide variety of curricula in high schools (grades 9-12) and in undergraduate and graduate courses. The labs will be easily adoptable due to their self-contained nature and their inclusion of all necessary instructional material (e.g., slides, quizzes, etc.). No special software is required to use any portion of the labs, since they are web-based and run on any computer with a reasonably recent web browser. There are currently four available labs on the topics of colorblindness, hearing, blindness, and dexterity. Material is available on our website: http://all.rit.edu This tutorial will provide an overview of the created labs, along with usage instructions and information for adopters.
  5. This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench “bias,” are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI’s long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society. 