Title: Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice
An emerging body of research indicates that ineffective cross-functional collaboration – the interdisciplinary work done by industry practitioners across roles – represents a major barrier to addressing issues of fairness in AI design and development. In this research, we sought to better understand practitioners’ current practices and tactics to enact cross-functional collaboration for AI fairness, in order to identify opportunities to support more effective collaboration. We conducted a series of interviews and design workshops with 23 industry practitioners spanning various roles from 17 companies. We found that practitioners engaged in bridging work to overcome frictions in understanding, contextualization, and evaluation around AI fairness across roles. In addition, in organizational contexts with a lack of resources and incentives for fairness work, practitioners often piggybacked on existing requirements (e.g., for privacy assessments) and AI development norms (e.g., the use of quantitative evaluation metrics), although they worry that these tactics may be fundamentally compromised. Finally, we draw attention to the invisible labor that practitioners take on as part of this bridging and piggybacking work to enact interdisciplinary collaboration for fairness. We close by discussing opportunities for both FAccT researchers and AI practitioners to better support cross-functional collaboration for fairness in the design and development of AI systems.
Award ID(s):
2040942
PAR ID:
10573771
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400701924
Page Range / eLocation ID:
705 to 716
Format(s):
Medium: X
Location:
Chicago IL USA
Sponsoring Org:
National Science Foundation
More Like this
  1. The development of Artificial Intelligence (AI) systems involves a significant level of judgment and decision making on the part of engineers and designers to ensure the safety, robustness, and ethical design of such systems. However, the kinds of judgments that practitioners employ while developing AI platforms are rarely foregrounded or examined to explore areas where practitioners might need ethical support. In this short paper, we employ the concept of design judgment to foreground and examine the kinds of sensemaking software engineers use to inform their decision-making while developing AI systems. Relying on data generated from two exploratory observation studies of student software engineers, we connect the concept of fairness to the foregrounded judgments to implicate their potential algorithmic fairness impacts. Our findings surface some ways in which the design judgment of software engineers could adversely impact the downstream goal of ensuring fairness in AI systems. We discuss the implications of these findings in fostering positive innovation and enhancing fairness in AI systems, drawing attention to the need to provide ethical guidance, support, or intervention to practitioners as they engage in situated and contextual judgments while developing AI systems.
  2. Background The rapid advancement of artificial intelligence (AI) is reshaping industrial workflows and workforce expectations. After its breakthrough year in 2023, AI has become ubiquitous, yet no standardized approach exists for integrating AI into engineering and computer science undergraduate curricula. Recent graduates find themselves navigating evolving industry demands surrounding AI, often without formal preparation. The ways in which AI impacts their career decisions represent a critical perspective to support future students as graduates enter AI-friendly industries. Our work uses social cognitive career theory (SCCT) to qualitatively investigate how 14 recent engineering graduates working in a variety of industry sectors perceived the impact of AI on their careers and industries. Results Given the rapid and ongoing evolution of AI, findings suggested that SCCT may have limited applicability until AI technology has matured further. Many recent graduates lacked prior exposure to or a clear understanding of AI and its relevance to their professional roles. The timing of direct, practical exposure to AI emerged as a key influence on how participants perceived AI’s impact on their career decisions. Participants emphasized a need for more customizable undergraduate curricula to align with industry trends and individual interests related to AI. While many acknowledged AI’s potential to enhance efficiency in data management and routine administrative tasks, they largely did not perceive AI as a direct threat to their core engineering functions. Instead, AI was viewed as a supplemental tool requiring critical oversight. Despite interest in AI’s potential, most participants lacked the time or resources to independently pursue integrating AI into their professional roles. Broader concerns included ethical considerations, industry regulations, and the rapid pace of AI development.
Conclusions This exploratory work highlights an urgent need for collaboration between higher education and industry leaders to more effectively integrate direct, hands-on experience with AI into engineering education. A personalized, context-driven approach to teaching AI that emphasizes ethical considerations and domain-specific applications would help better prepare students for evolving workforce expectations by highlighting AI’s relevance and limitations. This alignment would support more meaningful engagement with AI and empower future engineers to apply it responsibly and effectively in their fields. 
  3. Abstract Design artifacts provide a mechanism for illustrating design information and concepts, but their effectiveness relies on alignment across design agents in what these artifacts represent. This work investigates the agreement between multi-modal representations of design artifacts by humans and artificial intelligence (AI). Design artifacts are considered to constitute stimuli designers interact with to become inspired (i.e., inspirational stimuli), for which retrieval often relies on computational methods using AI. To facilitate this process for multi-modal stimuli, a better understanding of human perspectives of non-semantic representations of design information, e.g., by form or function-based features, is motivated. This work compares and evaluates human and AI-based representations of 3D-model parts by visual and functional features. Humans and AI were found to share consistent representations of visual and functional similarities, which aligned well with coarse, but not more granular, levels of similarity. Human–AI alignment was higher for identifying low compared to high similarity parts, suggesting mutual representation of features underlying more obvious than nuanced differences. Human evaluation of part relationships in terms of belonging to the same or different categories revealed that human and AI-derived relationships similarly reflect concepts of “near” and “far.” However, levels of similarity corresponding to “near” and “far” differed depending on the criteria evaluated, where “far” was associated with nearer visually than functionally related stimuli. These findings contribute to a fundamental understanding of human evaluation of information conveyed by AI-represented design artifacts needed for successful human–AI collaboration in design. 
  4. How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners’ awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.
  5. When their child is hospitalized, parents take on new caregiving roles, in addition to their existing home and work-related responsibilities. Previous CSCW research has shown how technologies can support caregiving, but more research is needed to systematically understand how technology could support parents and other family caregivers as they adopt new coordination roles in their collaborations with each other. This paper reports findings from an interview study with parents of children hospitalized for cancer treatment. We used the Role Theory framework from the social sciences to show how parents adopt and enact caregiving roles during hospitalization and the challenges they experience as they adapt to this stressful situation. We show how parents experience 'role strain' as they attempt to divide caregiving work and introduce the concept of 'inter-caregiver information disparity.' We propose design opportunities for caregiving coordination technologies to better support caregiving roles in multi-caregiver teams. 