The workplace has experienced extensive digital transformation, in part due to artificial intelligence's commercial availability. Though still an emerging technology, emotional artificial intelligence (EAI) is increasingly incorporated into enterprise systems to augment and automate organizational decisions and to monitor and manage workers. EAI use is often celebrated for its potential to improve workers' wellbeing and performance and to address organizational problems such as bias and safety. Workers subject to EAI in the workplace are data subjects whose data make EAI possible and who are most affected by it. However, we lack empirical knowledge about data subjects' perspectives on EAI, including in the workplace. To this end, using a relational ethics lens, we qualitatively analyzed open-ended survey responses from 395 U.S. adults (a partly representative sample) regarding the benefits and risks they perceive in being subject to EAI in the workplace. While participants acknowledged potential benefits of being subject to EAI (e.g., employers using EAI to aid their wellbeing, enhance their work environment, and reduce bias), a myriad of potential risks overshadowed these perceived benefits. Participants expressed concern that EAI use could harm their wellbeing, work environment, and employment status, and create and amplify bias and stigma against them, especially the most marginalized (e.g., along dimensions of race, gender, mental health status, and disability). Distrustful of EAI and its potential risks, participants anticipated conforming to EAI implementation in practice (e.g., by performing emotional labor) or refusing it (e.g., by quitting a job). We argue that EAI may magnify, rather than alleviate, existing challenges data subjects face in the workplace, and we suggest that some EAI-inflicted harms would persist even if concerns about EAI's accuracy and bias were addressed.
Artificial Intelligence at Work: An Integrative Perspective on the Impact of AI on Workplace Inequality
The nexus between technology and workplace inequality has been a long-standing topic of scholarly interest, now heightened by the rapid evolution of artificial intelligence (AI). Our review moves beyond dystopian or utopian views of AI by identifying four perspectives—normative, cognitive, structural, and relational—espoused by scholars examining the impact of AI on workplace inequality specifically, and the structure and organization of work more broadly. We discuss the respective strengths, limitations, and underlying assumptions of these perspectives and highlight how each perspective speaks to a particular facet of workplace inequality: either encoded, evaluative, wage, or relational inequality. Integrating these perspectives enables a deeper understanding of the mechanisms, processes, and trajectories through which AI influences workplace inequality, as well as the role that organizational managers, workers, and policymakers could play in the process. Toward this end, we introduce a framework on the “inequality cascades” of AI that traces how and when inequality emerges and amplifies cumulatively as AI systems progress through the phases of development, implementation, and use in organizations. In turn, we articulate a research agenda for management and organizational scholars to better understand AI and its multifaceted impact on workplace inequality, and we examine potential mechanisms to mitigate its adverse consequences.
- Award ID(s): 2239538
- PAR ID: 10634009
- Publisher / Repository: Academy of Management Annals
- Date Published:
- Journal Name: Academy of Management Annals
- Volume: 19
- Issue: 2
- ISSN: 1941-6520
- Page Range / eLocation ID: 693 to 735
- Subject(s) / Keyword(s): artificial intelligence; workplace; automation; augmentation; organizations; economy; large language models
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Undergraduate education in the US is racially/ethnically stratified, and there is limited mobility for Black and Latinx BS recipients in STEM majors into the PhD programs from which faculty hiring disproportionately occurs. Bridge programs are proliferating as a means of increasing minoritized students' enrollment in STEM graduate programs, but little social science examines the mechanisms of their impact or how impacts depend on the graduate programs to which students seek access. This sequential mixed methods study of the Cal-Bridge program analyzed trust networks and mechanisms of relational trust as factors in graduate school application, admissions, and enrollment decisions. First, using social network analysis, we examined patterns in the graduate programs to which seven cohorts of Cal-Bridge scholars applied, were admitted, and chose to enroll. Then, we conducted an in-depth case study of the organization in the Cal-Bridge network with the highest centrality: University of California, Irvine's physics and astronomy PhD program. We find the positive admission and enrollment outcomes at UC Irvine were due to intentional, institutional change at multiple organizational levels. Change efforts complemented the activities of the Cal-Bridge program, creating conditions that cultivated lived experiences of mutual, relational trust between bridge scholars and their faculty advisors and mentors. Findings illustrate mechanisms and antecedents of trust in the transition to graduate education. We use these findings to propose a framework that may inform the design of future research and practical efforts to account for the role of trust in inequities and in creating more equitable cultures in STEM.
-
Artificial intelligence (AI) has long held the promise of imitating, replacing, or even surpassing human intelligence. Now that the abilities of AI systems have started to approach this initial aspiration, organization and management scholars face a challenge in how to theorize this technology, which potentially changes the way we view technology: not as a tool, but as something that enters previously human-only domains. To navigate this theorizing challenge, we adopt the problematizing review method, engaging in a selective and critical reading of the theoretical contributions regarding AI in the most influential organization and management journals. We examine how the literature has grounded itself with AI as the root metaphor and which assumptions about AI are shared, or contested, in the field. We uncover two core assumptions, rationality and anthropomorphism, around which fruitful debates are already emerging. We discuss these two assumptions and their organizational boundary conditions in the context of theorizing AI. Finally, we invite scholars to build distinctive organization and management theory scaffolding within the broader social science of AI.
-
This research examines supply chain collaboration effects on organizational performance in global value chain (GVC) infrastructure by focusing on GVC disaggregation, market turbulence, inequality, market globalization, product diversity, exploitation, and technological breakthroughs. The research strives to develop a better understanding of global value chains through relational view, behavioral, and contingency theories, along with institutional and stakeholder theories of supply chains. Based on conflicting insights from these theories, this research investigates how relationships and operational outcomes of collaboration fare when market turbulence is present. Data were obtained and analyzed from focal firms that are engaged in doing business in emerging markets (e.g., India) and headquartered in the United States. We investigated relational outcomes (e.g., trust, credibility, mutual respect, and relationship commitment) among supply chain partners and found that these relational outcomes result in better operational outcomes (e.g., profitability, market share increase, revenue generation). From a managerial standpoint, supply chain managers should focus on relational outcomes that can strengthen operational outcomes in GVCs, resulting in stronger organizational performance. The research offers valuable insights for the theory and practice of global value chains by focusing on GVC disaggregation through the measurement of market turbulence, which plays a key role in the success of collaborative buyer–supplier relationships (with a focus on US companies doing business in India), leading to overall improved firm performance.
-
The mental health crisis in the United States spotlights the need for more scalable training for mental health workers. While present-day AI systems have sparked hope for addressing this problem, we must not be too quick to incorporate or solely focus on technological advancements. We must ask empirical questions about how to ethically collaborate with and integrate autonomous AI into the clinical workplace. For these Human-Autonomy Teams (HATs), poised to make the leap into the mental health domain, special consideration around the construct of trust is in order. A reflexive look toward the multidisciplinary nature of such HAT projects illuminates the need for a deeper dive into varied stakeholder considerations of ethics and trust. In this paper, we investigate the impact of domain, and the range of expertise within domains, on ethics- and trust-related considerations for HATs in mental health. We outline our engagement of 23 participants in two speculative activities: design fiction and factorial survey vignettes. Grounded by a video storyboard prototype, AI- and psychotherapy-domain experts and novices alike imagined TEAMMAIT, a prospective AI system for psychotherapy training. From our inductive analysis emerged 10 themes surrounding ethics, trust, and collaboration. Three can be seen as substantial barriers to trust and collaboration, where participants imagined they would not work with an AI teammate that did not meet these ethical standards. Another five themes can be seen as interrelated, context-dependent, and variable factors of trust that affect collaboration with an AI teammate. The final two themes represent more explicit engagement with the prospective role of an AI teammate in psychotherapy training practices. We conclude by evaluating our findings through the lens of Mayer et al.'s Integrative Model of Organizational Trust to discuss the risks of HATs and to adapt models of ability-, benevolence-, and integrity-based trust. These updates motivate implications for the design and integration of HATs in mental health work.
