The nexus between technology and workplace inequality has been a long-standing topic of scholarly interest, now heightened by the rapid evolution of artificial intelligence (AI). Our review moves beyond dystopian or utopian views of AI by identifying four perspectives—normative, cognitive, structural, and relational—espoused by scholars examining the impact of AI on workplace inequality specifically, and the structure and organization of work more broadly. We discuss the respective strengths, limitations, and underlying assumptions of these perspectives and highlight how each perspective speaks to a particular facet of workplace inequality: encoded, evaluative, wage, or relational inequality. Integrating these perspectives enables a deeper understanding of the mechanisms, processes, and trajectories through which AI influences workplace inequality, as well as the role that organizational managers, workers, and policymakers could play in the process. Toward this end, we introduce a framework on the “inequality cascades” of AI that traces how and when inequality emerges and amplifies cumulatively as AI systems progress through the phases of development, implementation, and use in organizations. In turn, we articulate a research agenda for management and organizational scholars to better understand AI and its multifaceted impact on workplace inequality, and we examine potential mechanisms to mitigate its adverse consequences.
Rethinking How We Theorize AI in Organization and Management: A Problematizing Review of Rationality and Anthropomorphism
Abstract Artificial intelligence (AI) has long held the promise of imitating, replacing, or even surpassing human intelligence. Now that the abilities of AI systems have started to approach this initial aspiration, organization and management scholars face a challenge in how to theorize this technology, which potentially changes the way we view technology: not as a tool, but as something that enters previously human-only domains. To navigate this theorizing challenge, we adopt the problematizing review method, engaging in a selective and critical reading of the theoretical contributions regarding AI in the most influential organization and management journals. We examine how the literature has grounded itself with AI as the root metaphor and which assumptions about AI are shared – or contested – in the field. We uncover two core assumptions, rationality and anthropomorphism, around which fruitful debates are already emerging. We discuss these two assumptions and their organizational boundary conditions in the context of theorizing AI. Finally, we invite scholars to build distinctive organization and management theory scaffolding within the broader social science of AI.
- Award ID(s): 2211943
- PAR ID: 10628112
- Publisher / Repository: Society for the Advancement of Management Studies and John Wiley & Sons Ltd.
- Date Published:
- Journal Name: Journal of Management Studies
- ISSN: 0022-2380
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like This
-
Abstract Artificial intelligence (AI) methods have revolutionized and redefined the landscape of data analysis in business, healthcare, and technology. These methods have driven innovation in applied mathematics, computer science, and engineering, and show considerable potential for risk science, especially in the disaster risk domain. The disaster risk field, however, has yet to establish itself as an application domain for AI by defining how to responsibly balance AI and disaster risk. This study addresses four questions: (1) How is AI being used for disaster risk applications, and how do these applications address the principles and assumptions of risk science? (2) What are the benefits of using AI for risk applications, and what are the benefits of applying risk principles and assumptions to AI-based applications? (3) What are the synergies between AI and risk science applications? (4) What characterizes effective use of fundamental risk principles and assumptions in AI-based applications? This study develops and disseminates an online survey questionnaire that leverages expertise from risk and AI professionals to identify the most important characteristics related to AI and risk, then presents a framework for gauging how AI and disaster risk can be balanced. This study is the first to develop a classification system for applying risk principles to AI-based applications. The classification contributes to the understanding of AI and risk by exploring how AI can be used to manage risk, how AI methods introduce new or additional risk, and whether fundamental risk principles and assumptions are sufficient for AI-based applications.
-
Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant research into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, such as speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations: first, advocacy is not a monolith and can be broken down into different levels. Second, individuals' willingness to advocate is affected by their professional field. For example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.
-
The field of artificial consciousness (AC) has largely developed outside of mainstream artificial intelligence (AI), with separate goals and criteria for success and with only a minimal exchange of ideas. This is unfortunate as the two fields appear to be synergistic. For example, here we consider the question of how concepts developed in AC research might contribute to more effective future AI systems. We first briefly discuss several past hypotheses about the function(s) of human consciousness, and present our own hypothesis that short-term working memory and very rapid learning should be a central concern in such matters. In this context, we then present ideas about how integrating concepts from AC into AI systems to develop an artificial conscious intelligence (ACI) could both produce more effective AI technology and contribute to a deeper scientific understanding of the fundamental nature of consciousness and intelligence.