This content will become publicly available on July 4, 2026

Title: Rethinking How We Theorize AI in Organization and Management: A Problematizing Review of Rationality and Anthropomorphism
Abstract Artificial intelligence (AI) has long held the promise of imitating, replacing, or even surpassing human intelligence. Now that the abilities of AI systems have started to approach this initial aspiration, organization and management scholars face a challenge in how to theorize this technology, which potentially changes the way we view technology: not as a tool, but as something that enters previously human-only domains. To navigate this theorizing challenge, we adopt the problematizing review method, engaging in a selective and critical reading of the theoretical contributions regarding AI in the most influential organization and management journals. We examine how the literature has grounded itself with AI as the root metaphor and which assumptions about AI are shared – or contested – across the field. We uncover two core assumptions, rationality and anthropomorphism, around which fruitful debates are already emerging. We discuss these two assumptions and their organizational boundary conditions in the context of theorizing AI. Finally, we invite scholars to build distinctive organization and management theory scaffolding within the broader social science of AI.
Award ID(s):
2211943
PAR ID:
10628112
Publisher / Repository:
Society for the Advancement of Management Studies and John Wiley & Sons Ltd.
Date Published:
Journal Name:
Journal of Management Studies
ISSN:
0022-2380
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The nexus between technology and workplace inequality has been a long-standing topic of scholarly interest, now heightened by the rapid evolution of artificial intelligence (AI). Our review moves beyond dystopian or utopian views of AI by identifying four perspectives—normative, cognitive, structural, and relational—espoused by scholars examining the impact of AI on workplace inequality specifically, and the structure and organization of work more broadly. We discuss the respective strengths, limitations, and underlying assumptions of these perspectives and highlight how each perspective speaks to a particular facet of workplace inequality: either encoded, evaluative, wage, or relational inequality. Integrating these perspectives enables a deeper understanding of the mechanisms, processes, and trajectories through which AI influences workplace inequality, as well as the role that organizational managers, workers, and policymakers could play in the process. Toward this end, we introduce a framework on the “inequality cascades” of AI that traces how and when inequality emerges and amplifies cumulatively as AI systems progress through the phases of development, implementation, and use in organizations. In turn, we articulate a research agenda for management and organizational scholars to better understand AI and its multifaceted impact on workplace inequality, and we examine potential mechanisms to mitigate its adverse consequences. 
  2. Abstract Artificial intelligence (AI) methods have revolutionized and redefined the landscape of data analysis in business, healthcare, and technology. These methods have innovated the applied mathematics, computer science, and engineering fields and are showing considerable potential for risk science, especially in the disaster risk domain. The disaster risk field has yet to define itself as a necessary application domain for AI implementation by defining how to responsibly balance AI and disaster risk. This study addresses four questions: (1) How is AI being used for disaster risk applications, and how are these applications addressing the principles and assumptions of risk science? (2) What are the benefits of AI being used for risk applications, and what are the benefits of applying risk principles and assumptions for AI-based applications? (3) What are the synergies between AI and risk science applications? (4) What are the characteristics of effective use of fundamental risk principles and assumptions for AI-based applications? This study develops and disseminates an online survey questionnaire that leverages expertise from risk and AI professionals to identify the most important characteristics related to AI and risk, then presents a framework for gauging how AI and disaster risk can be balanced. This study is the first to develop a classification system for applying risk principles to AI-based applications. This classification contributes to the understanding of AI and risk by exploring how AI can be used to manage risk, how AI methods introduce new or additional risk, and whether fundamental risk principles and assumptions are sufficient for AI-based applications.
  3. Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant study into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, such as speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations: first, advocacy is not a monolith and can be broken down into different levels. Second, individuals' willingness to advocate is affected by their professional field. For example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.
  4. The field of artificial consciousness (AC) has largely developed outside of mainstream artificial intelligence (AI), with separate goals and criteria for success and with only a minimal exchange of ideas. This is unfortunate as the two fields appear to be synergistic. For example, here we consider the question of how concepts developed in AC research might contribute to more effective future AI systems. We first briefly discuss several past hypotheses about the function(s) of human consciousness, and present our own hypothesis that short-term working memory and very rapid learning should be a central concern in such matters. In this context, we then present ideas about how integrating concepts from AC into AI systems to develop an artificial conscious intelligence (ACI) could both produce more effective AI technology and contribute to a deeper scientific understanding of the fundamental nature of consciousness and intelligence. 
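To make the hypothesis above concrete, here is a minimal toy sketch of an agent memory in which short-term working memory is updated through very rapid (one-shot) learning while long-term associations change only slowly. This is purely an illustration under our own assumptions, not the authors' proposed ACI architecture; the names, capacity, and learning rate are hypothetical.

```python
# Toy illustration (not from the paper) of the hypothesis that short-term
# working memory plus very rapid learning could be central to an artificial
# conscious intelligence (ACI). All names and values here are assumptions.

WM_CAPACITY = 7     # assumed small working-memory capacity
SLOW_RATE = 0.01    # assumed slow long-term learning rate

working_memory = []  # rapid, one-shot storage of recent cue-value bindings
long_term = {}       # slowly learned association strengths

def observe(cue, value):
    # Rapid learning: bind cue -> value immediately in working memory.
    working_memory.append((cue, value))
    if len(working_memory) > WM_CAPACITY:
        working_memory.pop(0)  # forget the oldest binding
    # Slow learning: nudge the long-term association toward the value.
    old = long_term.get(cue, 0.0)
    long_term[cue] = old + SLOW_RATE * (value - old)

def recall(cue):
    # Working memory dominates when it holds a recent binding;
    # otherwise fall back on the slowly learned long-term estimate.
    for c, v in reversed(working_memory):
        if c == cue:
            return v
    return long_term.get(cue)

observe("alarm", 1.0)
print(recall("alarm"))      # 1.0: available after a single exposure
print(long_term["alarm"])   # 0.01: the long-term store has barely moved
```

The design point the sketch isolates is the timescale split: recent bindings are usable immediately after one exposure, while the long-term store only drifts toward them over many observations.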
  5. Abstract Recent advances in generative artificial intelligence (AI) and multimodal learning analytics (MMLA) have allowed for new and creative ways of leveraging AI to support K12 students' collaborative learning in STEM+C domains. To date, there is little evidence of AI methods supporting students' collaboration in complex, open-ended environments. AI systems are known to underperform humans in (1) interpreting students' emotions in learning contexts, (2) grasping the nuances of social interactions and (3) understanding domain-specific information that was not well-represented in the training data. As such, combined human and AI (ie, hybrid) approaches are needed to overcome the current limitations of AI systems. In this paper, we take a first step towards investigating how a human-AI collaboration between teachers and researchers using an AI-generated multimodal timeline can guide and support teachers' feedback while addressing students' STEM+C difficulties as they work collaboratively to build computational models and solve problems. In doing so, we present a framework characterizing the human component of our human-AI partnership as a collaboration between teachers and researchers. To evaluate our approach, we present our timeline to a high school teacher and discuss the key insights gleaned from our discussions. Our case study analysis reveals the effectiveness of an iterative approach to using human-AI collaboration to address students' STEM+C challenges: the teacher can use the AI-generated timeline to guide formative feedback for students, and the researchers can leverage the teacher's feedback to help improve the multimodal timeline. Additionally, we characterize our findings with respect to two events of interest to the teacher: (1) when the students cross a difficulty threshold, and (2) the point of intervention, that is, when the teacher (or system) should intervene to provide effective feedback. It is important to note that the teacher explained that there should be a lag between (1) and (2) to give students a chance to resolve their own difficulties. Typically, such a lag is not implemented in computer-based learning environments that provide feedback.
Practitioner notes
What is already known about this topic
- Collaborative, open-ended learning environments enhance students' STEM+C conceptual understanding and practice, but they introduce additional complexities when students learn concepts spanning multiple domains.
- Recent advances in generative AI and MMLA allow for integrating multiple datastreams to derive holistic views of students' states, which can support more informed feedback mechanisms to address students' difficulties in complex STEM+C environments.
- Hybrid human-AI approaches can help address collaborating students' STEM+C difficulties by combining the domain knowledge, emotional intelligence and social awareness of human experts with the general knowledge and efficiency of AI.
What this paper adds
- We extend a previous human-AI collaboration framework using a hybrid intelligence approach to characterize the human component of the partnership as a researcher-teacher partnership and present our approach as a teacher-researcher-AI collaboration.
- We adapt an AI-generated multimodal timeline to actualize our human-AI collaboration by pairing the timeline with videos of students encountering difficulties, engaging in active discussions with a high school teacher while watching the videos to discern the timeline's utility in the classroom.
- From our discussions with the teacher, we define two types of inflection points to address students' STEM+C difficulties, the difficulty threshold and the intervention point, and discuss how the feedback latency interval separating them can inform educator interventions (a sketch of this logic follows these notes).
- We discuss two ways in which our teacher-researcher-AI collaboration can help teachers support students encountering STEM+C difficulties: (1) teachers using the multimodal timeline to guide feedback for students, and (2) researchers using teachers' input to iteratively refine the multimodal timeline.
Implications for practice and/or policy
- Our case study suggests that timeline gaps (ie, disengaged behaviour identified by off-screen students, pauses in discourse and lulls in environment actions) are particularly important for identifying inflection points and formulating formative feedback.
- Human-AI collaboration exists on a dynamic spectrum and requires varying degrees of human control and AI automation depending on the context of the learning task and students' work in the environment.
- Our analysis of this human-AI collaboration using a multimodal timeline can be extended in the future to support students and teachers in additional ways, for example, designing pedagogical agents that interact directly with students, developing intervention and reflection tools for teachers, helping teachers craft daily lesson plans and aiding teachers and administrators in designing curricula.
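As a concrete illustration of the difficulty threshold, intervention point, and feedback latency interval described above, the following minimal Python sketch scans a stream of timeline events and suggests when to intervene. It is our own hedged reconstruction, not the paper's system: the TimelineEvent structure, the fused difficulty score, and the threshold and latency values are all assumed for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TimelineEvent:
    timestamp: float   # seconds into the session
    difficulty: float  # assumed fused multimodal difficulty estimate in [0, 1]

def find_intervention_point(
    events: List[TimelineEvent],   # assumed sorted by timestamp
    threshold: float = 0.7,        # assumed difficulty threshold
    latency: float = 120.0,        # assumed feedback latency interval, seconds
) -> Optional[float]:
    """Return a suggested intervention time, or None.

    Mirrors the lag described above: intervene only if difficulty stays
    above the threshold for the whole latency interval, giving students
    a chance to resolve the difficulty on their own first.
    """
    crossed_at: Optional[float] = None
    for event in events:
        if event.difficulty >= threshold:
            if crossed_at is None:
                crossed_at = event.timestamp           # difficulty threshold crossed
            elif event.timestamp - crossed_at >= latency:
                return event.timestamp                 # intervention point reached
        else:
            crossed_at = None                          # difficulty resolved unaided
    return None

# Hypothetical usage: difficulty rises at t=60 s and persists, so the
# sketch suggests intervening two minutes later.
events = [TimelineEvent(float(t), 0.9 if t >= 60 else 0.2)
          for t in range(0, 301, 30)]
print(find_intervention_point(events))  # 180.0
```

In a fuller version, the timeline gaps flagged in the practitioner notes (pauses in discourse, lulls in environment actions) could feed into the difficulty estimate, but how the paper fuses its modalities is not specified here.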