Title: Metahuman systems = humans + machines that learn
Metahuman systems are new, emergent, sociotechnical systems in which machines that learn join human learning and create original systemic capabilities. Metahuman systems will change many facets of the way we think about organizations and work. They will push information systems research in new directions that may involve a revision of the field’s research goals, methods, and theorizing. Information systems researchers can look beyond the capabilities and constraints of human learning toward hybrid human/machine learning systems that exhibit major differences in scale, scope, and speed. We review how these changes influence organization design and goals. We identify four generic organization-level functions critical to organizing metahuman systems properly: delegating, monitoring, cultivating, and reflecting. We show how each function raises new research questions for the field. We conclude by noting that improved understanding of metahuman systems will come primarily from learning-by-doing, as information systems scholars try out new forms of hybrid learning in multiple settings to generate novel, generalizable, impactful designs. This need for large-scale experimentation will push many scholars out of their comfort zones, because it calls for revitalizing the action research programs that informed the first wave of sociotechnical research at the dawn of automating work systems.
Award ID(s):
1717473 1909803 1745463 1442840 1422066
NSF-PAR ID:
10171168
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Journal of Information Technology
ISSN:
0268-3962
Page Range / eLocation ID:
026839622091591
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Today’s classrooms are remarkably different from those of yesteryear. In place of individual students responding to the teacher from neat rows of desks, one more typically finds students working in groups on projects, with a teacher circulating among groups. AI applications in learning have been slow to catch up, with most available technologies focusing on personalizing or adapting instruction to learners as isolated individuals. Meanwhile, an established science of Computer Supported Collaborative Learning has come to prominence, with clear implications for how collaborative learning could best be supported. In this contribution, I will consider how intelligence augmentation could evolve to support collaborative learning, as well as three signature challenges of this work that could drive AI forward.
    In conceptualizing collaborative learning, Kirschner and Erkens (2013) provide a useful 3x3 framework in which there are three aspects of learning (cognitive, social, and motivational), three levels (community, group/team, and individual), and three kinds of pedagogical supports (discourse-oriented, representation-oriented, and process-oriented). As they engage in this multiply complex space, teachers and learners are both learning to collaborate and collaborating to learn. Further, questions of equity arise as we consider who is able to participate and in which ways. Overall, this analysis helps us see the complexity of today’s classrooms and, within this complexity, the opportunities for augmentation or “assistance” to become important and even essential.
    An overarching design concept has emerged in the past five years in response to this complexity: the idea of intelligence augmentation for “orchestrating” classrooms (Dillenbourg et al., 2013). As a metaphor, orchestration can suggest the need for a coordinated performance among many agents who are each playing different roles or voicing different ideas. Practically speaking, orchestration suggests that intelligence augmentation could help many smaller things go well, and in doing so, could enable the overall intention of the learning experience to succeed. Those smaller things could include helping the teacher stay aware of students or groups who need attention, supporting formation of groups or transitions from one activity to the next, facilitating productive social interactions in groups, suggesting learning resources that would support teamwork, and more. A recent panel of AI experts identified orchestration as an overarching concept that is an important focus for near-term research and development for intelligence augmentation (Roschelle, Lester & Fusco, 2020).
    Tackling this challenging area of collaborative learning could also be beneficial for advancing AI technologies overall. Building AI agents that better understand the social context of human activities has broad importance, as does designing AI agents that can appropriately interact within teamwork. Collaborative learning has a trajectory over time, and designing AI systems that support teams not just with a short-term recommendation or suggestion but in long-term developmental processes is important.
Further, classrooms that are engaged in collaborative learning could become very interesting hybrid environments, with multiple human and AI agents present at once and addressing the dual outcome goals of learning to collaborate and collaborating to learn; addressing a hybrid environment like this could lead to AI systems that more robustly help many types of realistic human activity. In conclusion, the opportunity to make a societal impact by attending to collaborative learning, the availability of a growing science of computer-supported collaborative learning, and the need to push new boundaries in AI together suggest collaborative learning as a challenge worth tackling in coming years.
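    To make the Kirschner and Erkens (2013) design space above concrete, the following is a minimal sketch in Python, assuming nothing beyond the abstract itself: the three dimensions rendered as enums, plus a hypothetical OrchestrationSupport record for tagging an orchestration feature by the cell of the framework it addresses. All names here are illustrative and appear in neither cited paper.

    from dataclasses import dataclass
    from enum import Enum
    from itertools import product

    # The three dimensions described in the abstract.
    class Aspect(Enum):
        COGNITIVE = "cognitive"
        SOCIAL = "social"
        MOTIVATIONAL = "motivational"

    class Level(Enum):
        COMMUNITY = "community"
        GROUP = "group/team"
        INDIVIDUAL = "individual"

    class Support(Enum):
        DISCOURSE = "discourse-oriented"
        REPRESENTATION = "representation-oriented"
        PROCESS = "process-oriented"

    # Hypothetical record tagging one orchestration feature with the
    # framework cell it is meant to address.
    @dataclass
    class OrchestrationSupport:
        name: str
        aspect: Aspect
        level: Level
        support: Support

    # Example: an alert helping the teacher notice a group needing attention.
    attention_alert = OrchestrationSupport(
        name="teacher attention alert",
        aspect=Aspect.SOCIAL,
        level=Level.GROUP,
        support=Support.PROCESS,
    )

    # Crossing the three dimensions yields 27 combinations.
    design_space = list(product(Aspect, Level, Support))
    print(len(design_space))  # 27

    One plausible use of such a tagging scheme would be coverage analysis: checking which cells of the design space a given orchestration system supports and which it leaves entirely to the teacher.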
  2. The brain is arguably the most powerful computation system known. It is extremely efficient in processing large amounts of information and can discern signals from noise, adapt, and filter faulty information, all while running on only 20 watts of power. The human brain's processing efficiency, progressive learning, and plasticity are unmatched by any computer system. Recent advances in stem cell technology have elevated the field of cell culture to higher levels of complexity, such as the development of three-dimensional (3D) brain organoids that recapitulate human brain functionality better than traditional monolayer cell systems. Organoid Intelligence (OI) aims to harness the innate biological capabilities of brain organoids for biocomputing and synthetic intelligence by interfacing them with computer technology. With the latest strides in stem cell technology, bioengineering, and machine learning, we can explore the ability of brain organoids to compute and store given information (input) and execute a task (output), and study how this affects the structural and functional connections in the organoids themselves. Furthermore, understanding how learning generates and changes patterns of connectivity in organoids can shed light on the early stages of cognition in the human brain. Investigating and understanding these concepts is an enormous, multidisciplinary endeavor that necessitates the engagement of both the scientific community and the public. Thus, on February 22–24, 2022, the Johns Hopkins University held the first Organoid Intelligence Workshop to form an OI Community and to lay the groundwork for the establishment of OI as a new scientific discipline. The potential of OI to revolutionize computing, neurological research, and drug development was discussed, along with a vision and roadmap for its development over the coming decade.

     
  3. Who, and by what means, ensures that engineering education evolves to meet the ever-changing needs of our society? This and other papers presented by our research team at this conference offer our initial set of findings from an NSF-sponsored collaborative study on engineering education reform. Organized around the notion of higher education governance and the practice of educational reform, our open-ended study is based on conducting semi-structured interviews at over three dozen universities and engineering professional societies and organizations, along with a handful of scholars engaged in engineering education research. Organized as a multi-site, multi-scale study, our goal is to document differences in perspectives and interests that exist across organizational levels and institutions, and to describe the coordination that occurs (or fails to occur) in engineering education given the distributed structure of the engineering profession.
    This paper offers for all engineering educators and administrators a qualitative and retrospective analysis of ABET EC 2000 and its implementation. The paper opens with a historical background on the Engineers' Council for Professional Development (ECPD) and engineering accreditation; the rise of quantitative standards during the 1950s as a result of the push to implement an engineering science curriculum appropriate to the Cold War era; EC 2000 and its call for greater emphasis on professional skill sets amidst concerns about US manufacturing productivity and national competitiveness; the development of outcomes assessment and its implementation; and the successive negotiations about assessment practice and the training of both program evaluators and assessment coordinators for the degree programs undergoing evaluation. It was these negotiations and the evolving practice of assessment that resulted in the latest set of changes in ABET engineering accreditation criteria (“1-7” versus “a-k”).
    To provide insight into the origins of EC 2000, we describe the “Gang of Six,” a group of individuals loyal to ABET who used the pressure exerted by external organizations, along with a shared rhetoric of national competitiveness, to forge a common vision organized around an expanded emphasis on professional skill sets. It was also significant that the Gang of Six was aware that the regional accreditation agencies were already contemplating a shift towards outcomes assessment; several also had a background in industrial engineering. However, this resulted in an assessment protocol for EC 2000 that remained ambiguous about whether the stated learning outcomes (Criterion 3) were something faculty had to demonstrate for all of their students, or whether EC 2000’s main emphasis was continuous improvement. When it proved difficult to demonstrate learning outcomes on the part of all students, ABET itself began to place greater emphasis on total quality management and continuous process improvement (TQM/CPI). This gave institutions an opening to begin using increasingly limited and proximate measures for the “a-k” student outcomes as evidence of effort and improvement. In what social scientists would describe as “tactical” resistance to perceived oppressive structures, this enabled ABET coordinators and the faculty in charge of degree programs, many of whom had their own internal improvement processes, to begin referring to the a-k criteria as “difficult to achieve” and “ambiguous,” which they sometimes were.
    Inconsistencies in evaluation outcomes enabled those most discontented with the a-k student outcomes to use ABET’s own organizational processes to drive the latest revisions to EAC accreditation criteria, although the organization’s own process for member and stakeholder input ultimately restored much of the professional skill sets found in the original EC 2000 criteria. Other refinements were also made to the standard, including a new emphasis on diversity. This said, many within our interview population believe that EC 2000 had already achieved many of the changes it set out to achieve, especially with regard to broader professional skills such as communication, teamwork, and design. Regular faculty review of curricula is now also a more routine part of the engineering education landscape. While programs vary in their engagement with ABET, many are skeptical about whether the new criteria will produce further improvements to their programs, with many arguing that their own internal processes are now the primary drivers for change.
  4. International collaboration between collections, aggregators, and researchers within the biodiversity community and beyond is becoming increasingly important in our efforts to support biodiversity, conservation, and the life of the planet. The social, technical, logistical, and financial aspects of an equitable biodiversity data landscape, from workforce training and mobilization of linked specimen data to data integration, use, and publication, must be considered globally and within the context of a growing biodiversity crisis.
    In recent years, several initiatives have outlined paths forward that describe how digital versions of natural history specimens can be extended and linked with associated data. In the United States, Webster (2017) presented the “extended specimen”, which was expanded upon by Lendemer et al. (2019) through the work of the Biodiversity Collections Network (BCoN). At the same time, a “digital specimen” concept was developed by DiSSCo in Europe (Hardisty 2020). Both concepts depict a digital proxy of an analog natural history specimen, with slight variation in how an extended or digital specimen model would be executed. The digital nature of such a proxy provides greater capabilities: it is machine-processable; it can be linked with associated data; it makes information-rich biodiversity data globally accessible; it improves tracking, attribution, and annotation; it opens additional opportunities for data use and cross-disciplinary collaborations; and it forms the basis for FAIR (Findable, Accessible, Interoperable, Reusable) and equitable sharing of benefits worldwide.
    Recognizing the need to align the two closely related concepts, and to provide a place for open discussion around various topics of the Digital Extended Specimen (DES; the current working name for the joined concepts), we initiated a virtual consultation on the Discourse platform hosted by the Alliance for Biodiversity Knowledge through GBIF. This platform provided a forum for threaded discussions around topics related and relevant to the DES. The goals of the consultation align with the goals of the Alliance for Biodiversity Knowledge: expand participation in the process, build support for further collaboration, identify use cases, identify significant challenges and obstacles, and develop a comprehensive roadmap towards achieving the vision for a global specification for data integration.
    In early 2021, Phase 1 launched with five topics: Making FAIR data for specimens accessible; Extending, enriching and integrating data; Annotating specimens and other data; Data attribution; and Analyzing/mining specimen data for novel applications. This round of full discussion was productive and engaged dozens of contributors, with hundreds of posts and thousands of views. During Phase 1, several deeper, more technical, or additional topics of relevance were identified and formed the foundation for Phase 2, which began in May 2021 with the following topics: Robust access points and data infrastructure alignment; Persistent identifier (PID) scheme(s); Meeting legal/regulatory, ethical and sensitive data obligations; Workforce capacity development and inclusivity; Transactional mechanisms and provenance; and Partnerships to collaborate more effectively. In Phase 2, fruitful progress was made towards solutions to some of these complex functional and technical long-term goals.
    Simultaneously, our commitment to open participation was reinforced through increased efforts to involve new voices from allied and complementary fields. Among a wealth of ideas expressed, the community highlighted the need for: unambiguous persistent identifiers and a dedicated agent to assign them; support for a fully linked system that includes robust publishing mechanisms; strong social structures that build trustworthiness of the system; appropriate attribution of legacy and new work; a system that is inclusive, removed from colonial practices, and supportive of creative use of biodiversity data; a truly global data infrastructure; a balance of open access with legal obligations and ethical responsibilities; and the partnerships necessary for success. These two consultation periods, and the myriad activities surrounding the online discussion, produced a wide variety of perspectives, strategies, and approaches to converging the digital and extended specimen concepts and progressing plans for the DES: steps necessary to improve access to research-ready data to advance our understanding of the diversity and distribution of life. Discussions continue, and we hope to include your contributions to the DES in future implementation plans.
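    As an illustration only, since no DES data model has been finalized and every field name below is an assumption rather than a community decision, a digital specimen record of the kind discussed above might minimally pair a persistent identifier with links to the physical specimen and its associated data:

    from dataclasses import dataclass, field

    @dataclass
    class DigitalSpecimen:
        """Hypothetical minimal record for a Digital Extended Specimen.

        Field names are illustrative assumptions; the consultation described
        above is what will eventually settle the real schema.
        """
        pid: str                    # persistent identifier assigned by a dedicated agent
        physical_specimen_id: str   # catalog number of the analog specimen
        institution: str            # holding institution, for attribution
        linked_data: dict[str, str] = field(default_factory=dict)  # e.g. sequences, images
        annotations: list[str] = field(default_factory=list)       # community annotations

    # Example with placeholder identifiers (made up for this sketch).
    sheet = DigitalSpecimen(
        pid="https://example.org/pid/ABC123",
        physical_specimen_id="EXAMPLE-HERB-000001",
        institution="Example Herbarium",
    )
    sheet.linked_data["sequence"] = "https://example.org/seq/XYZ"  # placeholder link
    sheet.annotations.append("Determination updated 2021")
    print(sheet.pid)

    Even a record this small reflects several of the community priorities above: a PID, attribution to the holding institution, and extensible links and annotations.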
  5. In 2016, 10 universities launched a Networked Improvement Community (NIC) aimed at increasing the number of scholars from Alliances for Graduate Education and the Professoriate (AGEP) populations entering science, technology, engineering, and mathematics (STEM) faculty careers. NICs bring together stakeholders focused on a common goal to accelerate innovation through structured, ongoing intervention development, implementation, and refinement. We theorized that a NIC organizational structure would aid understanding of a complex problem in different contexts and accelerate opportunities to develop and improve interventions to address the problem. A distinctive feature of this NIC is its diverse institutional composition: public and private predominantly white institutions, a historically Black university, a Hispanic-serving institution, and land-grant institutions located across eight states and Washington, DC, United States. NIC members hold different positions within their institutions and have access to varied levers of change.
    Among the many lessons learned through this community case study, analyzing and addressing failed strategies is as important to a healthy NIC as sharing learning from successful interventions. We initially relied on pre-existing relationships and assumptions about how we would work together, rather than making explicit how the NIC would develop, establish norms, understand common processes, and manage changing relationships. We had varied understandings of the depth of campus differences, sometimes resulting in frustrations about the disparate progress on goals. NIC structures require significant engagement with the group, often more intensive than traditional multi-institution organizational structures, and they require time to develop and ongoing maintenance in order to advance the work. We continue to reevaluate our model for leadership, climate, diversity, conflict resolution, engagement, decision-making, roles, and data, leading to increased investment in the success of all NIC institutions.
    Our NIC has evolved from the traditional NIC model to become the Center for the Integration of Research, Teaching and Learning (CIRTL) AGEP NIC model, with five key characteristics: (1) a well-specified aim, (2) an understanding of systems, including a variety of contexts and different organizations, (3) a culture and practice of shared leadership and inclusivity, (4) the use of data reflecting different institutional contexts, and (5) the ability to accelerate infrastructure and interventions. We conclude with recommendations for those considering developing a NIC to promote diversity, equity, and inclusion efforts.