

Title: Human-Centered Intelligent Training for Emergency Responders
Emergency response (ER) workers perform extremely demanding physical and cognitive tasks that can result in serious injuries and loss of life. Human augmentation technologies have the potential to enhance physical and cognitive work capacities, thereby dramatically transforming the landscape of ER work: reducing injury risk, improving response outcomes, and helping attract and retain skilled ER workers. This opportunity has been significantly hindered by the lack of high-quality training for ER workers that effectively integrates innovative and intelligent augmentation solutions. Hence, new ER learning environments are needed that are adaptive, affordable, accessible, and continually available for reskilling the ER workforce as technological capabilities continue to improve. This article presents research considerations in the design and integration of use-inspired exoskeletons and augmented reality technologies in ER processes, and identifies the unique cognitive and motor learning needs of each of these technologies in context-independent and ER-relevant scenarios. We propose a human-centered, artificial intelligence (AI) enabled training framework for these technologies in ER. Finally, we discuss how these human-centered training requirements for nascent technologies are integrated into an intelligent tutoring system that delivers training across tiered access levels, spanning virtual, mixed, and physical reality environments.
Award ID(s):
2033592
NSF-PAR ID:
10342005
Journal Name:
AI Magazine
Volume:
43
Issue:
1
ISSN:
0738-4602
Page Range / eLocation ID:
83 to 92
Sponsoring Org:
National Science Foundation
More Like this
  1. Today’s classrooms are remarkably different from those of yesteryear. In place of individual students responding to the teacher from neat rows of desks, one more typically finds students working in groups on projects, with a teacher circulating among groups. AI applications in learning have been slow to catch up, with most available technologies focusing on personalizing or adapting instruction to learners as isolated individuals. Meanwhile, an established science of Computer Supported Collaborative Learning has come to prominence, with clear implications for how collaborative learning could best be supported. In this contribution, I will consider how intelligence augmentation could evolve to support collaborative learning, as well as three signature challenges of this work that could drive AI forward. In conceptualizing collaborative learning, Kirschner and Erkens (2013) provide a useful 3x3 framework in which there are three aspects of learning (cognitive, social, and motivational), three levels (community, group/team, and individual), and three kinds of pedagogical supports (discourse-oriented, representation-oriented, and process-oriented). As they engage in this multiply complex space, teachers and learners are both learning to collaborate and collaborating to learn. Further, questions of equity arise as we consider who is able to participate and in which ways. Overall, this analysis helps us see the complexity of today’s classrooms and, within this complexity, the opportunities for augmentation or “assistance” to become important and even essential. An overarching design concept has emerged in the past 5 years in response to this complexity: the idea of intelligent augmentation for “orchestrating” classrooms (Dillenbourg et al., 2013). As a metaphor, orchestration can suggest the need for a coordinated performance among many agents who are each playing different roles or voicing different ideas.
Practically speaking, orchestration suggests that “intelligence augmentation” could help many smaller things go well, and in doing so, could enable the overall intention of the learning experience to succeed. Those smaller things could include helping the teacher stay aware of students or groups who need attention, supporting formation of groups or transitions from one activity to the next, facilitating productive social interactions in groups, suggesting learning resources that would support teamwork, and more. A recent panel of AI experts identified orchestration as an overarching concept that is an important focus for near-term research and development in intelligence augmentation (Roschelle, Lester & Fusco, 2020). Tackling this challenging area of collaborative learning could also be beneficial for advancing AI technologies overall. Building AI agents that better understand the social context of human activities has broad importance, as does designing AI agents that can appropriately interact within teamwork. Collaborative learning has a trajectory over time, and designing AI systems that support teams not just with a short-term recommendation or suggestion but through long-term developmental processes is important. Further, classrooms engaged in collaborative learning could become very interesting hybrid environments, with multiple human and AI agents present at once and addressing the dual outcome goals of learning to collaborate and collaborating to learn; addressing a hybrid environment like this could lead to developing AI systems that more robustly help many types of realistic human activity. In conclusion, the opportunity to make a societal impact by attending to collaborative learning, the availability of a growing science of computer-supported collaborative learning, and the need to push new boundaries in AI together suggest collaborative learning as a challenge worth tackling in coming years.
  2. Training and on-site assistance are critical to helping workers master required skills, improving worker productivity, and guaranteeing product quality. Traditional training methods lack the worker-centered considerations that are particularly needed when workers face ever-changing demands. In this study, we propose a worker-centered training and assistance system for intelligent manufacturing, featuring self-awareness and active guidance. Multi-modal sensing techniques are applied to perceive each individual worker, and a deep learning approach is developed to understand the worker’s behavior and intention. Moreover, an object detection algorithm is implemented to identify the parts and tools the worker is interacting with. The worker’s current state is then inferred and used to quantify and assess worker performance, from which the worker’s potential guidance demands are analyzed. Furthermore, on-site guidance with multi-modal augmented reality is provided actively and continuously during the operational process. Two case studies demonstrate the feasibility and potential of the proposed approach and system for frontline workers in the manufacturing industry.
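The sense–infer–assess–guide loop described in this abstract can be sketched in a few lines. This is a minimal illustration only, not the authors’ implementation: the names (`WorkerState`, `guidance_needed`) and the 0.6 proficiency threshold are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class WorkerState:
    """Inferred state of one worker, combining the outputs of the
    behavior/intention model and the object detector."""
    action: str         # e.g. "fastening", produced by the behavior model
    tool: str           # e.g. "torque wrench", produced by the object detector
    proficiency: float  # assessed performance score in [0, 1]

def guidance_needed(state: WorkerState, threshold: float = 0.6) -> bool:
    # Trigger active AR guidance when assessed proficiency drops
    # below a (hypothetical) threshold.
    return state.proficiency < threshold

# Hypothetical usage: a struggling worker triggers guidance.
state = WorkerState(action="fastening", tool="torque wrench", proficiency=0.45)
print(guidance_needed(state))  # True
```

In a real system, `WorkerState` would be populated per sensor frame by the multi-modal perception models, and the guidance decision would drive the AR rendering loop rather than a print statement.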
  3. The building industry has a major impact on the US economy, accounting for $1 trillion in annual spending, 40% of the nation’s primary energy use, and 9 million jobs. Despite its massive impact, the industry has been criticized for poor productivity compared with other industries and for billions of dollars in annual waste caused by poor interoperability. Furthermore, the industry has been approaching a “labor cliff”: not enough new individuals are entering the industry to offset the vacancies left by an aging, retiring workforce. To remain effective, this critical industry will need to do better with less. To prepare civil engineering students for careers in this industry, educators have aimed to replicate the processes associated with real-world projects through design/build educational activities like the Department of Energy’s (DOE) Solar Decathlon, Sacramento Municipal Utility District’s (SMUD) Tiny House Competition, and DOE’s Challenge Home Competition. These learning experiences help situate civil engineering concepts in an authentic learning environment. Unfortunately, not all universities have the financial resources necessary to fund this type of hands-on project. Technology has the potential to mitigate some of these inequities. Thus, the multi-faceted objective of this project is to: develop mixed reality (MR) technology that sufficiently replicates physical design and construction learning environments, enabling access for students at institutions without sufficient resources; and assess the impact of an MR-facilitated cyberlearning environment on promoting the cognitive-, affective-, and skill-based learning that occurs during traditional (in-person) design and construction activities.
This research will explore a fundamental question: can MR technology enable educators to simulate physical design and construction activities at low cost, so that students at all institutions can gain exposure to these types of hands-on learning environments? To address this question, we employ an iterative development approach guided by Human Centered Design principles to support learning according to the Carnegie Foundation’s Three Apprenticeships Model (i.e., learning related to “Head”, “Hand”, and “Heart”). To achieve these aims, the research team uses MR technology (i.e., a Microsoft HoloLens®) to understand the extent to which this mode of education allows students to demonstrate knowledge similar to that gained through physical design and construction learning environments. This paper presents highlights from the first year of this project.
  4. The applicability of computational models to the biological world is an active topic of debate. We argue that a useful path forward results from abandoning hard boundaries between categories and adopting an observer-dependent, pragmatic view. Such a view dissolves the contingent dichotomies driven by human cognitive biases (e.g., a tendency to oversimplify) and prior technological limitations in favor of a more continuous view, necessitated by the study of evolution, developmental biology, and intelligent machines. Form and function are tightly entwined in nature, and in some cases, in robotics as well. Thus, efforts to re-shape living systems for biomedical or bioengineering purposes require prediction and control of their function at multiple scales. This is challenging for many reasons, one of which is that living systems perform multiple functions in the same place at the same time. We refer to this as “polycomputing”—the ability of the same substrate to simultaneously compute different things, and make those computational results available to different observers. This ability is an important way in which living things are a kind of computer, but not the familiar, linear, deterministic kind; rather, living things are computers in the broad sense of their computational materials, as reported in the rapidly growing physical computing literature. We argue that an observer-centered framework for the computations performed by evolved and designed systems will improve the understanding of mesoscale events, as it has already done at quantum and relativistic scales. To develop our understanding of how life performs polycomputing, and how it can be convinced to alter one or more of those functions, we can first create technologies that polycompute and learn how to alter their functions. 
Here, we review examples of biological and technological polycomputing, and develop the idea that the overloading of different functions on the same hardware is an important design principle that helps to understand and build both evolved and designed systems. Learning to hack existing polycomputing substrates, as well as to evolve and design new ones, will have massive impacts on regenerative medicine, robotics, and computer engineering. 
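The observer-dependent flavor of polycomputing can be illustrated, loosely and only as an analogy, by a single block of memory that yields different computational results to different "observers". The snippet below is not from the reviewed work; it simply shows one substrate (four bytes) read simultaneously as an integer by one observer and as a floating-point value by another.

```python
import struct

# One 4-byte substrate: the same physical bit pattern in memory.
substrate = struct.pack("<I", 0x40490FDB)

# Observer 1 interprets the bits as an unsigned 32-bit integer.
as_int = struct.unpack("<I", substrate)[0]    # 1078530011

# Observer 2 interprets the very same bits as an IEEE-754 float.
as_float = struct.unpack("<f", substrate)[0]  # approximately pi (3.1415927)

print(as_int, as_float)
```

The substrate does no extra work to serve both observers; the "computation" each one extracts is fixed by its interpretive frame, which is the observer-centered point the abstract makes about living systems at far greater richness.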
  5. While the building industry has a major impact on the US economy, it is often criticized for poor productivity and for waste resulting from poor interoperability. Additionally, the impending labor shortage requires that this industry become one that can do more with less in order to remain effective. To prepare civil engineering students for careers in this industry and to design/build infrastructure that is responsive to changing societal needs, educators have aimed to replicate the processes associated with real-world projects through design/build educational activities (like the Department of Energy’s (DOE) Solar Decathlon, Sacramento Municipal Utility District’s (SMUD) Tiny House Competition, and DOE’s Challenge Home Competition) that help students situate civil engineering concepts in an authentic learning environment. Unfortunately, not all universities have the financial resources necessary to fund these types of hands-on projects. Thankfully, technology has the potential to mitigate some of these inequities. This paper presents an update on a three-year NSF-funded project that aims to: develop mixed reality (MR) technology that sufficiently replicates physical design and construction learning environments, enabling access for students at institutions without sufficient resources; and assess the impact of an MR-facilitated cyberlearning environment on the cognitive-, affective-, and skill-based learning that occurs during traditional (in-person) design and construction activities. Human Centered Design principles and the tenets of the Carnegie Foundation’s Three Apprenticeships Model (i.e., learning related to “Head”, “Hand”, and “Heart”) inform the design, development, and assessments in this project. Highlights from the first year and future plans are discussed.