Title: Humans and Robots in Off-Normal Applications and Emergencies
Unmanned systems are becoming increasingly engaged in disaster response. Human error in these applications can have severe consequences, and emergency managers appear reluctant to adopt robots. This paper presents a taxonomy of normal and off-normal scenarios that, when combined with a model of impacts on cognitive and attentional resources, specifies sources of human error in field robotics. In an emergency, a human is under time and consequence pressure, regardless of whether the mission is routine or whether the event requires a change in the robot, the mission, the robot’s work envelope, the interaction of the humans engaged with the robot, or their work envelope. For example, at Hurricane Michael, unmanned aerial systems were used for standard visual survey missions with minor human errors, but the same systems were used at the Kilauea volcanic eruption for novel missions with more notable human errors. An examination of two case studies suggests the physiological and psychological effects of an emergency may be the primary source of human error.
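The off-normal taxonomy can be read as a small set of dimensions along which an emergency event may force a change. The Python sketch below is our own illustration of that framing, not code from the paper; the dimension and scenario names are paraphrased from the abstract.

    from dataclasses import dataclass, field

    # Dimensions along which an emergency event may force a change, paraphrased
    # from the abstract: the robot, the mission, the robot's work envelope, the
    # human-robot interaction, and the humans' work envelope.
    OFF_NORMAL_DIMENSIONS = (
        "robot",
        "mission",
        "robot_work_envelope",
        "human_robot_interaction",
        "human_work_envelope",
    )

    @dataclass
    class Scenario:
        """A field-robotics scenario carried out under time and consequence pressure."""
        name: str
        changes: set = field(default_factory=set)  # subset of OFF_NORMAL_DIMENSIONS

        def is_off_normal(self) -> bool:
            # Off-normal if the event forces a change along at least one dimension;
            # otherwise the mission is routine.
            return bool(self.changes)

    # Example: a standard visual-survey flight versus a novel mission profile.
    survey = Scenario("hurricane_visual_survey")
    novel = Scenario("volcanic_eruption_mapping", {"mission", "robot_work_envelope"})
    print(survey.is_off_normal(), novel.is_off_normal())  # False True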
Award ID(s):
1840873
PAR ID:
10107643
Author(s) / Creator(s):
Date Published:
Journal Name:
Advances in Human Factors in Robots and Unmanned Systems. AHFE 2019
Volume:
962
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The use of semi-autonomous Unmanned Aerial Vehicles (UAVs or drones) to support emergency response scenarios, such as fire surveillance and search-and-rescue, has the potential for huge societal benefits. Onboard sensors and artificial intelligence (AI) allow these UAVs to operate autonomously in the environment. However, human intelligence and domain expertise are crucial in planning and guiding UAVs to accomplish the mission. Therefore, humans and multiple UAVs need to collaborate as a team to conduct a time-critical mission successfully. We propose a meta-model to describe interactions among the human operators and the autonomous swarm of UAVs. The meta-model also provides a language to describe the roles of UAVs and humans and the autonomous decisions. We complement the meta-model with a template of requirements elicitation questions to derive models for specific missions. We also identify common scenarios where humans should collaborate with UAVs to augment the autonomy of the UAVs. We introduce the meta-model and the requirements elicitation process with examples drawn from a search-and-rescue mission in which multiple UAVs collaborate with humans to respond to the emergency. We then apply it to a second scenario in which UAVs support first responders in fighting a structural fire. Our results show that the meta-model and the template of questions support modeling the human-on-the-loop interactions for these complex missions, suggesting that they are a useful tool for modeling multi-UAV missions. (An illustrative sketch of such a meta-model appears after this list.)
  2. In emergency response scenarios, autonomous small Unmanned Aerial Systems (sUAS) must be configured and deployed quickly and safely to perform mission-specific tasks. In this paper, we present a Software Product Line for rapidly configuring and deploying a multi-role, multi-sUAS mission whilst guaranteeing a set of safety properties related to the sequencing of tasks within the mission. Individual sUAS behavior is governed by an onboard state machine, combined with coordination handlers which are configured dynamically within seconds of launch and ultimately determine the sUAS' behaviors, transition decisions, and interactions with other sUAS, as well as with human operators. The just-in-time manner in which missions are configured precludes robust upfront testing of all conceivable combinations of features -- both within individual sUAS and across cohorts of collaborating ones. To ensure the absence of common types of configuration failures and to promote safe deployments, we check vital properties of the dynamically generated sUAS specifications and coordination handlers before sUAS are assigned their missions. We evaluate our approach in two ways. First, we perform validation tests to show that the end-to-end configuration process results in correctly executed missions, and second, we apply fault-based mutation testing to show that our safety checks successfully detect incorrect task sequences. (A toy illustration of such a task-sequence safety check appears after this list.)
  3. Unmanned Aerial Vehicles (UAVs) are increasingly used by emergency responders to support search-and-rescue operations, medical supplies delivery, fire surveillance, and many other scenarios. At the same time, researchers are investigating usage scenarios in which UAVs are imbued with a greater level of autonomy to provide automated search, surveillance, and delivery capabilities that far exceed current adoption practices. To address this emergent opportunity, we are developing a configurable, multi-user, multi-UAV system for supporting the use of semi-autonomous UAVs in diverse emergency response missions. We present a requirements-driven approach for creating a software product line (SPL) of highly configurable scenarios based on different missions. We focus on the process of eliciting and modeling a family of related use cases, constructing individual feature models and activity diagrams for each scenario, and then merging them into an SPL. We show how the SPL will be implemented by leveraging and augmenting existing features in our DroneResponse system. We further present a configuration tool and demonstrate its ability to generate mission-specific configurations for 20 different use case scenarios. (A toy feature-model validity check in this spirit appears after this list.)
  4. Human-robot interaction (HRI) studies have found that people overtrust robots in domestic settings, even when the robot exhibits faulty behavior. Cognitive dissonance and selective attention explain these results. To test these theories, a novel HRI study was performed in a university library where participants were recruited to follow a package delivery robot. Participants first faced a dilemma over whether to deliver a package to a private common room that might be off-limits. They then faced another dilemma when the robot stopped in front of an Emergency Exit door and they had to decide whether or not to trust the robot and open it. Results showed that individuals did not overtrust the robot and open the Emergency Exit door. Interestingly, most individuals demurred from entering the private common room when packages were not labeled, whereas groups of friends were more likely to enter the room. Selective attention was then demonstrated by stopping participants in front of a similar Emergency Exit door and assessing whether they noticed it. In one condition, only half of the participants noticed it, and when the robot became more engaging, no one noticed it. Additionally, a malfunctioning robot condition showed what kind of negative outcome was required to reduce trust.
  5. Computer vision approaches are widely used by autonomous robotic systems to sense the world around them and to guide their decision making as they perform diverse tasks such as collision avoidance, search and rescue, and object manipulation. High accuracy is critical, particularly for Human-on-the-loop (HoTL) systems where decisions are made autonomously by the system, and humans play only a supervisory role. Failures of the vision model can lead to erroneous decisions with potentially life-or-death consequences. In this paper, we propose a solution based upon adaptive autonomy levels, whereby the system detects loss of reliability of these models and responds by temporarily lowering its own autonomy levels and increasing engagement of the human in the decision-making process. Our solution is applicable to vision-based tasks in which humans have time to react and provide guidance. When implemented, our approach would estimate the reliability of the vision task by considering uncertainty in its model, and by performing covariate analysis to determine when the current operating environment is ill-matched to the model's training data. We provide examples from DroneResponse, in which small Unmanned Aerial Systems are deployed for emergency response missions, and show how the vision model's reliability would be used in addition to confidence scores to drive and specify the behavior and adaptation of the system's autonomy. This workshop paper outlines our proposed approach and describes open challenges at the intersection of computer vision and software engineering for the safe and reliable deployment of vision models in the decision making of autonomous systems. (A simplified sketch of such reliability-driven autonomy adaptation appears after this list.)
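To make the first entry's meta-model idea concrete, here is a rough Python sketch of how roles, actors, and autonomous decisions might be represented; the class and field names are our own assumptions, not the authors' notation.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical rendering of a human/multi-UAV interaction meta-model:
    # actors play roles, and decisions are either made autonomously by a UAV
    # or escalated to a human operator for confirmation.

    @dataclass
    class Role:
        name: str                        # e.g. "mission_commander", "victim_search"

    @dataclass
    class Actor:
        identifier: str
        kind: str                        # "human" or "uav"
        roles: List[Role] = field(default_factory=list)

    @dataclass
    class Decision:
        description: str
        made_by: Actor
        requires_human_confirmation: bool = False

    @dataclass
    class Mission:
        name: str
        actors: List[Actor] = field(default_factory=list)
        decisions: List[Decision] = field(default_factory=list)

    # Example: a UAV tracks a detected victim autonomously, but handing off
    # to rescuers is confirmed by the human operator.
    operator = Actor("op-1", "human", [Role("mission_commander")])
    uav = Actor("uav-3", "uav", [Role("victim_search")])
    mission = Mission("search_and_rescue", [operator, uav])
    mission.decisions.append(Decision("track detected victim", uav))
    mission.decisions.append(Decision("hand off to rescuers", uav, requires_human_confirmation=True))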
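The second entry's pre-flight check on task sequencing could look roughly like the following; the state machine, task names, and transition table are invented for illustration and are not the paper's implementation.

    # Check that a dynamically configured task sequence only uses transitions
    # allowed by a (hypothetical) onboard state machine before the mission is
    # assigned to the sUAS.
    ALLOWED_TRANSITIONS = {
        "takeoff": {"search"},
        "search": {"track", "return_home"},
        "track": {"deliver_marker", "return_home"},
        "deliver_marker": {"return_home"},
        "return_home": {"land"},
        "land": set(),
    }

    def sequence_is_safe(tasks):
        """Return True if every consecutive pair of tasks is a legal transition."""
        return all(nxt in ALLOWED_TRANSITIONS.get(cur, set())
                   for cur, nxt in zip(tasks, tasks[1:]))

    print(sequence_is_safe(["takeoff", "search", "track", "return_home", "land"]))  # True
    print(sequence_is_safe(["takeoff", "track", "land"]))                            # False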
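For the third entry, a mission-specific configuration derived from a feature model can be validated against mandatory, requires, and excludes constraints; the feature names and constraints below are invented for illustration and are not taken from DroneResponse.

    # Toy feature-model validity check for a product line of mission configurations.
    MANDATORY = {"flight_control", "geofencing"}
    REQUIRES = {
        "thermal_search": {"thermal_camera"},
        "victim_tracking": {"onboard_vision"},
    }
    EXCLUDES = {("low_altitude_survey", "high_altitude_mapping")}

    def valid_configuration(selected):
        """Return True if a selected feature set satisfies the toy constraints."""
        if not MANDATORY <= selected:
            return False
        if any(f in selected and not deps <= selected for f, deps in REQUIRES.items()):
            return False
        if any(a in selected and b in selected for a, b in EXCLUDES):
            return False
        return True

    river_search = {"flight_control", "geofencing", "thermal_search",
                    "thermal_camera", "low_altitude_survey"}
    print(valid_configuration(river_search))  # True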
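Finally, the fifth entry's reliability-driven adaptation might be sketched as follows; the reliability formula, thresholds, and level names are assumptions for illustration, not the proposed system's actual design.

    # Combine a vision model's confidence score with a covariate-shift estimate
    # to pick an autonomy level; low reliability hands more of the decision to
    # the human.

    def reliability(confidence: float, covariate_shift: float) -> float:
        """Crude reliability estimate in [0, 1].

        confidence      -- the model's confidence for the current prediction
        covariate_shift -- 0.0 when the scene resembles the training data,
                           1.0 when it is far outside it
        """
        return max(0.0, min(1.0, confidence * (1.0 - covariate_shift)))

    def autonomy_level(confidence: float, covariate_shift: float) -> str:
        r = reliability(confidence, covariate_shift)
        if r >= 0.8:
            return "autonomous"          # system acts, human supervises (HoTL)
        if r >= 0.5:
            return "confirm_with_human"  # system proposes, human approves
        return "human_guided"            # human directs, system assists

    print(autonomy_level(0.95, 0.05))  # autonomous
    print(autonomy_level(0.90, 0.60))  # human_guided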