-
Schmerl, Bradley R.; Maggio, Martina; Camara, Javier (Eds.)
The MAPE-K feedback loop has been established as the primary reference model for self-adaptive and autonomous systems in domains such as autonomous driving, robotics, and Cyber-Physical Systems. At the same time, the Human-Machine Teaming (HMT) paradigm is designed to promote partnerships between humans and autonomous machines. It goes far beyond the degree of collaboration expected in human-on-the-loop and human-in-the-loop systems and emphasizes interaction, partnership, and teamwork between humans and machines. However, while MAPE-K enables fully autonomous behavior, it does not explicitly address the interactions between humans and machines as intended by HMT. In this paper, we present the MAPE-K-HMT framework, which augments the traditional MAPE-K loop with support for HMT. We identify critical human-machine teaming factors and describe the infrastructure needed across the various phases of the MAPE-K loop to effectively support HMT. This includes runtime models that are constructed and populated dynamically across the monitoring, analysis, planning, and execution phases to support human-machine partnerships. We illustrate MAPE-K-HMT using examples from an autonomous multi-UAV emergency response system, and we present guidelines for integrating HMT into MAPE-K.
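A minimal Python sketch of how HMT hooks might be woven into the MAPE-K phases is shown below; the class, the shared Knowledge structure, and the low-battery example are illustrative assumptions, not the paper's actual MAPE-K-HMT runtime models or API.

```python
# Hypothetical sketch of a MAPE-K loop extended with human-machine teaming (HMT)
# hooks. Names are illustrative only; the shared Knowledge dict stands in for the
# runtime models that the MAPE-K-HMT framework populates across phases.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Knowledge:
    """Shared knowledge base, including state exposed to the human teammate."""
    sensor_data: dict = field(default_factory=dict)
    symptoms: List[str] = field(default_factory=list)
    plan: List[str] = field(default_factory=list)
    human_feedback: List[str] = field(default_factory=list)


class MapeKHmtLoop:
    def __init__(self, knowledge: Knowledge, notify_human: Callable[[str], None]):
        self.k = knowledge
        self.notify_human = notify_human  # HMT hook: push explanations/status to operator

    def monitor(self, raw_readings: dict) -> None:
        self.k.sensor_data.update(raw_readings)

    def analyze(self) -> None:
        # Detect symptoms and share them for shared situational awareness.
        if self.k.sensor_data.get("battery", 100) < 20:
            self.k.symptoms.append("low_battery")
            self.notify_human("Analysis: low battery detected")

    def plan(self) -> None:
        # Propose a plan; the human may veto or adjust it before execution.
        if "low_battery" in self.k.symptoms:
            self.k.plan = ["return_to_home"]
            self.notify_human("Plan proposed: return_to_home (awaiting confirmation)")

    def execute(self) -> None:
        if "abort" in self.k.human_feedback:
            self.notify_human("Execution cancelled by human teammate")
            return
        for action in self.k.plan:
            self.notify_human(f"Executing: {action}")


if __name__ == "__main__":
    loop = MapeKHmtLoop(Knowledge(), notify_human=print)
    loop.monitor({"battery": 15})
    loop.analyze()
    loop.plan()
    loop.execute()
```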
-
Rapid advancements in Artificial Intelligence have shifted the focus from traditional human-directed robots to fully autonomous ones that do not require explicit human control. These are commonly referred to as Human-on-the-Loop (HotL) systems. Transparency in HotL systems requires clear explanations of autonomous behavior so that humans are aware of what is happening in the environment and can understand why robots behave in a certain way. However, in complex multi-robot environments, especially those in which the robots are autonomous and mobile, humans may struggle to maintain situational awareness. Presenting humans with rich explanations of autonomous behavior tends to overload them with information and negatively affects their understanding of the situation. Therefore, explaining the autonomous behavior of multiple robots creates a design tension that demands careful investigation. This paper examines the User Interface (UI) design trade-offs associated with providing timely and detailed explanations of autonomous behavior for swarms of small Unmanned Aerial Systems (sUAS), or drones. We analyze the impact of UI design choices on human awareness of the situation. We conducted multiple user studies with both inexperienced and expert sUAS operators, and we present our design solution and initial guidelines for designing the HotL multi-sUAS interface.
-
With the rise of new AI technologies, autonomous systems are moving towards a paradigm in which increasing levels of responsibility are shifted from the human to the system, creating a transition from human-in-the-loop systems to human-on-the-loop (HoTL) systems. This has a significant impact on the safety analysis of such systems, as new types of errors occurring at the boundaries of human-machine interaction need to be taken into consideration. Traditional safety analysis typically focuses on system-level hazards, with little focus on user-related or user-induced hazards that can cause critical system failures. To address this issue, we construct domain-level safety analysis assets for sUAS (small unmanned aerial systems) applications and describe the process we followed to explicitly and systematically identify Human Interaction Points (HiPs), Hazard Factors, and Mitigations from system hazards. We evaluate our approach, first, by investigating the extent to which recent sUAS incidents are covered by our hazard trees, and second, by performing a study with six domain experts who used our hazard trees to identify and document hazards for sUAS usage scenarios. Our study showed that our hazard trees provided effective coverage for a wide variety of sUAS application scenarios and were useful for stimulating safety thinking and helping users…
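The following sketch shows one plausible way to represent such hazard trees in code, assuming a simple nested-node structure with HiP markers and mitigation lists; the node names and the example hazard are hypothetical and not taken from the paper's safety analysis assets.

```python
# Minimal sketch of a hazard tree with Human Interaction Points (HiPs),
# hazard factors, and mitigations, using a simple nested-node representation.

from dataclasses import dataclass, field
from typing import List


@dataclass
class HazardNode:
    description: str
    is_hip: bool = False                      # marks a Human Interaction Point
    mitigations: List[str] = field(default_factory=list)
    children: List["HazardNode"] = field(default_factory=list)

    def collect_hips(self) -> List["HazardNode"]:
        """Return all Human Interaction Points in this subtree."""
        found = [self] if self.is_hip else []
        for child in self.children:
            found.extend(child.collect_hips())
        return found


# Example (hypothetical) system-level hazard decomposed into contributing factors.
hazard = HazardNode(
    "sUAS flies into restricted airspace",
    children=[
        HazardNode(
            "Operator enters wrong geofence coordinates",
            is_hip=True,
            mitigations=["UI validation of coordinates", "pre-flight checklist"],
        ),
        HazardNode(
            "Stale airspace map onboard",
            mitigations=["automatic map refresh before arming"],
        ),
    ],
)

for hip in hazard.collect_hips():
    print(hip.description, "->", hip.mitigations)
```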
-
Computer vision approaches are widely used by autonomous robotic systems to sense the world around them and to guide their decision making as they perform diverse tasks such as collision avoidance, search and rescue, and object manipulation. High accuracy is critical, particularly for Human-on-the-Loop (HoTL) systems in which decisions are made autonomously by the system and humans play only a supervisory role. Failures of the vision model can lead to erroneous decisions with potentially life-or-death consequences. In this paper, we propose a solution based upon adaptive autonomy levels, whereby the system detects loss of reliability of these models and responds by temporarily lowering its own autonomy level and increasing engagement of the human in the decision-making process. Our solution is applicable to vision-based tasks in which humans have time to react and provide guidance. When implemented, our approach would estimate the reliability of the vision task by considering uncertainty in its model and by performing covariate analysis to determine when the current operating environment is ill-matched to the model's training data. We provide examples from DroneResponse, in which small Unmanned Aerial Systems are deployed for emergency response missions, and show how the vision model's reliability would be used in…
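A hedged sketch of the adaptive-autonomy idea follows: combine a model-uncertainty score with a covariate-shift score into a reliability estimate and map it to an autonomy level. The scoring functions, weights, and thresholds below are placeholder assumptions, not the approach's actual formulation.

```python
# Sketch: lower the autonomy level and engage the human when the vision model's
# estimated reliability drops. The uncertainty and covariate-shift scores are
# placeholders; a real system might use, e.g., Monte Carlo dropout and a
# distance-to-training-data metric.

def estimate_reliability(prediction_entropy: float, covariate_shift_score: float) -> float:
    """Combine model uncertainty and train/operating mismatch into one score in [0, 1]."""
    uncertainty_penalty = min(prediction_entropy, 1.0)
    shift_penalty = min(covariate_shift_score, 1.0)
    return max(0.0, 1.0 - 0.5 * uncertainty_penalty - 0.5 * shift_penalty)


def select_autonomy_level(reliability: float) -> str:
    # Thresholds are illustrative; they would be tuned per task and mission.
    if reliability > 0.8:
        return "FULL_AUTONOMY"        # system decides, human only supervises
    if reliability > 0.5:
        return "HUMAN_CONFIRMATION"   # system proposes, human confirms
    return "HUMAN_CONTROL"            # human takes over the vision-based task


if __name__ == "__main__":
    r = estimate_reliability(prediction_entropy=0.7, covariate_shift_score=0.4)
    print(f"reliability={r:.2f}, autonomy level={select_autonomy_level(r)}")
```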
-
Runtime monitoring is essential for ensuring the safe operation and enabling self-adaptive behavior of Cyber-Physical Systems (CPS). It requires the creation of system monitors, instrumentation for data collection, and the definition of constraints. All of these aspects need to evolve to accommodate changes in the system. However, most existing approaches lack support for the automated generation and setup of monitors and constraints for diverse technologies, and they do not provide adequate support for evolving the monitoring infrastructure. Without this support, constraints and monitors can become stale and less effective in long-running, rapidly changing CPS. In this "new and emerging results" paper we propose a novel framework for model-integrated runtime monitoring. We combine model-driven techniques and runtime monitoring to automatically generate large parts of the monitoring framework and to reduce the maintenance effort necessary when parts of the monitored system change. We build a prototype and evaluate our approach against a system for controlling the flights of unmanned aerial vehicles.
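The sketch below illustrates the general idea of generating executable monitors from declarative constraints held in a model, so that regenerating them when the model changes avoids stale, hand-written checks; the constraint format and monitor code are assumptions for illustration and do not reproduce the framework or its prototype.

```python
# Illustrative sketch of generating runtime monitors from a system model.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ConstraintSpec:
    name: str
    signal: str          # which monitored value the constraint refers to
    max_value: float     # simple upper-bound constraint for illustration


def generate_monitor(spec: ConstraintSpec) -> Callable[[Dict[str, float]], bool]:
    """Turn a declarative constraint from the model into an executable check."""
    def monitor(sample: Dict[str, float]) -> bool:
        value = sample.get(spec.signal)
        ok = value is not None and value <= spec.max_value
        if not ok:
            print(f"[VIOLATION] {spec.name}: {spec.signal}={value} > {spec.max_value}")
        return ok
    return monitor


# Constraints derived from a (hypothetical) UAV system model; regenerating the
# monitors whenever the model evolves keeps them in sync with the system.
model_constraints: List[ConstraintSpec] = [
    ConstraintSpec("altitude_limit", "altitude_m", 120.0),
    ConstraintSpec("speed_limit", "speed_mps", 20.0),
]
monitors = [generate_monitor(c) for c in model_constraints]

telemetry = {"altitude_m": 130.0, "speed_mps": 12.0}
results = [m(telemetry) for m in monitors]
print("all constraints satisfied:", all(results))
```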
-
Unmanned Aerial Vehicles (UAVs) are increasingly used by emergency responders to support search-and-rescue operations, medical supply delivery, fire surveillance, and many other scenarios. At the same time, researchers are investigating usage scenarios in which UAVs are imbued with a greater level of autonomy to provide automated search, surveillance, and delivery capabilities that far exceed current adoption practices. To address this emergent opportunity, we are developing a configurable, multi-user, multi-UAV system for supporting the use of semi-autonomous UAVs in diverse emergency response missions. We present a requirements-driven approach for creating a software product line (SPL) of highly configurable scenarios based on different missions. We focus on the process of eliciting and modeling a family of related use cases, constructing individual feature models and activity diagrams for each scenario, and then merging them into an SPL. We show how the SPL will be implemented by leveraging and augmenting existing features in our DroneResponse system. We further present a configuration tool and demonstrate its ability to generate mission-specific configurations for 20 different use-case scenarios.
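A small sketch of how a feature-model-driven configuration tool might derive a mission-specific configuration appears below; the feature names, the "requires" relation, and the sample mission are invented for illustration and are not DroneResponse's actual feature model.

```python
# Toy feature model and configuration derivation for an SPL of UAV missions.
# Each feature maps to the features it requires.

from typing import Dict, List, Set

FEATURE_MODEL: Dict[str, Set[str]] = {
    "mission_core": set(),
    "search_pattern": {"mission_core"},
    "victim_detection": {"search_pattern"},
    "thermal_camera": {"victim_detection"},
    "delivery_drop": {"mission_core"},
}


def configure(selected: Set[str]) -> List[str]:
    """Close the selection under 'requires' constraints and return the configuration."""
    config = set(selected)
    changed = True
    while changed:
        changed = False
        for feature in list(config):
            missing = FEATURE_MODEL.get(feature, set()) - config
            if missing:
                config |= missing
                changed = True
    return sorted(config)


# A river search-and-rescue scenario selects only the thermal camera feature;
# the required parent features are pulled in automatically.
print(configure({"thermal_camera"}))
```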
-
The use of semi-autonomous Unmanned Aerial Vehicles (UAVs, or drones) to support emergency response scenarios, such as fire surveillance and search-and-rescue, has the potential for huge societal benefits. Onboard sensors and artificial intelligence (AI) allow these UAVs to operate autonomously in the environment. However, human intelligence and domain expertise are crucial in planning and guiding UAVs to accomplish the mission. Therefore, humans and multiple UAVs need to collaborate as a team to conduct a time-critical mission successfully. We propose a meta-model to describe interactions among the human operators and the autonomous swarm of UAVs. The meta-model also provides a language to describe the roles of UAVs and humans, as well as their autonomous decisions. We complement the meta-model with a template of requirements-elicitation questions to derive models for specific missions. We also identify common scenarios in which humans should collaborate with UAVs to augment the autonomy of the UAVs. We introduce the meta-model and the requirements elicitation process with examples drawn from a search-and-rescue mission in which multiple UAVs collaborate with humans to respond to the emergency. We then apply it to a second scenario in which UAVs support first responders in fighting a structural fire. Our results show that the meta-model…
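The sketch below shows one way such a meta-model could be expressed in code, with roles, autonomous decisions, and approval requirements as illustrative concepts; the entity names and the example mission instance are assumptions, not the paper's meta-model.

```python
# Small sketch of a human-UAV teaming meta-model expressed as dataclasses.
# Concepts (roles, autonomous decisions, human approval) are illustrative guesses.

from dataclasses import dataclass, field
from typing import List


@dataclass
class AutonomousDecision:
    description: str           # e.g., "switch from search to track mode"
    requires_human_approval: bool = False


@dataclass
class Role:
    name: str                  # e.g., "mission commander", "tracker UAV"
    held_by: str               # "human" or "uav"
    decisions: List[AutonomousDecision] = field(default_factory=list)


@dataclass
class Mission:
    name: str
    roles: List[Role] = field(default_factory=list)


# Instantiating the meta-model for a hypothetical search-and-rescue mission.
sar = Mission(
    "river search-and-rescue",
    roles=[
        Role("mission commander", "human"),
        Role("tracker UAV", "uav", decisions=[
            AutonomousDecision("begin tracking a detected victim"),
            AutonomousDecision("descend below 20 m for closer imagery",
                               requires_human_approval=True),
        ]),
    ],
)

for role in sar.roles:
    for decision in role.decisions:
        flag = "(needs human approval)" if decision.requires_human_approval else ""
        print(role.name, "->", decision.description, flag)
```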
-
The use of semi-autonomous Unmanned Aerial Vehicles (UAVs) to support emergency response scenarios, such as fire surveillance and search and rescue, offers the potential for huge societal benefits. However, designing an effective solution in this complex domain represents a "wicked design" problem, requiring a careful balance between the trade-offs associated with drone autonomy versus human control, mission functionality versus safety, and the diverse needs of different stakeholders. This paper focuses on designing for situational awareness (SA) using a scenario-driven, participatory design process. We developed SA cards describing six common design problems, known as SA demons, and three new demons of importance to our domain. We then used these SA cards to equip domain experts with SA knowledge so that they could more fully engage in the design process. We designed a potentially reusable solution for achieving SA in multi-stakeholder, multi-UAV emergency response applications.