Title: Framework for Analyzing Human Cognition in Operationally-Relevant Human Swarm Interaction
Abstract: For a wide variety of envisioned humanitarian and commercial applications in which a human user commands a swarm of robotic systems, developing human-swarm interaction (HSI) principles and interfaces calls for systematic virtual environments in which to study HSI implementations. Such studies are fundamental to achieving HSI that is operationally efficient and that facilitates trust calibration through the collection, use, and modeling of cognitive information. However, such virtual environments are lacking, especially ones suited to studying HSI across different operationally relevant contexts. Building on our previous work in swarm simulation and computer game-based HSI, this paper develops a comprehensive virtual environment for studying HSI under varying swarm size, swarm compliance, and swarm-to-human feedback. The paper also demonstrates how this simulation environment informs the development of an indoor physical experimentation environment for evaluating the human cognitive model. New approaches are presented for simulating physical assets, calibrated against physical experiments, and for studying the effects this has on human users. Key features of the simulation environment include medium-fidelity simulation of large teams of small aerial and ground vehicles (based on the PyBullet engine), a graphical user interface that receives human commands and provides feedback from swarm assets when they do not comply with those commands, and a lab streaming layer that synchronizes physiological data collection (e.g., brain activity and eye gaze) with swarm state and human commands.
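As an illustrative sketch of how the last two features could fit together (this is not the paper's code; the swarm size, the stream name, and the sphere_small.urdf placeholder asset from pybullet_data are our assumptions), the Python snippet below steps a small headless PyBullet swarm and timestamps its state on a Lab Streaming Layer (LSL) outlet so that a separate recorder can align it with physiological streams such as EEG or eye gaze.

```python
# Minimal sketch: PyBullet swarm state published over Lab Streaming Layer.
# Assumptions (not from the paper): 5 agents, sphere_small.urdf placeholders,
# stream name "SwarmState", and an event-style (irregular-rate) LSL stream.
import pybullet as p
import pybullet_data
from pylsl import StreamInfo, StreamOutlet, local_clock

NUM_AGENTS = 5  # hypothetical swarm size for illustration

p.connect(p.DIRECT)  # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
agents = [p.loadURDF("sphere_small.urdf", basePosition=[i, 0.0, 0.5])
          for i in range(NUM_AGENTS)]

# One LSL channel per (x, y, z) coordinate per agent; nominal_srate=0
# marks an irregular stream whose samples carry their own timestamps.
info = StreamInfo(name="SwarmState", type="Mocap",
                  channel_count=3 * NUM_AGENTS, nominal_srate=0,
                  channel_format="float32", source_id="swarm_sim_01")
outlet = StreamOutlet(info)

for _ in range(240):  # about one second at PyBullet's default 240 Hz
    p.stepSimulation()
    sample = []
    for body in agents:
        pos, _orn = p.getBasePositionAndOrientation(body)
        sample.extend(pos)
    outlet.push_sample(sample, local_clock())  # LSL-clock timestamp

p.disconnect()
```

An EEG amplifier or eye tracker publishing its own LSL stream can then be recorded alongside SwarmState (e.g., with LabRecorder), and the shared LSL clock makes post-hoc alignment of swarm state, human commands, and physiology straightforward.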
Award ID(s):
2048020; 1927462
PAR ID:
10427481
Author(s) / Creator(s):
Publisher / Repository:
American Society of Mechanical Engineers
Date Published:
ISBN:
978-0-7918-8729-5
Format(s):
Medium: X
Location:
Boston, Massachusetts, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper presents a formulation for swarm control and high-level task planning that is dynamically responsive to user commands and adaptable to environmental changes. We design an end-to-end pipeline from a tactile tablet interface for user commands to onboard control of robotic agents based on decentralized ergodic coverage. Our approach demonstrates reliable and dynamic control of a swarm collective through the use of ergodic specifications for planning and executing agent trajectories as well as for responding to user and external inputs. We validate our approach in a virtual reality simulation environment and in real-world experiments at the DARPA OFFSET Urban Swarm Challenge FX3 field tests with a robotic swarm, where user-based control of the swarm and mission-based tasks require a dynamic and flexible response to changing conditions and objectives in real time. (A minimal sketch of the ergodic coverage metric underlying this approach appears after this list.)
  2. Natural language understanding for robotics can require substantial domain- and platform-specific engineering. For example, for mobile robots to pick and place objects in an environment to satisfy human commands, we can specify the language humans use to issue such commands and connect concept words like "red can" to physical object properties. One way to alleviate this engineering for a new domain is to enable robots in human environments to adapt dynamically: continually learning new language constructions and perceptual concepts. In this work, we present an end-to-end pipeline for translating natural language commands to discrete robot actions, and we use clarification dialogs to jointly improve language parsing and concept grounding. We train and evaluate this agent in a virtual setting on Amazon Mechanical Turk, and we transfer the learned agent to a physical robot platform to demonstrate it in the real world.
  3. We present a game benchmark for testing human-swarm control algorithms and interfaces in a real-time, high-cadence scenario. Our benchmark consists of a swarm vs. swarm game in a virtual ROS environment in which the goal of the game is to "capture" all agents from the opposing swarm; the game's high cadence is a result of the capture rules, which cause agent team sizes to fluctuate rapidly. These rules require players to consider both the number of agents currently at their disposal and the behavior of their opponent's swarm when they plan actions. We demonstrate our game benchmark with a default human-swarm control system that enables a player to interact with their swarm through a high-level touchscreen interface. The touchscreen interface transforms player gestures into swarm control commands via a low-level decentralized ergodic control framework. We compare our default human-swarm control system to a flocking-based control system, and discuss traits that are crucial for swarm control algorithms and interfaces operating in real-time, high-cadence scenarios like our game benchmark. Our game benchmark code is available on GitHub; more information can be found at https://sites.google.com/view/swarm-game-benchmark.
  4. Wilde N.; Alonso-Mora J.; Brown D.; Mattson C.; Sycara K. (Eds.)
    In this paper, we introduce an innovative approach to multi-human robot interaction, leveraging the capabilities of omnicopters. These agile aerial vehicles are poised to revolutionize haptic feedback by offering complex sensations with 6 degrees of freedom (6DoF) movements. Unlike traditional systems, our envisioned method enables haptic rendering without the need for tilt, offering a more intuitive and seamless interaction experience. Furthermore, we propose using omnicopter swarms in human-robot interaction; these omnicopters can collaboratively emulate and render intricate objects in real time. This swarm-based rendering not only expands the realm of tangible human-robot interactions but also holds potential in diverse applications, from immersive virtual environments to tactile guidance in physical tasks. Our vision outlines a future where robots and humans interact in more tangible and sophisticated ways, pushing the boundaries of current haptic technology.
  5. During a natural disaster such as a hurricane, earthquake, or fire, robots have the potential to explore vast areas and provide valuable aid in search & rescue efforts. These scenarios are often high-pressure and time-critical, with dynamically changing task goals. One limitation to these large-scale deployments is effective human-robot interaction. Prior work shows that collaboration between one human and one robot benefits from shared control. Here we evaluate the efficacy of shared control for human-swarm teaming in an immersive virtual reality environment. Although there are many human-swarm interaction paradigms, few are evaluated in high-pressure settings representative of their intended end use. We have developed an open-source virtual reality testbed for realistic evaluation of human-swarm teaming performance under pressure. We conduct a user study (n=16) comparing four human-swarm paradigms to a baseline condition with no robotic assistance. Shared control significantly reduces the number of instructions needed to operate the robots. While shared control leads to marginally improved team performance in experienced participants, novices perform best when the robots are fully autonomous. Our experimental results suggest that in immersive, high-pressure settings, the benefits of robotic assistance may depend on how the human and robots interact and on the human operator's expertise.
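The decentralized ergodic control mentioned in items 1 and 3 steers agents so that the fraction of time they spend in each region matches a target coverage distribution. As a minimal, illustrative sketch (not the implementation from either paper; the grid size, number of modes, and unit-square domain are our choices, and basis normalization constants are omitted for brevity), the NumPy snippet below computes the spectral ergodic metric such controllers minimize: a Sobolev-weighted mismatch between the time-averaged cosine-basis coefficients of a trajectory and those of the target distribution.

```python
# Minimal sketch of the spectral ergodic metric on the unit square [0,1]^2.
# Assumptions (ours, not from the cited papers): K=8 cosine modes per axis,
# a 64x64 grid for the target distribution, unnormalized basis functions.
import numpy as np

K = 8   # cosine modes per dimension
N = 64  # grid resolution for the target distribution

def coeffs_from_points(pts, weights=None):
    """Weighted cosine-basis coefficients c_{k1,k2} over the unit square."""
    ks = np.arange(K)
    fx = np.cos(np.pi * np.outer(pts[:, 0], ks))  # shape (num_pts, K)
    fy = np.cos(np.pi * np.outer(pts[:, 1], ks))
    if weights is None:  # uniform weights -> time average over a trajectory
        weights = np.full(len(pts), 1.0 / len(pts))
    return np.einsum("t,ti,tj->ij", weights, fx, fy)  # shape (K, K)

def ergodic_metric(traj, phi_grid):
    """Sobolev-weighted mismatch between trajectory and target coefficients."""
    xs = (np.arange(N) + 0.5) / N
    gx, gy = np.meshgrid(xs, xs, indexing="ij")
    grid_pts = np.column_stack([gx.ravel(), gy.ravel()])
    phi_k = coeffs_from_points(grid_pts, phi_grid.ravel() / phi_grid.sum())
    c_k = coeffs_from_points(traj)
    kx, ky = np.meshgrid(np.arange(K), np.arange(K), indexing="ij")
    lam = (1.0 + kx**2 + ky**2) ** -1.5  # weights Lambda_k, s=(n+1)/2, n=2
    return float(np.sum(lam * (c_k - phi_k) ** 2))

# Toy usage: score a random-walk trajectory against a Gaussian interest map.
rng = np.random.default_rng(0)
traj = np.clip(0.5 + np.cumsum(rng.normal(0, 0.02, (500, 2)), axis=0), 0, 1)
xs = (np.arange(N) + 0.5) / N
gx, gy = np.meshgrid(xs, xs, indexing="ij")
phi = np.exp(-((gx - 0.7) ** 2 + (gy - 0.3) ** 2) / 0.02)
print("ergodic metric:", ergodic_metric(traj, phi))
```

A receding-horizon controller built on this metric repeatedly picks the control input that most rapidly decreases the mismatch; in the touchscreen interface of item 3, player gestures are converted into the target distribution phi, which is how high-level sketches become low-level swarm motion.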