We present a game benchmark for testing human-swarm control algorithms and interfaces in a real-time, high-cadence scenario. Our benchmark consists of a swarm vs. swarm game in a virtual ROS environment in which the goal of the game is to “capture” all agents from the opposing swarm; the game’s high cadence is a result of the capture rules, which cause agent team sizes to fluctuate rapidly. These rules require players to consider both the number of agents currently at their disposal and the behavior of their opponent’s swarm when they plan actions. We demonstrate our game benchmark with a default human-swarm control system that enables a player to interact with their swarm through a high-level touchscreen interface. The touchscreen interface transforms player gestures into swarm control commands via a low-level decentralized ergodic control framework. We compare our default human-swarm control system to a flocking-based control system, and discuss traits that are crucial for swarm control algorithms and interfaces operating in real-time, high-cadence scenarios like our game benchmark. Our game benchmark code is available on GitHub; more information can be found at https://sites.google.com/view/swarm-game-benchmark.
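As an illustration of the gesture-to-command step described above (this is not code from the paper; the function name, grid resolution, and Gaussian-mixture choice are assumptions of this sketch), one way to turn touchscreen strokes into a target spatial distribution that a downstream ergodic controller could track is:

```python
import numpy as np

def gesture_to_target_distribution(touch_points, grid_size=64, sigma=0.05):
    """Convert touchscreen gesture points (normalized to [0, 1]^2) into a
    discretized target spatial distribution for an ergodic controller.

    Each touch point contributes an isotropic Gaussian; the mixture is
    normalized so it integrates to one over the unit square.
    """
    xs = np.linspace(0.0, 1.0, grid_size)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    density = np.zeros((grid_size, grid_size))
    for px, py in touch_points:
        density += np.exp(-((X - px) ** 2 + (Y - py) ** 2) / (2.0 * sigma ** 2))
    cell_area = (1.0 / (grid_size - 1)) ** 2
    return density / (density.sum() * cell_area)  # normalize to a probability density

# Example: a swipe across the lower half of the arena
swipe = [(0.1, 0.2), (0.3, 0.25), (0.5, 0.3), (0.7, 0.25), (0.9, 0.2)]
phi = gesture_to_target_distribution(swipe)
```

In this kind of pipeline, the distribution (rather than individual waypoints) is the command: the swarm's controller drives the agents so their time-averaged coverage matches it, which is what keeps the interface usable while team sizes fluctuate.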
                    
                            
                            Ergodic Specifications for Flexible Swarm Control: From User Commands to Persistent Adaptation
                        
                    
    
This paper presents a formulation for swarm control and high-level task planning that is dynamically responsive to user commands and adaptable to environmental changes. We design an end-to-end pipeline from a tactile tablet interface for user commands to onboard control of robotic agents based on decentralized ergodic coverage. Our approach demonstrates reliable and dynamic control of a swarm collective through the use of ergodic specifications for planning and executing agent trajectories as well as responding to user and external inputs. We validate our approach in a virtual reality simulation environment and in real-world experiments at the DARPA OFFSET Urban Swarm Challenge FX3 field tests with a robotic swarm, where user-based control of the swarm and mission-based tasks require a dynamic and flexible response to changing conditions and objectives in real-time.
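For reference, decentralized ergodic coverage of this kind typically plans against a spectral ergodic metric that compares the time-averaged statistics of agent trajectories to the target spatial distribution, following the standard Fourier-coefficient formulation. The sketch below is a simplified, self-contained illustration rather than the paper's implementation; the grid discretization and the omission of the basis normalization constants are shortcuts taken here.

```python
import numpy as np

def ergodic_metric(trajectory, phi, num_k=10):
    """Spectral ergodic metric on the unit square: a weighted sum of squared
    differences between trajectory and target Fourier coefficients.

    trajectory : (T, 2) array of agent positions in [0, 1]^2
    phi        : (N, N) array, target spatial distribution on a uniform grid
    """
    trajectory = np.asarray(trajectory, dtype=float)
    n_grid = phi.shape[0]
    xs = np.linspace(0.0, 1.0, n_grid)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    cell_area = (1.0 / (n_grid - 1)) ** 2

    metric = 0.0
    for k1 in range(num_k):
        for k2 in range(num_k):
            # Cosine basis on [0, 1]^2 (normalization constant h_k omitted in this sketch)
            fk_grid = np.cos(k1 * np.pi * X) * np.cos(k2 * np.pi * Y)
            fk_traj = np.cos(k1 * np.pi * trajectory[:, 0]) * np.cos(k2 * np.pi * trajectory[:, 1])

            phi_k = np.sum(phi * fk_grid) * cell_area   # target coefficient
            c_k = np.mean(fk_traj)                      # time-averaged trajectory coefficient
            lam_k = (1.0 + k1 ** 2 + k2 ** 2) ** (-1.5) # Sobolev weight, s = (n + 1) / 2
            metric += lam_k * (c_k - phi_k) ** 2
    return metric
```

In a decentralized implementation each agent would maintain and exchange its own running trajectory coefficients so the team jointly drives this metric toward zero; that bookkeeping is omitted from the sketch.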
        
    
- Award ID(s): 1837515
- PAR ID: 10175963
- Date Published:
- Journal Name: Robotics: Science and Systems
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- To walk over constrained environments, bipedal robots must meet concise control objectives of speed and foot placement. The decisions made at the current step need to factor in their effects over a time horizon. Such step-to-step control is formulated as a two-point boundary value problem (2-BVP). As the dimensionality of the biped increases, it becomes increasingly difficult to solve this 2-BVP in real time. The common method of using a simple linearized model for real-time planning, followed by mapping onto the high-dimensional model, cannot capture the nonlinearities and leads to potentially poor performance at fast walking speeds. In this paper, we present a framework for real-time control based on using partial feedback linearization (PFL) for model reduction, followed by a data-driven approach to find a quadratic polynomial model for the 2-BVP. This simple step-to-step model along with constraints is then used to formulate and solve a quadratically constrained quadratic program to generate real-time control commands. We demonstrate the efficacy of the approach in simulation on a 5-link biped following a reference velocity profile and on a terrain with ditches.
- For a wide variety of envisioned humanitarian and commercial applications that involve a human user commanding a swarm of robotic systems, developing human-swarm interaction (HSI) principles and interfaces calls for systematic virtual environments to study such HSI implementations. Specifically, such studies are fundamental to achieving HSI that is operationally efficient and can facilitate trust calibration through the collection, use, and modeling of cognitive information. However, there is a lack of such virtual environments, especially in the context of studying HSI in different operationally relevant contexts. Building on our previous work in swarm simulation and computer game-based HSI, this paper develops a comprehensive virtual environment to study HSI under varying swarm size, swarm compliance, and swarm-to-human feedback. This paper demonstrates how this simulation environment informs the development of an indoor physical (experimentation) environment to evaluate the human cognitive model. New approaches are presented to simulate physical assets based on physical experiment-based calibration and the effects that this presents on the human users. Key features of the simulation environment include medium-fidelity simulation of large teams of small aerial and ground vehicles (based on the PyBullet engine), a graphical user interface to receive human commands and provide feedback (from swarm assets) to the human in the case of non-compliance with commands, and a lab-streaming layer to synchronize physiological data collection (e.g., related to brain activity and eye gaze) with swarm state and human commands.
- Dexterous telemanipulation is crucial in advancing human-robot systems, especially in tasks requiring precise and safe manipulation. However, it faces significant challenges due to the physical differences between human and robotic hands, the dynamic interaction with objects, and the indirect control and perception of the remote environment. Current approaches predominantly focus on mapping the human hand onto robotic counterparts to replicate motions, which exhibits a critical oversight: it often neglects the physical interaction with objects and relegates the interaction burden to the human, who must adapt and make laborious adjustments in response to the indirect and counter-intuitive observation of the remote environment. This work develops an End-Effects-Oriented Learning-based Dexterous Telemanipulation (EFOLD) framework to address telemanipulation tasks. EFOLD models telemanipulation as a Markov Game, introducing multiple end-effect features to interpret the human operator’s commands during interaction with objects. These features are used by a Deep Reinforcement Learning policy to control the robot and reproduce such end effects. EFOLD was evaluated with real human subjects and two end-effect extraction methods for controlling a virtual Shadow Robot Hand in telemanipulation tasks. EFOLD achieved real-time control capability with low command-following latency (delay < 0.11 s) and highly accurate tracking (MSE < 0.084 rad).
- During a natural disaster such as a hurricane, earthquake, or fire, robots have the potential to explore vast areas and provide valuable aid in search & rescue efforts. These scenarios are often high-pressure and time-critical with dynamically changing task goals. One limitation to these large-scale deployments is effective human-robot interaction. Prior work shows that collaboration between one human and one robot benefits from shared control. Here we evaluate the efficacy of shared control for human-swarm teaming in an immersive virtual reality environment. Although there are many human-swarm interaction paradigms, few are evaluated in high-pressure settings representative of their intended end use. We have developed an open-source virtual reality testbed for realistic evaluation of human-swarm teaming performance under pressure. We conduct a user study (n=16) comparing four human-swarm paradigms to a baseline condition with no robotic assistance. Shared control significantly reduces the number of instructions needed to operate the robots. While shared control leads to marginally improved team performance in experienced participants, novices perform best when the robots are fully autonomous. Our experimental results suggest that in immersive, high-pressure settings, the benefits of robotic assistance may depend on how the human and robots interact and on the human operator’s expertise.