HARMONIC: A multimodal dataset of assistive human–robot collaboration
            We present the Human And Robot Multimodal Observations of Natural Interactive Collaboration (HARMONIC) dataset. This is a large multimodal dataset of human interactions with a robotic arm in a shared autonomy setting designed to imitate assistive eating. The dataset provides human, robot, and environmental data views of 24 different people engaged in an assistive eating task with a 6-degree-of-freedom (6-DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6-DOF robot arm. Also included are several features that come as a direct result of these recordings, such as eye gaze projected onto the egocentric video, body pose, hand pose, and facial keypoints. These data streams were collected specifically because they have been shown to be closely related to human mental states and intention. This dataset could be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats such as video and human-readable CSV and YAML files. 
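As a quick illustration of working with those formats, the following Python sketch loads two of the streams for one participant. The directory layout, file names, and column structure here are assumptions made for illustration only, not the dataset's documented schema; consult the HARMONIC documentation for the actual organization.

```python
# Minimal sketch: loading two HARMONIC-style data streams for one participant.
# File names and layout below are illustrative assumptions, not the dataset's
# documented schema.
from pathlib import Path

import pandas as pd
import yaml


def load_participant(root: Path, participant_id: int):
    pdir = root / f"participant_{participant_id:02d}"  # hypothetical layout
    # Joystick commands as a time-indexed table (CSV is human-readable).
    joystick = pd.read_csv(pdir / "joystick.csv")
    # Session metadata (e.g., trial labels, calibration info) from YAML.
    with open(pdir / "metadata.yaml") as f:
        meta = yaml.safe_load(f)
    return joystick, meta


# Example usage (requires the data on disk):
# joystick, meta = load_participant(Path("harmonic_data"), participant_id=1)
```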
- Award ID(s): 1943072
- PAR ID: 10361751
- Publisher / Repository: SAGE Publications
- Date Published:
- Journal Name: The International Journal of Robotics Research
- Volume: 41
- Issue: 1
- ISSN: 0278-3649
- Format(s): Medium: X; Size: p. 3-11
- Sponsoring Org: National Science Foundation
More Like this
- Shared autonomy provides an effective framework for human-robot collaboration that takes advantage of the complementary strengths of humans and robots to achieve common goals. Many existing approaches to shared autonomy make the restrictive assumption that the goal space, environment dynamics, or human policy is known a priori, or are limited to discrete action spaces, preventing them from scaling to complex real-world environments. We propose a model-free, residual policy learning algorithm for shared autonomy that alleviates the need for these assumptions. Our agents are trained to minimally adjust the human's actions such that a set of goal-agnostic constraints is satisfied. We test our method in two continuous control environments: Lunar Lander, a 2D flight control domain, and a 6-DOF quadrotor reaching task. In experiments with human and surrogate pilots, our method significantly improves task performance without any knowledge of the human's goal beyond the constraints. These results highlight the ability of model-free deep reinforcement learning to realize assistive agents suited to continuous control settings with little knowledge of user intent.
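A minimal sketch of the action-blending idea this abstract describes might look like the following; the policy interface, clipping bounds, and reward terms are illustrative stand-ins rather than the paper's exact formulation.

```python
import numpy as np

# Sketch of residual shared autonomy: the assistive agent observes the state
# and the human's action, and outputs a small corrective residual. All names
# and reward terms below are illustrative assumptions.


def assisted_action(state, human_action, residual_policy, act_low, act_high):
    """Execute the human's action plus a learned residual correction."""
    residual = residual_policy(np.concatenate([state, human_action]))
    return np.clip(human_action + residual, act_low, act_high)


def shaped_reward(constraint_violation_cost, residual, alpha=0.1):
    """Goal-agnostic reward: penalize constraint violations (e.g., unsafe
    tilt or descent rate) plus large deviations from the pilot's command,
    so the agent learns to intervene minimally."""
    return -constraint_violation_cost - alpha * np.linalg.norm(residual) ** 2
```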
- Humans are adept at navigating public spaces shared with others, where current autonomous mobile robots still struggle: while safely and efficiently reaching their goals, humans communicate their intentions and conform to unwritten social norms on a daily basis; conversely, robots become clumsy in these everyday social scenarios, getting stuck in dense crowds, surprising nearby pedestrians, or even causing collisions. While recent research on robot learning has shown promise in data-driven social robot navigation, good-quality training data is still difficult to acquire through either trial and error or expert demonstrations. In this work, we propose to utilize the rich body of widely available social human navigation data from natural human-inhabited public spaces for robots to learn similar, human-like, socially compliant navigation behaviors. Specifically, we design an open-source egocentric data-collection sensor suite wearable by walking humans to provide multimodal robot perception data; we collect a large-scale (~100 km, 20 hours, 300 trials, 13 humans) dataset in a variety of public spaces containing numerous natural social navigation interactions; and we analyze our dataset, demonstrate its usability, and point out future research directions and use cases. Website: https://cs.gmu.edu/~xiao/Research/MuSoHu/
- Underwater ROVs (Remotely Operated Vehicles) are unmanned submersibles designed for exploring and operating in the depths of the ocean. Despite using high-end cameras, typical teleoperation engines based on first-person (egocentric) views limit a surface operator's ability to maneuver the ROV in complex deep-water missions. In this paper, we present an interactive teleoperation interface that enhances operational capability through increased situational awareness. This is accomplished by (i) offering on-demand third-person (exocentric) visuals synthesized from past egocentric views, and (ii) facilitating enhanced peripheral information with the ROV's pose augmented in real time. We achieve this by integrating a 3D geometry-based Ego-to-Exo view synthesis algorithm into a monocular SLAM system for accurate trajectory estimation. The proposed closed-form solution uses only past egocentric views from the ROV and a SLAM backbone for pose estimation, which makes it portable to existing ROV platforms. Unlike data-driven solutions, it is invariant to applications and waterbody-specific scenes. We validate the geometric accuracy of the proposed framework through extensive experiments on 2-DOF indoor navigation and 6-DOF underwater cave exploration in challenging low-light conditions. A subjective evaluation by 15 human teleoperators further confirms the effectiveness of the integrated features for improved teleoperation. We demonstrate the benefits of dynamic Ego-to-Exo view generation and real-time pose rendering for remote ROV teleoperation by following navigation guides such as cavelines inside underwater caves. This new mode of interactive ROV teleoperation opens up promising opportunities for future research in subsea telerobotics.
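The geometric core of such a closed-form Ego-to-Exo step can be sketched as projecting the ROV's current SLAM pose into a past egocentric frame, so that frame serves as a third-person view. The pose conventions and function names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Sketch: given two SLAM poses, draw the ROV's current position into a past
# egocentric camera frame. Conventions (camera-to-world poses, pinhole model)
# are assumed for illustration.


def project_rov_into_past_view(T_world_past, T_world_now, K):
    """Project the ROV's current position into a past camera's image.

    T_world_past, T_world_now: 4x4 camera-to-world poses from SLAM.
    K: 3x3 pinhole intrinsics of the egocentric camera.
    Returns the pixel (u, v) where the ROV should be overlaid.
    """
    p_world = T_world_now[:3, 3]                # current ROV position
    T_past_world = np.linalg.inv(T_world_past)  # world -> past camera frame
    p_cam = T_past_world[:3, :3] @ p_world + T_past_world[:3, 3]
    assert p_cam[2] > 0, "ROV must lie in front of the past camera"
    uvw = K @ p_cam                             # pinhole projection
    return uvw[:2] / uvw[2]
```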
- This paper presents the design and validation of a wearable shoulder exoskeleton robot intended to serve as a platform for assistive controllers that can mitigate the risk of musculoskeletal disorders seen in workers. The design features a four-bar mechanism that moves the exoskeleton's center of mass from the upper shoulders to the user's torso; a dual-purpose gravity compensation mechanism, located inside the four-bar's linkages, that supports the full gravitational loading of the exoskeleton while partially compensating the weight of the user's arm; and a novel 6 degree-of-freedom (DoF) compliant misalignment compensation mechanism, located between the end effector and the user's arm, that allows shoulder translation while maintaining control of the arm's direction. Simulations show the four-bar design lowers the center of mass by 11 cm and that the kinematic chain can follow common upper-arm trajectories. Experimental tests show that the gravity compensation mechanism compensates gravitational loading to within ±0.5 Nm over the range of shoulder motion, and that the misalignment compensation mechanism has the desired 6-DoF stiffness characteristics and range of motion to accommodate shoulder-center translation. Finally, a workspace admittance controller was implemented and evaluated, showing that the system can accurately reproduce simulated impedance behavior while remaining transparent during low-impedance human operation.
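For intuition, a one-axis admittance law of the kind mentioned above, a virtual mass-damper-spring mapping measured interaction force to commanded end-effector motion, can be sketched as follows. The gains are placeholders, not the paper's tuned values.

```python
import numpy as np

# One-axis admittance sketch: M*a + D*v + K*x = F_ext, integrated with Euler
# steps to produce a commanded displacement from a measured force. Gains are
# illustrative placeholders.


def admittance_step(x, v, f_ext, dt, M=2.0, D=15.0, K=50.0):
    """One Euler-integration step of the admittance dynamics."""
    a = (f_ext - D * v - K * x) / M  # virtual acceleration
    v = v + a * dt                   # commanded velocity
    x = x + v * dt                   # commanded displacement
    return x, v


x, v = 0.0, 0.0
for f in [5.0] * 100:                # constant 5 N push for 0.1 s
    x, v = admittance_step(x, v, f, dt=0.001)
print(f"displacement after push: {x:.4f} m")
```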
- This paper addresses the problem of autonomously deploying an unmanned aerial vehicle in non-trivial settings by leveraging a manipulator arm mounted on a ground robot, acting as a versatile mobile launch platform. As real-world deployment scenarios for micro aerial vehicles, such as search-and-rescue operations, often entail exploration and navigation of challenging environments including uneven terrain, cluttered spaces, or even constrained openings and passageways, an often-arising problem is that of ensuring a safe take-off location, or safely fitting through narrow openings while in flight. By launching from the manipulator end-effector, a 6-DoF controllable take-off pose within the arm workspace can be achieved, which allows the aerial vehicle to be properly positioned and oriented to initialize the autonomous flight portion of a mission. To accomplish this, we propose a sampling-based planner that respects a) the kinematic constraints of the ground robot / manipulator / aerial robot combination, b) the geometry of the environment as autonomously mapped by the ground robot's perception systems, and c) the aerial robot's expected dynamic motion during take-off. The goal of the proposed planner is to ensure autonomous, collision-free initialization of an aerial robotic exploration mission, even within a cluttered, constrained environment. At the same time, the ground robot with the mounted manipulator can be used to appropriately position the take-off workspace into areas of interest, effectively acting as a carrier launch platform. We experimentally demonstrate this novel robotic capability through a sequence of experiments in which a micro aerial vehicle is carried and launched from a 6-DoF manipulator arm mounted on a four-wheel robot base.
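A conceptual sketch of the rejection-sampling loop such a planner could use is given below; the three predicates mirror the planner's constraints a), b), and c), and every helper function is a hypothetical stand-in for a real IK solver, occupancy map, and take-off dynamics check.

```python
import numpy as np

# Conceptual rejection-sampling sketch for a valid 6-DoF take-off pose.
# All predicate functions are hypothetical placeholders.


def sample_takeoff_pose(workspace_bounds, is_reachable, is_collision_free,
                        takeoff_corridor_clear, rng, max_tries=1000):
    lo, hi = workspace_bounds  # 6-vector bounds: xyz + roll/pitch/yaw
    for _ in range(max_tries):
        pose = rng.uniform(lo, hi)
        if (is_reachable(pose)                      # a) arm kinematics admit pose
                and is_collision_free(pose)         # b) pose clear in mapped scene
                and takeoff_corridor_clear(pose)):  # c) room for climb-out dynamics
            return pose
    return None  # no valid launch pose found


rng = np.random.default_rng(0)
pose = sample_takeoff_pose(
    (np.array([-0.5, -0.5, 0.2, -0.3, -0.3, -np.pi]),
     np.array([0.5, 0.5, 1.0, 0.3, 0.3, np.pi])),
    is_reachable=lambda p: True,       # stand-ins for real checks
    is_collision_free=lambda p: True,
    takeoff_corridor_clear=lambda p: True,
    rng=rng)
```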