Title: How to be Helpful? Supportive Behaviors and Personalization for Human-Robot Collaboration
The field of Human-Robot Collaboration (HRC) has seen considerable progress in recent years. Thanks in part to advances in control and perception algorithms, robots have started to work in increasingly unstructured environments, where they operate side by side with humans to achieve shared tasks. However, little progress has been made toward systems that are truly effective in supporting the human, proactive in their collaboration, and able to autonomously take care of part of the task. In this work, we present a collaborative system capable of assisting a human worker despite limited manipulation capabilities, an incomplete model of the task, and partial observability of the environment. Our framework leverages information from a high-level, hierarchical model that is shared between the human and robot and that enables transparent synchronization between the peers and mutual understanding of each other's plan. More precisely, we first derive a partially observable Markov model from the high-level task representation; we then use an online Monte-Carlo solver to compute a short-horizon, robot-executable plan. The resulting policy is capable of interactive on-the-fly replanning, dynamic error recovery, and identification of hidden user preferences. We demonstrate that the system robustly supports the human in a realistic furniture-construction task.
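As a rough sketch of the planning loop described in the abstract (not the authors' implementation), the snippet below maintains a particle belief over a hidden user preference and uses short-horizon Monte-Carlo rollouts to pick the robot's next supportive action. The toy furniture model, the action and preference names, and all probabilities are illustrative assumptions.

```python
import random

ACTIONS = ["offer_leg", "offer_screw", "wait"]
PREFS = ["legs_first", "screws_first"]          # hidden user preference

def step(pref, action):
    """Toy generative model: returns (reward, observation)."""
    if action == "wait":
        return 0.0, "idle"
    good = (action == "offer_leg") == (pref == "legs_first")
    p_accept = 0.9 if good else 0.1             # noisy human response
    obs = "accept" if random.random() < p_accept else "reject"
    return (1.0 if good else -1.0), obs

def q_estimate(belief, action, horizon=3, n_rollouts=200):
    """Monte-Carlo estimate of the short-horizon return of `action`."""
    total = 0.0
    for _ in range(n_rollouts):
        pref = random.choice(belief)            # sample a particle
        r, _ = step(pref, action)
        ret = r
        for _ in range(horizon - 1):            # random rollout policy
            r, _ = step(pref, random.choice(ACTIONS))
            ret += r
        total += ret
    return total / n_rollouts

def plan(belief):
    return max(ACTIONS, key=lambda a: q_estimate(belief, a))

def update_belief(belief, action, obs, n_particles=500):
    """Reweight and resample particles consistent with the observation."""
    prefs, weights = [], []
    for pref in belief:
        good = (action == "offer_leg") == (pref == "legs_first")
        p_accept = 0.9 if good else 0.1
        w = 1.0 if action == "wait" else (p_accept if obs == "accept"
                                          else 1.0 - p_accept)
        prefs.append(pref)
        weights.append(w)
    return random.choices(prefs, weights=weights, k=n_particles)

belief = [random.choice(PREFS) for _ in range(500)]   # uniform prior
true_pref = "legs_first"
for t in range(5):
    a = plan(belief)
    r, o = step(true_pref, a)
    belief = update_belief(belief, a, o)
    print(t, a, o, belief.count("legs_first") / len(belief))
```

After a few interactions the belief concentrates on the true preference, mirroring the paper's identification of hidden user preferences while replanning online.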
Award ID(s):
1955653 1928448 1813651
PAR ID:
10354178
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Robotics and AI
Volume:
8
ISSN:
2296-9144
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Robots are increasingly being employed for diverse applications where they must work and coexist with humans. Trust in human-robot collaboration (HRC) is a critical aspect of any shared-task performance, for both the human and the robot. How a human comes to trust a robot has been investigated by numerous researchers; however, how a robot should trust a human, an equally significant issue in HRC, is seldom explored in robotics. Motivated by this gap, we propose a novel trust-assist framework for human-robot co-carry tasks. This framework allows the robot to determine a trust level for its human co-carry partner, computed from human motions, past interactions between the human-robot pair, and the human's current performance in the co-carry task. The trust level is evaluated dynamically throughout the collaborative task, so it can change if the human performs false-positive actions, which helps the robot avoid making unpredictable movements and causing injury to the human. Additionally, the proposed framework enables the robot to generate and perform assisting movements that follow the human's carrying motions and pace when the human is considered trustworthy. The results of our experiments suggest that the robot effectively assists the human in real-world collaborative tasks through the proposed trust-assist framework.
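A minimal sketch of a dynamic trust update of this flavor, assuming a simple weighted blend of motion consistency, interaction history, and current performance; the weights and the 0.6 assistance threshold are invented for illustration, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class TrustModel:
    trust: float = 0.5                  # start from a neutral trust level
    history: list = field(default_factory=list)
    w_motion: float = 0.4               # assumed blend weights
    w_history: float = 0.3
    w_perf: float = 0.3

    def update(self, motion_consistency, task_performance):
        """Blend current cues (all in [0, 1]) with the interaction history."""
        hist = sum(self.history) / len(self.history) if self.history else 0.5
        self.trust = (self.w_motion * motion_consistency
                      + self.w_history * hist
                      + self.w_perf * task_performance)
        self.history.append(task_performance)
        return self.trust

    def assist(self):
        """Follow the human's carrying motion only when trust is high."""
        return self.trust >= 0.6

model = TrustModel()
for motion, perf in [(0.9, 0.8), (0.2, 0.3), (0.85, 0.9)]:
    t = model.update(motion, perf)
    print(f"trust={t:.2f}, assist={model.assist()}")
```

The second interaction (erratic motion, poor performance) drops the trust level below the threshold, so the robot withholds assisting movements, matching the abstract's account of trust changing dynamically during the task.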
  2. In order to achieve effective human-AI collaboration, an AI agent must align its behavior with the human's expectations. When the agent generates a task plan without such considerations, the result is often behavior that is inexplicable from the human's point of view. This can have serious implications for the human, from increased cognitive load to safety concerns around the physical agent. In this work, we present an approach to generating explicable behavior by minimizing the distance between the agent's plan and the plan expected by the human. To this end, we learn a mapping between plan distances (distances between expected and agent plans) and the human's plan-scoring scheme; the plan-generation process then uses this learned model as a heuristic. We demonstrate the effectiveness of our approach in a delivery-robot domain.
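The selection step can be illustrated with a small sketch, assuming Levenshtein distance between action sequences as the plan distance and a linear stand-in for the learned mapping from distance to the human's explicability score (the paper learns this mapping from human data):

```python
def edit_distance(p, q):
    """Levenshtein distance between two action sequences."""
    dp = list(range(len(q) + 1))
    for i, a in enumerate(p, 1):
        prev, dp[0] = dp[0], i
        for j, b in enumerate(q, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (a != b))
    return dp[-1]

def explicability(distance, slope=-0.2, intercept=1.0):
    """Assumed learned mapping: the score drops as plans diverge."""
    return max(0.0, intercept + slope * distance)

def select_plan(candidates, expected, w=1.0):
    """Trade plan cost off against predicted explicability."""
    def objective(plan):
        return len(plan) - w * explicability(edit_distance(plan, expected))
    return min(candidates, key=objective)

expected = ["pickup", "move", "deliver"]
candidates = [
    ["pickup", "move", "deliver"],
    ["pickup", "detour", "move", "deliver"],  # a step the human won't expect
]
print(select_plan(candidates, expected))      # picks the explicable plan
```

In the paper the heuristic guides plan generation itself rather than a post-hoc ranking; the ranking above is only the simplest way to show the distance-to-score trade-off.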
  3. Product disassembly is essential for remanufacturing operations and the recovery of end-of-use devices. However, disassembly has often been performed manually, posing significant safety issues for human workers. Recently, human-robot collaboration has become popular as a way to reduce the human workload and handle hazardous materials. However, due to their current limitations, robots are not fully capable of performing every disassembly task, so it is critical to determine whether a robot can accomplish a specific one. This study develops a disassembly score that represents how easy it is for a robot to disassemble a component, considering the attributes of the component along with the robot's capabilities. Five factors are considered in developing the score: component weight, shape, size, accessibility, and positioning. Further, the relationship between the five factors and robotic capabilities, such as grabbing and placing, is discussed. The MaxViT (Multi-Axis Vision Transformer) model is used to determine component sizes through image processing of an XPS 8700 desktop, demonstrating the potential for automating disassembly-score generation. Moreover, the proposed disassembly score is discussed in terms of determining the appropriate work setting for disassembly operations, under three main categories: human-robot collaboration (HRC), semi-HRC, and worker-only settings. A framework for calculating disassembly time under human-robot collaboration is also proposed.
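A compact sketch of how such a score and the work-setting decision might be computed, assuming the five factors are normalized to [0, 1] (higher meaning easier for the robot) and that the weights and thresholds are illustrative, not the study's calibrated values:

```python
# Assumed relative importance of the five factors from the abstract.
WEIGHTS = {"weight": 0.25, "shape": 0.20, "size": 0.20,
           "accessibility": 0.20, "positioning": 0.15}

def disassembly_score(factors):
    """Weighted sum of the five component attributes, in [0, 1]."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

def work_setting(score):
    """Map the score to one of the three proposed settings."""
    if score >= 0.7:
        return "HRC"            # robot can handle most of the task
    if score >= 0.4:
        return "semi-HRC"       # robot assists, human leads
    return "worker-only"

# Hypothetical scores for a hard drive inside a desktop chassis.
hdd = {"weight": 0.9, "shape": 0.8, "size": 0.7,
       "accessibility": 0.5, "positioning": 0.6}
s = disassembly_score(hdd)
print(f"score={s:.2f} -> {work_setting(s)}")   # score=0.72 -> HRC
```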
  4. To enable sophisticated interactions between humans and robots in a shared environment, robots must infer the intentions and strategies of their human counterparts. This inference can provide a competitive edge to the robot or enhance human-robot collaboration by reducing the need for explicit communication about task decisions. In this work, we identify specific states within the shared environment, which we refer to as Critical Decision Points, where the actions of a human are especially indicative of their high-level strategy. A robot can significantly reduce its uncertainty about the human's strategy by observing actions at these points. To demonstrate the practical value of Critical Decision Points, we propose a Receding Horizon Planning (RHP) approach for the robot to influence the movement of a human opponent in a competitive game of hide-and-seek in a partially observable setting. The human plays as the hider and the robot as the seeker. We show that the seeker can influence the hider to move towards Critical Decision Points, which facilitates a more accurate estimate of the hider's strategy. In turn, this helps the seeker catch the hider faster than either estimating the hider's strategy only whenever the hider is visible or merely minimizing the seeker's distance to the hider.
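One way to make the idea concrete, under assumed toy hider policies and a uniform prior, is to score each state by the expected reduction in belief entropy from observing the human's action there; the state where the strategies' action distributions diverge most is the Critical Decision Point:

```python
import math

# Assumed P(action | strategy, state) for two hider strategies.
POLICIES = {
    "hide_north": {"s0": {"up": 0.8, "down": 0.2},
                   "s1": {"up": 0.5, "down": 0.5}},
    "hide_south": {"s0": {"up": 0.2, "down": 0.8},
                   "s1": {"up": 0.5, "down": 0.5}},
}
ACTIONS = ["up", "down"]

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def info_gain(state, prior):
    """Expected entropy drop in the strategy belief at `state`."""
    h_prior = entropy(prior)
    gain = 0.0
    for a in ACTIONS:
        # Marginal likelihood of the action under the current belief.
        p_a = sum(prior[s] * POLICIES[s][state][a] for s in prior)
        if p_a == 0:
            continue
        post = {s: prior[s] * POLICIES[s][state][a] / p_a for s in prior}
        gain += p_a * (h_prior - entropy(post))
    return gain

prior = {"hide_north": 0.5, "hide_south": 0.5}
scores = {st: info_gain(st, prior) for st in ["s0", "s1"]}
print(max(scores, key=scores.get), scores)   # s0 is the Critical Decision Point
```

At s1 both strategies act identically, so observing the hider there is uninformative; at s0 their action distributions diverge, which is exactly what makes it worth steering the hider toward.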
  5. Robots can influence people to accomplish their tasks more efficiently: autonomous cars can inch forward at an intersection to pass through, and tabletop manipulators can go for an object on the table first. However, a robot's ability to influence can also compromise the physical safety of nearby people if naively executed. In this work, we pose and solve a novel robust reach-avoid dynamic game which enables robots to be maximally influential, but only when a safety backup control exists. On the human side, we model the human's behavior as goal-driven but conditioned on the robot's plan, enabling us to capture influence. On the robot side, we solve the dynamic game in the joint physical and belief space, enabling the robot to reason about how its uncertainty in human behavior will evolve over time. We instantiate our method, called SLIDE (Safely Leveraging Influence in Dynamic Environments), in a high-dimensional (39-D) simulated human-robot collaborative manipulation task solved via offline game-theoretic reinforcement learning. We compare our approach to a robust baseline that treats the human as a worst-case adversary, a safety controller that does not explicitly reason about influence, and an energy-function-based safety shield. We find that SLIDE consistently enables the robot to leverage the influence it has on the human when it is safe to do so, ultimately allowing the robot to be less conservative while still ensuring a high safety rate during task execution. Project website: https://cmu-intentlab.github.io/safe-influence/ 
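The core safety logic can be sketched as a filter: execute the influential action only if a backup controller certifies safety from the resulting state. The 1-D dynamics, braking backup, and worst-case human model below are toy assumptions, not the SLIDE game solution:

```python
SAFE_DIST = 1.0   # minimum allowed separation (assumed)
DT = 0.1          # simulation time step

def rollout_backup_safe(x_robot, x_human, v_human, horizon=20):
    """Forward-simulate a braking backup; True if separation holds."""
    for _ in range(horizon):
        # Backup policy: robot stops; human keeps moving (worst case).
        x_human += v_human * DT
        if abs(x_robot - x_human) < SAFE_DIST:
            return False
    return True

def choose_action(x_robot, x_human, v_human):
    """Prefer the influential action when the safety backup certifies it."""
    influential_v, conservative_v = 1.0, 0.2
    # Check that a backup exists from the state one influential step ahead.
    x_next = x_robot + influential_v * DT
    if rollout_backup_safe(x_next, x_human + v_human * DT, v_human):
        return influential_v
    return conservative_v

print(choose_action(x_robot=0.0, x_human=5.0, v_human=-0.5))  # 1.0: influence
print(choose_action(x_robot=0.0, x_human=1.6, v_human=-0.5))  # 0.2: back off
```

SLIDE itself solves the reach-avoid game in the joint physical and belief space via offline game-theoretic reinforcement learning; the sketch only conveys the "be influential exactly when a safety backup exists" decision rule.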