In this paper, we analyze and report on observable trends in human-human dyads performing collaborative manipulation (co-manipulation) tasks with an extended object (an object of significant length). We present a detailed analysis relating trends in interaction forces and torques to other metrics, and propose that these trends could provide a way of improving communication and efficiency for human-robot dyads. We find that the motion of the co-manipulated object has a measurable oscillatory component. We confirm that haptic feedback alone represents a sufficient communication channel for co-manipulation tasks; however, we find that the loss of the visual and auditory channels has a significant effect on interaction torque and velocity. The main objective of this paper is to lay the essential groundwork for defining principles of co-manipulation between human dyads. We propose that these principles could enable effective and intuitive human-robot collaborative manipulation in future co-manipulation research.
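For illustration, the oscillatory component of object motion can be estimated by spectral analysis of the recorded velocity. Below is a minimal NumPy sketch; the uniform sampling assumption and the simple FFT-peak criterion are illustrative choices, not details from the paper.

```python
import numpy as np

def dominant_oscillation(velocity, fs):
    """Estimate the dominant oscillation frequency in an object-velocity trace.

    velocity : 1-D array of object speed samples (m/s), uniformly sampled
    fs       : sampling rate in Hz
    """
    v = velocity - np.mean(velocity)           # remove the DC (net-motion) component
    spectrum = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    peak = np.argmax(spectrum[1:]) + 1         # skip the zero-frequency bin
    return freqs[peak], spectrum[peak]         # dominant frequency and its magnitude
```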
Human-robot planar co-manipulation of extended objects: data-driven models and control from human-human dyads
Human teams are able to easily perform collaborative manipulation tasks. However, simultaneously manipulating a large extended object is difficult for a human-robot pair due to the inherent ambiguity in the desired motion. Our approach in this paper is to leverage data from human-human dyad experiments to determine motion intent for a physical human-robot co-manipulation task. We do this by showing that the human-human dyad data exhibit distinct torque triggers for a lateral movement. As an alternative intent-estimation method, we also develop a deep neural network, trained on motion data from human-human trials, to predict future trajectories from past object motion. We then show how force and motion data can be used to determine robot control in a human-robot dyad. Finally, we compare human-human dyad performance to the performance of two controllers that we developed for human-robot co-manipulation. We evaluate these controllers in three-degree-of-freedom planar motion, where determining whether the task involves rotation or translation is ambiguous.
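The torque-trigger idea lends itself to a simple illustration: monitor the interaction torque about the vertical axis and flag a lateral-motion intent when it stays beyond a threshold. The threshold and hold duration below are hypothetical values chosen for illustration, not the triggers derived from the dyad data.

```python
import numpy as np

# Hypothetical values for illustration; the paper derives its triggers from dyad data.
TORQUE_THRESHOLD = 2.0   # N*m
HOLD_SAMPLES = 25        # samples torque must stay beyond threshold (~0.1 s at 250 Hz)

def detect_lateral_trigger(tau_z):
    """Return the first sample index at which the vertical-axis interaction torque
    stays beyond the threshold long enough to count as a lateral-motion trigger,
    or None if no trigger occurs."""
    above = np.abs(tau_z) > TORQUE_THRESHOLD
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= HOLD_SAMPLES:
            return i - HOLD_SAMPLES + 1
    return None
```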
- Award ID(s): 2024792
- PAR ID: 10561970
- Publisher / Repository: Frontiers in Neurorobotics
- Date Published:
- Journal Name: Frontiers in Neurorobotics
- Volume: 18
- ISSN: 1662-5218
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Despite the existence of robots that can physically lift heavy loads, robots that can collaborate with people to move heavy objects are not readily available. This article makes progress toward effective human-robot co-manipulation by studying 30 human-human dyads that collaboratively manipulated an object weighing 27 kg without being co-located (i.e., participants were at either end of the extended object). Participants maneuvered around different obstacles with the object while exhibiting one of four modi (the manner or objective with which a team moves an object together) at any given time. Using force and motion signals to classify modus or behavior was the primary objective of this work. Our results showed that two of the originally proposed modi were very similar, such that one could effectively be removed while still spanning the space of common behaviors during our co-manipulation tasks. The three modi used in classification were "quickly," "smoothly," and "avoiding obstacles." Using a deep convolutional neural network (CNN), we classified the three modi with up to 89% accuracy on a validation set. The capability to detect or classify modus during co-manipulation has the potential to greatly improve human-robot performance by helping to define appropriate robot behavior or controller parameters depending on the objective or modus of the team.
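As a rough illustration of this kind of classifier, the sketch below shows a small 1-D CNN over fixed-length windows of force and motion channels, written in PyTorch. The channel count, window handling, and layer sizes are assumptions; the paper's actual architecture is not specified in this abstract.

```python
import torch
import torch.nn as nn

class ModusCNN(nn.Module):
    """Illustrative 1-D CNN over windows of force/motion channels.
    The channel count (e.g., 6 force/torque + 6 pose/velocity) and the
    architecture are assumptions, not the paper's exact network."""
    def __init__(self, n_channels=12, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time to a fixed-size feature
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)      # logits for {quickly, smoothly, avoiding obstacles}
```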
The effectiveness of human-robot interactions critically depends on the success of computational efforts to emulate human inference of intent, anticipation of action, and coordination of movement. To this end, we developed two models that leverage a well-described feature of human movement, Gaussian-shaped submovements in velocity profiles, to act as robotic surrogates for human inference and trajectory planning in a handover task. We evaluated both models based on how early in a handover movement the inference model can obtain accurate estimates of handover location and timing, and how similar model trajectories are to human receiver trajectories. Initial results using one participant dyad demonstrate that our inference model can accurately predict location and handover timing, while the trajectory planner can use these predictions to provide a human-like trajectory plan for the robot. This approach delivers promising performance while remaining grounded in physiologically meaningful Gaussian-shaped velocity profiles of human motion.
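A minimal sketch of the submovement idea: fit a Gaussian to the partially observed hand-speed profile and read off the predicted peak time and movement end. The SciPy-based fit and the three-sigma end-of-movement rule below are illustrative assumptions, not the authors' published method.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, t_peak, width):
    return amp * np.exp(-((t - t_peak) ** 2) / (2.0 * width ** 2))

def predict_handover(t_obs, v_obs):
    """Fit a Gaussian to the partially observed hand-speed profile.
    The fitted peak time and width give a handover-time estimate:
    the movement is roughly complete by t_peak + 3*width."""
    p0 = (v_obs.max(), t_obs[-1], 0.2)          # crude initial guess
    (amp, t_peak, width), _ = curve_fit(gaussian, t_obs, v_obs, p0=p0)
    t_end = t_peak + 3.0 * abs(width)           # ~99.7% of the bell has passed
    return t_end, amp, t_peak, abs(width)
```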
Human intention prediction plays a critical role in human-robot collaboration, as it helps robots improve efficiency and safety by accurately anticipating human intentions and proactively assisting with tasks. While current applications often focus on predicting intent once a human action is completed, recognizing human intent in advance has received less attention. This study aims to equip robots with the capability to forecast human intent before an action is completed, i.e., early intent prediction. To achieve this objective, we first extract features from human motion trajectories by analyzing changes in human joint distances. These features are then used in a Hidden Markov Model (HMM) to determine the state-transition times from uncertain intent to certain intent. Second, we propose two models, a Transformer and a Bi-LSTM, for classifying motion intentions. We then design a human-robot collaboration experiment in which the operator reaches multiple targets while the robot moves continuously along a predetermined path. The data collected through the experiment were divided into two groups: full-length data and partial data before the state transitions detected by the HMM. Finally, the effectiveness of the proposed framework for predicting intentions is assessed using two different datasets, particularly in scenarios where motion trajectories are similar but underlying intentions vary. The results indicate that using partial data prior to motion completion yields better accuracy than using full-length data. Specifically, the Transformer model exhibits a 2% improvement in accuracy, while the Bi-LSTM model demonstrates a 6% increase in accuracy.
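The first stage of this pipeline, detecting the transition from uncertain to certain intent, can be sketched with a two-state Gaussian HMM using the hmmlearn library. The feature matrix layout and model settings below are assumptions for illustration; the study's actual feature design is only summarized in this abstract.

```python
import numpy as np
from hmmlearn import hmm

def transition_time(joint_dist_features):
    """Fit a two-state HMM (uncertain vs. certain intent) to per-frame
    joint-distance-change features and return the first frame at which
    the decoded state switches, or None if no switch occurs.

    joint_dist_features : array of shape (n_frames, n_features)
    """
    model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
    model.fit(joint_dist_features)
    states = model.predict(joint_dist_features)   # most likely state per frame
    switches = np.flatnonzero(np.diff(states) != 0)
    return int(switches[0]) + 1 if switches.size else None
```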
Collaborative robots that provide anticipatory assistance are able to help people complete tasks more quickly. Because anticipatory assistance is provided before help is explicitly requested, there is a chance that this action itself will influence the person's future decisions in the task. In this work, we investigate whether a robot's anticipatory assistance can drive people to make choices different from those they would otherwise make. Such a study requires measuring intent, which itself could modify intent, resulting in an observer paradox. To combat this, we carefully designed an experiment to avoid this effect. We considered several mitigations, such as the careful choice of which human behavioral signals we use to measure intent and designing unobtrusive ways to obtain these signals. We conducted a user study (N = 99) in which participants completed a collaborative object-retrieval task: users selected an object and a robot arm retrieved it for them. The robot predicted the user's object selection from eye gaze in advance of their explicit selection, and then provided either collaborative anticipation (moving toward the predicted object), adversarial anticipation (moving away from the predicted object), or no anticipation (no movement, the control condition). We found trends and participant comments suggesting that people's decision-making changes in the presence of robot anticipatory motion, and that this change differs depending on the robot's anticipation strategy.
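As a toy illustration of gaze-based selection prediction, the sketch below accumulates per-object gaze dwell time and commits to a prediction once one object has been fixated long enough. The dwell threshold and the frame-stream interface are hypothetical; the study's actual gaze-based intent model is not described in this abstract.

```python
import numpy as np

def predict_selection(gaze_targets, n_objects, min_dwell=30):
    """Toy dwell-time predictor: return the first object whose accumulated
    gaze reaches min_dwell frames, or None if no object qualifies.

    gaze_targets : iterable of per-frame gaze hits (object index or None)
    n_objects    : number of candidate objects on the table
    """
    dwell = np.zeros(n_objects, dtype=int)
    for target in gaze_targets:
        if target is not None:
            dwell[target] += 1
            if dwell[target] >= min_dwell:
                return int(target)     # commit early, before explicit selection
    return None
```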