Human–exoskeleton interactions have the potential to bring about changes in human behavior for physical rehabilitation or skill augmentation. Despite significant advances in the design and control of these robots, their application to human training remains limited. The key obstacles to the design of such training paradigms are the prediction of human–exoskeleton interaction effects and the selection of interaction control to affect human behavior. In this article, we present a method to elucidate behavioral changes in the human–exoskeleton system and identify expert behaviors correlated with a task goal. Specifically, we observe the joint coordinations of the robot, also referred to as kinematic coordination behaviors, that emerge from human–exoskeleton interaction during learning. We demonstrate the use of kinematic coordination behaviors with two task domains through a set of three human-subject studies. We find that participants (1) learn novel tasks within the exoskeleton environment, (2) demonstrate similarity of coordination during successful movements within participants, (3) learn to leverage these coordination behaviors to maximize success within participants, and (4) tend to converge to similar coordinations for a given task strategy across participants. At a high level, we identify task-specific joint coordinations that are used by different experts for a given task goal. These coordinations can be quantified by observing experts and the similarity to these coordinations can act as a measure of learning over the course of training for novices. The observed expert coordinations may further be used in the design of adaptive robot interactions aimed at teaching a participant the expert behaviors.
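To make the notion of a kinematic coordination behavior and its use as a learning measure concrete, the sketch below illustrates one plausible computation, not necessarily the authors' pipeline: each movement is a matrix of joint-angle samples, PCA extracts its dominant joint coordination, and a novice movement is scored by how closely its coordination subspace aligns with an expert's. The function names, the single-component default, and the synthetic example data are illustrative assumptions.

```python
import numpy as np

def coordination_basis(joint_angles, n_components=1):
    """Dominant joint-coordination directions of one movement.

    joint_angles: (T, J) array of joint-angle samples over time.
    Returns an orthonormal (J, n_components) PCA basis.
    """
    centered = joint_angles - joint_angles.mean(axis=0)
    # Right singular vectors of the centered trajectory are the principal
    # joint-coordination directions (PCA without forming the covariance).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components].T

def coordination_similarity(expert_angles, novice_angles, n_components=1):
    """Subspace similarity (1.0 = identical coordination) between two movements."""
    w_e = coordination_basis(expert_angles, n_components)
    w_n = coordination_basis(novice_angles, n_components)
    # Singular values of W_e^T W_n are cosines of the principal angles between
    # the two coordination subspaces; their mean is 1 when the subspaces match.
    return float(np.mean(np.linalg.svd(w_e.T @ w_n, compute_uv=False)))

# Illustrative use: score a noisy imitation of a synthetic 7-joint expert reach.
rng = np.random.default_rng(0)
expert = np.cumsum(rng.normal(size=(200, 7)), axis=0)
novice = expert + rng.normal(scale=2.0, size=expert.shape)
print(coordination_similarity(expert, novice))
```

Tracked over a training session, a similarity score of this kind could serve as the per-movement measure of convergence toward expert coordination that the abstract describes.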
- Award ID(s): 2019704
- PAR ID: 10425859
- Publisher / Repository: Nature Publishing Group
- Journal Name: Scientific Reports
- Volume: 13
- Issue: 1
- ISSN: 2045-2322
- Sponsoring Org: National Science Foundation
More Like this
Exoskeleton robots are capable of safe torque-controlled interactions with a wearer while moving their limbs through pre-defined trajectories. However, affecting and assisting the wearer's movements while incorporating their inputs (effort and movements) effectively during an interaction remains an open problem due to the complex and variable nature of human motion. In this paper, we present a control algorithm that leverages task-specific movement behaviors to control robot torques during unstructured interactions by implementing a force field that imposes a desired joint angle coordination behavior. This control law, built by using principal component analysis (PCA), is implemented and tested with the Harmony exoskeleton. We show that the proposed control law is versatile enough to allow for the imposition of different coordination behaviors with varying levels of impedance stiffness. We also test the feasibility of our method for unstructured human-robot interaction. Specifically, we demonstrate that participants in a human-subject experiment are able to effectively perform reaching tasks while the exoskeleton imposes the desired joint coordination under different movement speeds and interaction modes. Survey results further suggest that the proposed control law may offer a reduction in cognitive or motor effort. This control law opens up the possibility of using the exoskeleton for training the participant in accomplishing complex m…
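A minimal sketch of what such a PCA-based force field could look like is given below; it is an interpretation of the description above, not the controller actually deployed on Harmony. The postural error is split into a component within the desired coordination subspace, which is left free, and an off-coordination component, which is resisted like a spring. The gain names (`k_stiffness`, `damping`) and the nominal posture `q_ref` are assumptions for illustration.

```python
import numpy as np

def coordination_force_field(q, q_dot, q_ref, coord_basis, k_stiffness=20.0, damping=1.0):
    """Joint torques that impose a desired joint-angle coordination (sketch).

    q, q_dot    : (J,) current joint angles and velocities.
    q_ref       : (J,) nominal posture lying on the coordination subspace.
    coord_basis : (J, k) orthonormal PCA basis of the desired coordination.
    """
    error = q - q_ref
    # Remove the part of the error that lies along the coordination subspace,
    # so motion along the desired coordination is left unconstrained...
    off_coordination = error - coord_basis @ (coord_basis.T @ error)
    # ...and deviation away from it is resisted by a spring plus viscous damping.
    return -k_stiffness * off_coordination - damping * q_dot
```

Raising `k_stiffness` would correspond to the stiffer impedance levels mentioned above, while lowering it leaves the wearer freer to deviate from the imposed coordination.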
When faced with accomplishing a task, human experts exhibit intentional behavior. Their unique intents shape their plans and decisions, resulting in experts demonstrating diverse behaviors to accomplish the same task. Due to the uncertainties encountered in the real world and their bounded rationality, experts sometimes adjust their intents, which in turn influences their behaviors during task execution. This paper introduces IDIL, a novel imitation learning algorithm to mimic these diverse intent-driven behaviors of experts. Iteratively, our approach estimates expert intent from heterogeneous demonstrations and then uses it to learn an intent-aware model of their behavior. Unlike contemporary approaches, IDIL is capable of addressing sequential tasks with high-dimensional state representations, while sidestepping the complexities and drawbacks associated with adversarial training (a mainstay of related techniques). Our empirical results suggest that the models generated by IDIL either match or surpass those produced by recent imitation learning benchmarks in metrics of task performance. Moreover, as it creates a generative model, IDIL demonstrates superior performance in intent inference metrics, crucial for human-agent interactions, and aptly captures a broad spectrum of expert behaviors.
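The iterative structure described above can be caricatured with a small EM-style loop: estimate a soft intent label for each demonstration from the current intent-conditioned policies, then refit one policy per intent from the relabeled data. The sketch below uses linear-Gaussian policies and is far simpler than IDIL itself, which handles sequential tasks with high-dimensional states and learns a generative model; every name in it is illustrative.

```python
import numpy as np

def fit_intent_conditioned_policies(states, actions, n_intents=2, n_iters=20, seed=0):
    """EM-style sketch: infer a discrete intent per demonstration and fit a
    simple linear policy per intent.

    states  : list of (T_i, d_s) arrays, one per demonstration.
    actions : list of (T_i, d_a) arrays, aligned with states.
    Returns per-intent weight matrices and per-demo intent probabilities.
    """
    rng = np.random.default_rng(seed)
    n_demos = len(states)
    resp = rng.dirichlet(np.ones(n_intents), size=n_demos)  # soft intent assignments
    d_s, d_a = states[0].shape[1], actions[0].shape[1]
    weights = [rng.normal(scale=0.1, size=(d_s, d_a)) for _ in range(n_intents)]

    for _ in range(n_iters):
        # M-step: refit each intent's policy by responsibility-weighted least squares.
        for k in range(n_intents):
            X = np.vstack([np.sqrt(resp[i, k]) * states[i] for i in range(n_demos)])
            Y = np.vstack([np.sqrt(resp[i, k]) * actions[i] for i in range(n_demos)])
            weights[k] = np.linalg.lstsq(X, Y, rcond=None)[0]
        # E-step: re-estimate each demonstration's intent from how well each
        # intent-conditioned policy reproduces its actions.
        for i in range(n_demos):
            log_lik = np.array([
                -np.sum((actions[i] - states[i] @ weights[k]) ** 2)
                for k in range(n_intents)
            ])
            resp[i] = np.exp(log_lik - log_lik.max())
            resp[i] /= resp[i].sum()
    return weights, resp
```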
What Happens When Robots Punish? Evaluating Human Task Performance During Robot-Initiated Punishment

This article examines how people respond to robot-administered verbal and physical punishments. Human participants were tasked with sorting colored chips under time pressure and were punished by a robot when they made mistakes, such as inaccurate sorting or sorting too slowly. Participants were either punished verbally, by being told to stop sorting for a fixed time, or physically, by restraining their ability to sort with an in-house crafted robotic exoskeleton. Either a human experimenter or the robot exoskeleton administered punishments, with participant task performance and subjective perceptions of their interaction with the robot recorded. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also tended to comply with robot-administered punishments at a lesser rate than human-administered punishments, which suggests that humans may not afford a robot the social authority to administer punishments. This study also contributes to our understanding of compliance with a robot and whether people accept a robot's authority to punish. The results may influence the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment.
In this paper, we propose a human-automation interaction scheme to improve the task performance of novice human users with different skill levels. The proposed scheme includes two interaction modes: a learn-from-experts mode and an assist-novices mode. In the learn-from-experts mode, the automation learns from a human expert user so that it becomes aware of the task objective. Based on the learned task objective, in the assist-novices mode, the automation customizes its control parameter to assist a novice human user toward emulating the performance of the expert human user. We experimentally test the proposed human-automation scheme in a quadrotor simulation environment, and the results show that the proposed approach is capable of adapting to and assisting the novice human user to achieve performance that emulates the expert human user.
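A toy rendering of the two modes is sketched below: the automation first summarizes expert runs into a reference, then blends a novice's command with a correction toward that reference. The paper's automation instead learns a task objective and customizes a control parameter, which is not reproduced here; the mean-trajectory summary and the blending gain `assist_gain` are illustrative assumptions.

```python
import numpy as np

def learn_expert_reference(expert_trajectories):
    """Learn-from-experts mode (sketch): summarize time-aligned expert runs
    as a mean reference trajectory of shape (T, d)."""
    return np.mean(np.stack(expert_trajectories), axis=0)

def assist_novice(novice_command, state, expert_reference, t, assist_gain=0.5):
    """Assist-novices mode (sketch): blend the novice's command with a
    correction that steers the current state toward the expert reference.

    assist_gain in [0, 1]; larger values defer more to the automation.
    """
    correction = expert_reference[t] - state
    return (1.0 - assist_gain) * novice_command + assist_gain * correction
```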
This paper explores the kinematic synthesis, design, and pilot experimental testing of a six-legged walking robotic platform able to traverse different terrains. We aim to develop a structured approach to designing the limb morphology using a relaxed kinematic task with incorporated conditions on foot-environment interaction, specifically contact force direction and curvature constraints related to maintaining contact. The design approach builds up incrementally, starting with studying the basic human leg walking trajectory and then defining a “relaxed” kinematic task. The “relaxed” kinematic task consists only of two contact locations (toe-off and heel-strike) with higher-order motion task specifications compatible with foot-terrain contact and curvature constraints in the vicinity of the two contacts. As the next step, an eight-bar leg image is created based on the “relaxed” kinematic task and incorporated within a six-legged walking robot. Pilot experimental tests explore whether the proposed approach results in an adaptable behavior that allows the platform to incorporate different walking foot trajectories and gait styles coupled to each environment. The results suggest that the proposed “relaxed” higher-order motion task, combined with the leg morphological properties and feet material, allowed the platform to walk stably on the different terrains. One of the main advantages of the proposed method in comparison with other existing walking platforms is that the limb morphology is carefully designed with incorporated conditions on foot-environment interaction. Additionally, while most existing multilegged platforms incorporate one actuator per leg, or per joint, our goal is to explore the possibility of using a single actuator to drive all six legs of the platform. This is a critical step that opens the door for the development of future transformative technology that is largely independent of human control and able to learn about the environment through its own sensory systems.