Title: Force-Constrained Visual Policy: Safe Robot-Assisted Dressing via Multi-Modal Sensing
Robot-assisted dressing could profoundly enhance the quality of life of adults with physical disabilities. To achieve this, a robot can benefit from both visual and force sensing. The former enables the robot to ascertain human body pose and garment deformations, while the latter helps maintain safety and comfort during the dressing process. In this paper, we introduce a new technique that leverages both vision and force modalities for this assistive task. Our approach first trains a vision-based dressing policy using reinforcement learning in simulation with varying body sizes, poses, and types of garments. We then learn a force dynamics model for action planning to ensure safety; because simulation cannot produce accurate force data when deformable garments interact with the human body, this model is learned directly from real-world data. Our proposed method combines the vision-based policy, trained in simulation, with the force dynamics model, learned in the real world, by solving a constrained optimization problem to infer actions that facilitate the dressing process without applying excessive force on the person. We evaluate our system in simulation and in a real-world human study with 10 participants across 240 dressing trials, showing it greatly outperforms prior baselines.
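A minimal sketch of the action-selection step described above, assuming a pretrained vision policy and a learned force dynamics model are available as callables. The function names, the force threshold F_MAX, and the rejection-sampling strategy are illustrative assumptions, not the paper's implementation:

import numpy as np

F_MAX = 5.0        # assumed force threshold in newtons (illustrative)
ACTION_DIM = 3     # end-effector displacement (dx, dy, dz)

def vision_policy(observation):
    """Placeholder for the simulation-trained vision-based dressing policy."""
    return np.array([0.02, 0.00, 0.01])  # proposed end-effector motion

def force_model(force_history, action):
    """Placeholder for the force dynamics model learned from real-world data.
    Predicts the force magnitude expected after executing `action`."""
    return float(np.linalg.norm(force_history[-1]) + 50.0 * np.linalg.norm(action))

def safe_action(observation, force_history, n_samples=256, noise=0.01):
    """Pick the action closest to the vision policy's proposal whose predicted
    force stays under F_MAX (a rejection-sampling approximation of the
    constrained optimization described in the abstract)."""
    a_pi = vision_policy(observation)
    candidates = a_pi + noise * np.random.randn(n_samples, ACTION_DIM)
    candidates = np.vstack([a_pi, candidates])          # include the unperturbed action
    feasible = [a for a in candidates if force_model(force_history, a) <= F_MAX]
    if not feasible:
        return np.zeros(ACTION_DIM)                     # fall back to stopping the arm
    return min(feasible, key=lambda a: np.linalg.norm(a - a_pi))

if __name__ == "__main__":
    obs = None                                          # stand-in for the visual observation
    forces = [np.array([0.0, 0.0, 1.0])]                # recent force/torque readings
    print(safe_action(obs, forces))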
Award ID(s):
2046491
PAR ID:
10573301
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
arxiv.org
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: For simulation to be an effective tool for the development and testing of autonomous vehicles, the simulator must be able to produce realistic safety-critical scenarios with distribution-level accuracy. However, due to the high dimensionality of real-world driving environments and the rarity of long-tail safety-critical events, achieving statistical realism in simulation is a long-standing problem. In this paper, we develop NeuralNDE, a deep learning-based framework that learns multi-agent interaction behavior from vehicle trajectory data, and propose a conflict critic model and a safety mapping network to refine the generation of safety-critical events so that they follow real-world occurrence frequencies and patterns. The results show that NeuralNDE achieves both accurate safety-critical driving statistics (e.g., crash rate/type/severity and near-miss statistics) and accurate normal driving statistics (e.g., vehicle speed/distance/yielding behavior distributions), as demonstrated in simulations of urban driving environments. To the best of our knowledge, this is the first simulation model shown to reproduce the real-world driving environment with statistical realism, particularly for safety-critical situations.
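As an illustration of the refinement loop this abstract describes, the sketch below post-processes actions sampled from a learned behavior model with a conflict critic and a safety mapping. The car-following state, the gap-based conflict test, and the bounded-braking mapping are simplified assumptions, not NeuralNDE's actual networks:

import numpy as np

def behavior_model(state):
    """Placeholder for the learned multi-agent behavior model: samples an
    acceleration for each vehicle given the current traffic state."""
    return np.random.normal(0.0, 1.0, size=state["speeds"].shape)

def conflict_critic(state, accels, horizon=2.0):
    """Illustrative critic: flag vehicles whose projected gap to the next
    vehicle (array rolled by one) becomes negative within `horizon` seconds
    under the proposed accelerations, i.e., a physically implausible overlap."""
    speeds = state["speeds"] + accels * horizon
    gaps = state["gaps"] - (speeds - np.roll(speeds, -1)) * horizon
    return gaps < 0.0

def safety_mapping(accels, flags, max_brake=-4.0):
    """Illustrative mapping: replace flagged actions with a bounded braking
    action so generated conflicts follow plausible patterns."""
    out = accels.copy()
    out[flags] = max_brake
    return out

state = {"speeds": np.array([12.0, 10.0, 11.0]),   # m/s, vehicles in one lane
         "gaps":   np.array([15.0, 8.0, 20.0])}    # m to the leading vehicle
accels = behavior_model(state)
accels = safety_mapping(accels, conflict_critic(state, accels))
print(accels)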
  2. Ideally, we would place a robot in a real-world environment and leave it there to improve on its own by autonomously gathering more experience. However, algorithms for autonomous robotic learning have been challenging to realize in the real world. While this has often been attributed to sample complexity, even sample-efficient techniques are hampered by two major challenges: the difficulty of providing well-shaped rewards and the difficulty of continual, reset-free training. In this work, we describe a system for real-world reinforcement learning that enables agents to improve continually by training directly in the real world, without painstaking hand-design of reward functions or reset mechanisms. Our system leverages occasional non-expert human-in-the-loop feedback from remote users to learn informative distance functions that guide exploration, while using a simple self-supervised learning algorithm for goal-directed policy learning. We show that in the absence of resets, it is particularly important to account for the current "reachability" of the exploration policy when deciding which regions of the space to explore. Based on this insight, we instantiate a practical learning system, GEAR, which enables robots to simply be placed in real-world environments and left to train autonomously without interruption. The system streams robot experience to a web interface, requiring only occasional asynchronous feedback from remote, crowdsourced, non-expert humans in the form of binary comparative feedback. We evaluate this system on a suite of robotic tasks in simulation and demonstrate its effectiveness at learning behaviors both in simulation and in the real world.
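A minimal sketch of learning a goal-distance function from binary comparative feedback, the core ingredient this abstract describes. The linear feature model, the Bradley-Terry-style loss, and the stand-in for the reachability filter are illustrative assumptions rather than GEAR's implementation:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def features(x):
    """Simple polynomial features so a linear model can express distance-to-goal."""
    return np.array([x, x * x])

def train_distance(pairs, dim=2, lr=0.2, epochs=100):
    """Fit d(s) = w . phi(s) from binary comparisons (phi_a, phi_b, y), where
    y = 1 means the human judged state a closer to the goal than state b
    (a Bradley-Terry-style logistic objective)."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for phi_a, phi_b, y in pairs:
            p = sigmoid(w @ phi_b - w @ phi_a)   # predicted prob. that "a is closer"
            w -= lr * (p - y) * (phi_b - phi_a)
    return w

# Toy 1D task with the true goal at x = 1.0; comparisons are simulated here,
# but would come from remote, non-expert humans in the real system.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 2.0, size=200)
pairs = []
for _ in range(400):
    i, j = rng.integers(0, 200, size=2)
    y = 1.0 if abs(xs[i] - 1.0) < abs(xs[j] - 1.0) else 0.0
    pairs.append((features(xs[i]), features(xs[j]), y))
w = train_distance(pairs)

# Exploration step: among states the current policy is judged able to reach
# (here just a subset, standing in for the reachability estimate), pick the
# one the learned distance function scores as closest to the goal.
reachable = xs[:50]
scores = np.array([w @ features(x) for x in reachable])
print("learned w:", np.round(w, 3), "chosen goal:", reachable[np.argmin(scores)])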
  3. Abstract: In this paper, we study the effects of mechanical compliance on safety in physical human–robot interaction (pHRI). More specifically, we compare the effect of joint compliance and link compliance on the impact force, assuming contact occurs between a robot and a human head. We first establish pHRI system models composed of robot dynamics, an impact contact model, and head dynamics. These models are validated by Simscape simulation. By comparing impact results for a robotic arm with a compliant link (CL) and one with a compliant joint (CJ), we conclude that the CL design produces a smaller maximum impact force given the same lateral stiffness and otherwise identical physical and geometric parameters. Furthermore, we compare the variable stiffness joint (VSJ) with the variable stiffness link (VSL) for various actuation and design parameters. While decreasing the stiffness of the CJ cannot effectively reduce the maximum impact force, varying the link stiffness of the CL is more effective at reducing it. We conclude that the CL design potentially outperforms the CJ design in addressing safety in pHRI and is a promising alternative for meeting its safety constraints.
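The intuition behind the CL result can be illustrated with a lumped-parameter impact model: for an undamped spring-mass collision the peak force is v * sqrt(k_eff * m_eff), so at equal lateral stiffness a design that decouples more distal inertia (smaller effective mass) yields a smaller peak force. The parameter values below are assumed for illustration and are not taken from the paper's Simscape models:

import numpy as np

def peak_impact_force(m_eff, k_lateral, k_contact, v):
    """Peak force of an undamped spring-mass impact: effective mass m_eff hits
    the series combination of the robot's lateral stiffness and the head
    contact stiffness at speed v.  F_peak = v * sqrt(k_eff * m_eff)."""
    k_eff = (k_lateral * k_contact) / (k_lateral + k_contact)
    return v * np.sqrt(k_eff * m_eff)

# Illustrative (assumed) parameters, not the paper's values.
k_lateral = 2.0e4    # N/m, same lateral stiffness for both designs
k_contact = 1.5e5    # N/m, head contact stiffness
v = 1.0              # m/s impact speed

# A compliant joint leaves most of the distal link inertia coupled to the
# contact, while a compliant link decouples part of it (smaller m_eff).
m_eff_cj = 4.0       # kg, reflected mass with a compliant joint (assumed)
m_eff_cl = 1.5       # kg, reflected mass with a compliant link (assumed)

print("CJ peak force: %.0f N" % peak_impact_force(m_eff_cj, k_lateral, k_contact, v))
print("CL peak force: %.0f N" % peak_impact_force(m_eff_cl, k_lateral, k_contact, v))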
  4. Abstract: In this study, a 13-degree-of-freedom (DOF) three-dimensional (3D) human arm model and a 10-DOF 3D robotic arm model are used to validate the grasping force for human-robot lifting motion prediction. The human arm and robotic arm are modeled in Denavit-Hartenberg (DH) representation, and the 3D box is modeled as a floating-base rigid body with 6 global DOFs. The human-box and robot-box interactions are characterized as a collection of grasping forces. The squared joint torques of the human arm and robot arm are minimized subject to physics and task constraints. The design variables include (1) control points of cubic B-splines of the joint angle profiles of the human arm, robotic arm, and box; and (2) the discretized grasping forces during lifting. Both numerical and experimental human-robot lifting trials were performed with a 2 kg box. The simulation reports the human arm's joint angle profiles, joint torque profiles, and grasping force profiles, and comparisons of the joint angle and grasping force profiles between experiment and simulation are presented. The simulated joint angle profiles have trends similar to the experimental data. It is concluded that the human and robot share the load during the lifting process, and the predicted human grasping force matches the measured experimental grasping force reasonably well.
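A reduced, one-link version of the optimization described here, assuming cubic B-spline control points as design variables and squared joint torque as the objective. The parameter values and the single-DOF dynamics are illustrative simplifications of the paper's 13-DOF/10-DOF models:

import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

# One-link planar arm lifting a payload: an intentionally reduced stand-in.
I, m, l, g, m_box, T = 0.5, 2.0, 0.4, 9.81, 2.0, 2.0
t = np.linspace(0.0, T, 60)
degree, n_ctrl = 3, 8
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0.0, T, n_ctrl - degree + 1),
                        [T] * degree])

def torque(ctrl):
    """Inverse dynamics along the cubic B-spline joint-angle profile."""
    spl = BSpline(knots, ctrl, degree)
    th, thdd = spl(t), spl.derivative(2)(t)
    return I * thdd + (m * l / 2 + m_box * l) * g * np.cos(th)

def cost(ctrl):
    """Integral of squared joint torque (minimized subject to task constraints)."""
    return np.sum(torque(ctrl) ** 2) * (t[1] - t[0])

# Task constraints: start at 0 rad, end at pi/2 (box lifted).
cons = [{"type": "eq", "fun": lambda c: c[0] - 0.0},
        {"type": "eq", "fun": lambda c: c[-1] - np.pi / 2}]

res = minimize(cost, x0=np.linspace(0.0, np.pi / 2, n_ctrl),
               constraints=cons, method="SLSQP")
print("optimal control points:", np.round(res.x, 3))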
  5. More than 1 billion people in the world are estimated to experience significant disability. These disabilities can impact people's ability to independently conduct activities of daily living, including ambulating, eating, dressing, taking care of personal hygiene, and more. Mobile and manipulator robots, which can move about human environments and physically interact with objects and people, have the potential to assist people with disabilities in activities of daily living. Although the vision of physically assistive robots has motivated research across subfields of robotics for decades, such robots have only recently become feasible in terms of capabilities, safety, and price. More and more research involves end-to-end robotic systems that interact with people with disabilities in real-world settings. In this article, we survey papers about physically assistive robots intended for people with disabilities from top conferences and journals in robotics, human–computer interaction, and accessible technology to identify general trends and research methodologies. We then dive into three specific research themes, interaction interfaces, levels of autonomy, and adaptation, and present frameworks for how these themes manifest across physically assistive robot research. We conclude with directions for future research.