
Title: Design Methodology for Robotic Manipulator for Overground Physical Interaction Tasks
We present a new design method tailored to physically interactive robotic arms for overground physical interaction tasks. Designing such robotic arms presents unique requirements that differ from those of existing general-purpose manipulators, such as the ability to generate the required forces at every point inside the workspace and to maintain low intrinsic mechanical impedance. Our design method identifies these requirements, categorizes them into kinematic and dynamic characteristics of the robot, and ensures that these considerations are satisfied early in the design phase. The designed robot's capability for such tasks is analyzed through mathematical simulations, and its dynamic characteristics are discussed. With the proposed method, the robot arm is ensured to be capable of performing a variety of overground interactive tasks with a human.
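The force-generation requirement above can be made concrete with a simple feasibility check: at sampled postures, verify that the joint torque limits can realize a required end-effector force in every direction through the relation tau = J^T f. The sketch below, in Python, runs this check for a planar two-link arm; the link lengths, torque limits, and 50 N force requirement are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's method): check whether a planar
# two-link arm can exert a required end-effector force in any direction
# at every sampled posture, given joint torque limits, via tau = J^T f.
import numpy as np

L1, L2 = 0.4, 0.4                  # assumed link lengths [m]
TAU_MAX = np.array([40.0, 25.0])   # assumed joint torque limits [N*m]
F_REQ = 50.0                       # required force magnitude [N]

def jacobian(q1, q2):
    """Position Jacobian of a planar 2R arm."""
    j11 = -L1 * np.sin(q1) - L2 * np.sin(q1 + q2)
    j12 = -L2 * np.sin(q1 + q2)
    j21 = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    j22 = L2 * np.cos(q1 + q2)
    return np.array([[j11, j12], [j21, j22]])

def can_exert_force(q1, q2, n_dirs=36):
    """True if tau = J^T f stays within limits for F_REQ in all directions."""
    J = jacobian(q1, q2)
    for ang in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        f = F_REQ * np.array([np.cos(ang), np.sin(ang)])
        if np.any(np.abs(J.T @ f) > TAU_MAX):
            return False
    return True

# Sample the joint space and report how much of it meets the requirement --
# one possible early-design check on the kinematic/dynamic characteristics.
qs = np.linspace(0.1, np.pi - 0.1, 40)
ok = [can_exert_force(q1, q2) for q1 in qs for q2 in qs]
print(f"force requirement met at {100.0 * np.mean(ok):.1f}% of sampled postures")
```

A full design pass would run such checks on the actual arm kinematics over the intended overground workspace and pair them with an analysis of intrinsic impedance (reflected inertia and friction), which this sketch does not cover.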
Authors:
Award ID(s):
1843892
Publication Date:
NSF-PAR ID:
10207679
Journal Name:
Journal of Mechanisms and Robotics
Volume:
12
Issue:
4
ISSN:
1942-4302
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, an optimization-based dynamic modeling method is used for human-robot lifting motion prediction. The three-dimensional (3D) human arm model has 13 degrees of freedom (DOFs), and the 3D robotic arm (Sawyer robotic arm) has 10 DOFs. Both the human arm and the robotic arm are modeled using the Denavit-Hartenberg (DH) representation. In addition, the 3D box is modeled as a floating-base rigid body with 6 global DOFs. The interactions between the human arm and the box and between the robot and the box are modeled as a set of grasping forces, which are treated as unknowns (design variables) in the optimization formulation. Inverse dynamic optimization is used to simulate the lifting motion, where the sum of the squared joint torques of the human arm is minimized subject to physical and task constraints. The design variables are the control points of the cubic B-splines of the joint angle profiles of the human arm, robotic arm, and box, as well as the box grasping forces at each time point. A numerical example is simulated for human-robot lifting of a 10 kg box. The human and robotic arms' joint angle, joint torque, and grasping force profiles are reported. These optimal outputs can be used as references to control the human-robot collaborative lifting task.
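As a rough, single-joint illustration of the formulation in this abstract, the sketch below parameterizes one joint angle with a clamped cubic B-spline and chooses the interior control points to minimize the discretized integral of squared joint torque, with fixed boundary angles and zero boundary velocity. The 1-DOF dynamics and all parameter values are assumptions for illustration; the paper's formulation covers the full 13-DOF human arm, 10-DOF Sawyer arm, floating-base box, and grasping-force design variables.

```python
# Minimal single-joint sketch of inverse dynamic optimization with a cubic
# B-spline joint angle profile (assumed 1-DOF dynamics, not the paper's
# human-robot-box model).
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

T = 2.0                                       # motion duration [s] (assumed)
INERTIA, MASS, LC, G = 0.5, 3.0, 0.3, 9.81    # assumed link parameters
Q0, QF = 0.0, np.pi / 2                       # boundary joint angles [rad]
K = 3                                         # cubic spline degree
KNOTS = np.concatenate(([0.0] * 4, [T / 3, 2 * T / 3], [T] * 4))  # 6 ctrl pts
TS = np.linspace(0.0, T, 101)                 # time grid

def torque_cost(c_free):
    # Pin the first/last two control points: q(0)=Q0, q(T)=QF, zero end velocity.
    c = np.concatenate(([Q0, Q0], c_free, [QF, QF]))
    spl = BSpline(KNOTS, c, K)
    q, qdd = spl(TS), spl.derivative(2)(TS)
    tau = INERTIA * qdd + MASS * G * LC * np.cos(q)   # 1-DOF inverse dynamics
    return np.sum(tau ** 2) * (TS[1] - TS[0])         # discretized torque-squared cost

res = minimize(torque_cost, x0=np.array([Q0, QF]), method="Nelder-Mead")
print("optimized interior control points:", res.x)
```

In the paper's setting the same pattern scales up: the design vector stacks the B-spline control points of every human, robot, and box DOF together with the grasping forces at each time point, and the physical and task constraints enter as constraints of the nonlinear program.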

  2. Human-centered environments provide affordances for, and require the use of, two-handed, or bimanual, manipulations. Robots designed to function in, and physically interact with, these environments have not been able to meet these requirements because standard bimanual control approaches have not accommodated the diverse, dynamic, and intricate coordination between two arms that bimanual tasks demand. In this work, we enabled robots to perform bimanual tasks more effectively by introducing a bimanual shared-control method. The control method moves the robot's arms to mimic the operator's arm movements but provides on-the-fly assistance to help the user complete tasks more easily. Our method used a bimanual action vocabulary, constructed by analyzing how people perform two-hand manipulations, as the core abstraction level for reasoning about how to assist in bimanual shared autonomy. The method inferred which individual action from the bimanual action vocabulary was occurring using a sequence-to-sequence recurrent neural network architecture and activated a corresponding assistance mode: signals introduced into the shared-control loop designed to make the performance of a particular bimanual action easier or more efficient. We demonstrate the effectiveness of our method through two user studies showing that novice users could control a robot to complete a range of complex manipulation tasks more successfully using our method compared to alternative approaches. We discuss the implications of our findings for real-world robot control scenarios.
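A minimal sketch of the kind of sequence-to-sequence recurrent classifier described above is shown next, assuming PyTorch, a hypothetical seven-action bimanual vocabulary, and 14-dimensional per-frame arm features; the paper's actual architecture, features, and vocabulary are not reproduced here.

```python
# Hypothetical sketch: a GRU maps a stream of two-arm motion features to a
# per-timestep bimanual action label, which would then select an assistance
# mode in the shared-control loop. Sizes and features are assumptions.
import torch
import torch.nn as nn

N_ACTIONS = 7    # assumed size of the bimanual action vocabulary
FEAT_DIM = 14    # assumed per-frame features (e.g., both wrist poses)

class BimanualActionRNN(nn.Module):
    def __init__(self, feat_dim=FEAT_DIM, hidden=64, n_actions=N_ACTIONS):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):            # x: (batch, time, feat_dim)
        h, _ = self.rnn(x)           # (batch, time, hidden)
        return self.head(h)          # per-timestep action logits

model = BimanualActionRNN()
demo = torch.randn(1, 120, FEAT_DIM)           # 120 frames of operator arm motion
action_per_frame = model(demo).argmax(dim=-1)  # inferred action at each frame
print(action_per_frame.shape)                  # torch.Size([1, 120])
```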
  3. Principles from human-human physical interaction may be necessary to design more intuitive and seamless robotic devices to aid human movement. Previous studies have shown that light touch can aid balance and that haptic communication can improve performance of physical tasks, but the effects of touch between two humans on walking balance have not been previously characterized. This study examines physical interaction between two persons when one person aids another in performing a beam-walking task. Twelve pairs of healthy young adults held a force sensor with one hand while one person walked on a narrow balance beam (2 cm wide × 3.7 m long) and the other person walked overground by their side. We compare balance performance during partnered vs. solo beam-walking to examine the effects of haptic interaction, and we compare hand interaction mechanics during partnered beam-walking vs. overground walking to examine how the interaction aided balance. While holding the hand of a partner, participants were able to walk farther on the beam without falling, reduce lateral sway, and decrease angular momentum in the frontal plane. We measured small hand force magnitudes (mean of 2.2 N laterally and 3.4 N vertically) that created opposing torque components about the beam axis and calculated the interaction torque, the overlapping opposing torque that does not contribute to motion of the beam-walker's body. We found higher interaction torque magnitudes during partnered beam-walking vs. partnered overground walking, and a correlation between interaction torque magnitude and reductions in lateral sway. To gain insight into feasible controller designs that emulate human-human physical interaction for aiding walking balance, we modeled the relationship between each torque component and the motion of the beam-walker's body as a mass-spring-damper system. Our model results show opposite types of mechanical elements (active vs. passive) for the two torque components. Our results demonstrate that hand interactions aid balance during partnered beam-walking by creating opposing torques that primarily serve haptic communication, and our model of the torques suggests control parameters for implementing human-human balance aid in human-robot interactions.
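The mass-spring-damper model referenced in this abstract relates each torque component to the beam-walker's motion as tau(t) ≈ m*a(t) + b*v(t) + k*x(t), whose parameters can be estimated by ordinary least squares. The sketch below fits such a model to synthetic sway data; the signals and parameter values are placeholders, not the study's measurements.

```python
# Illustrative least-squares fit of a mass-spring-damper relationship
# tau ~= m*a + b*v + k*x on synthetic sway data (placeholder values,
# not the study's recordings).
import numpy as np

t = np.linspace(0.0, 10.0, 1000)
x = 0.02 * np.sin(2 * np.pi * 0.8 * t)   # synthetic lateral sway [m]
v = np.gradient(x, t)                    # sway velocity
a = np.gradient(v, t)                    # sway acceleration
m_true, b_true, k_true = 2.0, 5.0, 80.0  # assumed "true" parameters
tau = m_true * a + b_true * v + k_true * x + 0.01 * np.random.randn(t.size)

A = np.column_stack([a, v, x])           # regressor matrix [a v x]
(m_hat, b_hat, k_hat), *_ = np.linalg.lstsq(A, tau, rcond=None)
print(f"m = {m_hat:.2f}, b = {b_hat:.2f}, k = {k_hat:.2f}")
```

In such a fit, negative stiffness or damping would indicate an active, energy-injecting element rather than a passive one, which relates to the active vs. passive distinction the authors draw between the two torque components.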
  4. Localizing and tracking the pose of robotic grippers are necessary skills for manipulation tasks. However, manipulators with imprecise kinematic models (e.g., low-cost arms) or unknown world coordinates (e.g., poor camera-arm calibration) cannot locate the gripper with respect to the world. In these circumstances, we can leverage tactile feedback between the gripper and the environment. In this paper, we present learnable Bayes filter models that can localize robotic grippers using tactile feedback. We propose a novel observation model that conditions the tactile feedback on visual maps of the environment, along with a motion model, to recursively estimate the gripper's location. Our models are trained in simulation with self-supervision and transferred to the real world. Our method is evaluated on a tabletop localization task in which the gripper interacts with objects. We report results in simulation and on a real robot, generalizing over different sizes, shapes, and configurations of the objects.
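A toy discrete Bayes (histogram) filter in the spirit of this approach is sketched below: a belief over a one-dimensional tabletop grid is propagated by a motion model and reweighted by a tactile observation model conditioned on a known object map. The hand-written models and map are stand-ins for the learned, self-supervised models described in the paper.

```python
# Toy histogram (discrete Bayes) filter: track a gripper's 1-D cell index
# from commanded moves and binary touch observations against a known map.
# The simple models below replace the paper's learned observation/motion models.
import numpy as np

MAP = np.array([0, 0, 1, 1, 0, 0, 0, 1, 0, 0])   # 1 = object present (assumed)
belief = np.full(MAP.size, 1.0 / MAP.size)        # uniform prior over cells

def motion_update(belief, move):
    """Shift the belief by `move` cells and blur it with a little process noise."""
    shifted = np.roll(belief, move)
    noisy = 0.8 * shifted + 0.1 * np.roll(shifted, 1) + 0.1 * np.roll(shifted, -1)
    return noisy / noisy.sum()

def tactile_update(belief, touched):
    """Reweight cells by how well the touch observation matches the map."""
    p_touch = np.where(MAP == 1, 0.9, 0.1)        # assumed tactile sensor model
    likelihood = p_touch if touched else 1.0 - p_touch
    posterior = belief * likelihood
    return posterior / posterior.sum()

# One filter step: command a move of +1 cell, then feel contact.
belief = motion_update(belief, move=1)
belief = tactile_update(belief, touched=True)
print("most likely cell:", int(np.argmax(belief)))
```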
  5. Rehabilitation of human motor function is an issue of growing significance, and human-interactive robots offer promising potential to meet the need. For the lower extremity, however, robot-aided therapy has proven challenging. To inform effective approaches to robotic gait therapy, it is important to better understand unimpaired locomotor control: its sensitivity to different mechanical contexts and its response to perturbations. The present study evaluated the behavior of 14 healthy subjects who walked on a motorized treadmill and overground while wearing an exoskeletal ankle robot. Their response to a periodic series of ankle plantar flexion torque pulses, delivered at periods different from, but sufficiently close to, their preferred stride cadence, was assessed to determine whether gait entrainment occurred, how it differed across conditions, and whether the adapted motor behavior persisted after the perturbation. Certain aspects of locomotor control were exquisitely sensitive to walking context, while others were not. Gaits entrained more often and more rapidly during overground walking, yet, in all cases, entrained gaits synchronized the torque pulses with ankle push-off, where they provided assistance with propulsion. Furthermore, subjects entrained to perturbation periods that required an adaptation toward slower cadence, even though the pulses acted to accelerate gait, indicating a neural adaptation of locomotor control. Lastly, during 15 post-perturbation strides, the entrained gait period was observed to persist more frequently during overground walking. This persistence was correlated with the number of strides walked at the entrained gait period (i.e., longer exposure), which also indicates a neural adaptation. NEW & NOTEWORTHY: We show that the response of human locomotion to physical interaction differs between treadmill and overground walking. Subjects entrained to a periodic series of ankle plantar flexion torque pulses that shifted their gait cadence, synchronizing ankle push-off with the pulses (so that they assisted propulsion) even when gait cadence slowed. Entrainment was faster overground and, upon removal of the torque pulses, the entrained gait period persisted more prominently overground, indicating a neural adaptation of locomotor control.
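As a loose illustration of the entrainment analysis described above, the sketch below checks, on synthetic stride data, whether the stride period converges to the perturbation period and whether the pulses settle at a consistent phase of the gait cycle (in the study, near ankle push-off). The thresholds and synthetic signals are assumptions, not the study's data or analysis pipeline.

```python
# Hypothetical post-hoc entrainment check on synthetic stride data:
# entrained if the stride period locks to the pulse period and the pulse
# phase within the stride stabilizes. Thresholds and signals are assumptions.
import numpy as np

PULSE_PERIOD = 1.08          # perturbation period [s] (assumed)
np.random.seed(0)
# Synthetic stride periods drifting from the preferred cadence toward the pulses.
stride_periods = (1.00 + 0.08 * (1 - np.exp(-np.arange(60) / 15.0))
                  + 0.005 * np.random.randn(60))

stride_onsets = np.concatenate(([0.0], np.cumsum(stride_periods)))[:-1]
pulse_times = np.arange(0.0, stride_onsets[-1], PULSE_PERIOD)

def pulse_phase(onsets, periods, pulses):
    """Phase (0..1) of each torque pulse within the stride it falls in."""
    idx = np.searchsorted(onsets, pulses, side="right") - 1
    return (pulses - onsets[idx]) / periods[idx]

phases = pulse_phase(stride_onsets, stride_periods, pulse_times)
period_locked = np.abs(stride_periods[-10:] - PULSE_PERIOD).mean() < 0.02
phase_locked = np.std(phases[-10:]) < 0.05
print("entrained:", bool(period_locked and phase_locked))
```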