- Award ID(s):
- 1843892
- NSF-PAR ID:
- 10207679
- Date Published:
- Journal Name:
- Journal of mechanisms and robotics
- Volume:
- 12
- Issue:
- 4
- ISSN:
- 1942-4302
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Ferretti, Gianni (Ed.)
Many anticipated physical human-robot interaction (pHRI) applications in the near future are overground tasks such as walking assistance. For investigating the biomechanics of human movement during pHRI, this work presents Ophrie, a novel interactive robot dedicated to physical interaction tasks with a human in overground settings. Unique pHRI design requirements, such as low output impedance and the ability to apply small interaction forces, were considered in implementing the one-arm mobile robot. The robot can measure human arm stiffness, an important physical quantity that can reveal human biomechanics during overground pHRI, while the human walks alongside it. This robot is anticipated to enable novel pHRI experiments and advance our understanding of intuitive and effective overground pHRI.
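Low output impedance of the kind described above is often realized with admittance control, where the measured interaction force drives a virtual mass-damper whose velocity the robot tracks. The sketch below illustrates that general idea only; the virtual parameters and the robot I/O calls (`read_force`, `send_velocity`) are hypothetical placeholders, not details of Ophrie's controller.

```python
import numpy as np

# Virtual mass-damper parameters (hypothetical values; a small virtual
# mass and damping yield a low apparent output impedance).
M_VIRT = 2.0   # virtual mass [kg]
B_VIRT = 5.0   # virtual damping [N*s/m]
DT = 0.001     # control period [s]

v = np.zeros(3)  # commanded end-effector velocity [m/s]

def admittance_step(f_meas: np.ndarray, v: np.ndarray) -> np.ndarray:
    """One admittance update: integrate M * v_dot + B * v = f."""
    v_dot = (f_meas - B_VIRT * v) / M_VIRT
    return v + v_dot * DT

# Control loop (read_force / send_velocity are placeholder robot calls):
# while True:
#     f = read_force()           # 3-axis force/torque sensor reading [N]
#     v = admittance_step(f, v)
#     send_velocity(v)           # command end-effector velocity
```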
-
Abstract
Humans can physically interact with other humans adeptly. Some overground interaction tasks, such as guiding a partner across a room, occur without visual or verbal communication, which suggests that the information exchange occurs through sensing movements and forces. To understand the process of motor communication during overground physical interaction, we hypothesized that humans modulate the mechanical properties of their arms for increased awareness and sensitivity to the ongoing interaction. To test this, we used an overground interactive robot to guide a human partner across one of three randomly chosen paths while occasionally applying force perturbations to measure arm stiffness. We observed that arm stiffness was lower at instants when the robot's upcoming trajectory was unknown than at instants when it was predictable, the first evidence of arm stiffness modulation for better motor communication during overground physical interaction.
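One standard way to turn such perturbation trials into a stiffness estimate is a least-squares fit of restoring force against displacement. The sketch below assumes synchronized displacement and force samples and a linear spring model; the variable shapes and the toy data are illustrative assumptions, not the paper's exact identification procedure.

```python
import numpy as np

def estimate_stiffness(disp: np.ndarray, force: np.ndarray) -> np.ndarray:
    """Least-squares endpoint stiffness K from F = K @ x.

    disp  : (N, 2) array of planar arm displacements [m]
    force : (N, 2) array of measured restoring forces [N]
    Returns the 2x2 stiffness matrix [N/m].
    """
    # Solve disp @ K.T = force for K in a least-squares sense.
    K_T, *_ = np.linalg.lstsq(disp, force, rcond=None)
    return K_T.T

# Toy check with a known diagonal stiffness plus measurement noise:
rng = np.random.default_rng(0)
x = rng.uniform(-0.01, 0.01, size=(200, 2))       # small perturbations
K_true = np.array([[300.0, 0.0], [0.0, 150.0]])   # N/m
f = x @ K_true.T + rng.normal(0, 0.05, size=x.shape)
print(estimate_stiffness(x, f))                   # approximately K_true
```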
-
Abstract
In this paper, an optimization-based dynamic modeling method is used for human-robot lifting motion prediction. The three-dimensional (3D) human arm model has 13 degrees of freedom (DOFs), and the 3D robotic arm (Sawyer robotic arm) has 10 DOFs. Both arms are modeled using the Denavit-Hartenberg (DH) representation. In addition, the 3D box is modeled as a floating-base rigid body with 6 global DOFs. The interactions between the human arm and the box, and between the robot and the box, are modeled as a set of grasping forces that are treated as unknowns (design variables) in the optimization formulation. Inverse dynamic optimization is used to simulate the lifting motion, where the sum of the squared joint torques of the human arm is minimized subject to physical and task constraints. The design variables are the control points of cubic B-splines of the joint angle profiles of the human arm, robotic arm, and box, and the box grasping forces at each time point. A numerical example is simulated for human-robot lifting of a 10 kg box. The human and robotic arms' joint angle, joint torque, and grasping force profiles are reported. These optimal outputs can be used as references to control the human-robot collaborative lifting task.
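The structure of this formulation, B-spline control points as design variables and a squared-torque objective under boundary constraints, can be illustrated on a toy single-joint lift. The dynamics, inertia values, and boundary conditions below are illustrative stand-ins, not the paper's 13-DOF/10-DOF model.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

T = 2.0                            # motion duration [s]
t = np.linspace(0.0, T, 100)       # time grid
n_ctrl, k = 8, 3                   # control points, cubic degree
# Clamped knot vector for a cubic B-spline with n_ctrl coefficients.
knots = np.concatenate(([0.0] * k, np.linspace(0.0, T, n_ctrl - k + 1), [T] * k))

I, m, g, L = 0.5, 2.0, 9.81, 0.4   # inertia, mass, gravity, link length (toy)

def torque_cost(c: np.ndarray) -> float:
    """Sum of squared joint torques along the spline trajectory."""
    spl = BSpline(knots, c, k)
    q, q_dd = spl(t), spl.derivative(2)(t)
    tau = I * q_dd + m * g * L * np.sin(q)   # simple pendulum dynamics
    return float(np.sum(tau ** 2))

# Boundary constraints: start at q = 0, end at q = pi/2 (lift target).
cons = [
    {"type": "eq", "fun": lambda c: BSpline(knots, c, k)(0.0)},
    {"type": "eq", "fun": lambda c: BSpline(knots, c, k)(T) - np.pi / 2},
]
res = minimize(torque_cost, x0=np.linspace(0, np.pi / 2, n_ctrl),
               constraints=cons, method="SLSQP")
print("optimal control points:", res.x)
```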
-
Hideki Aoyama; Keiichi Shirase (Eds.)
An integral part of information-centric smart manufacturing is the adaptation of industrial robots to complement human workers in a collaborative manner. While advances in sensing have enabled real-time monitoring of the workspace, understanding the semantic information in the workspace, such as parts and tools, remains a challenge for seamless robot integration. The resulting lack of adaptivity in a dynamic workspace has limited robots to tasks with pre-defined actions. In this paper, a machine learning-based robotic object detection and grasping method is developed to improve the adaptivity of robots. Specifically, object detection based on the single-shot detection (SSD) concept and a convolutional neural network (CNN) is investigated to recognize and localize objects in the workspace. Subsequently, the information extracted from object detection, such as the type, position, and orientation of the object, is fed into a multi-layer perceptron (MLP) to generate the desired joint angles of the robotic arm for proper object grasping and handover to the human worker. Network training is guided by the forward kinematics of the robotic arm in a self-supervised manner to mitigate issues such as singularity in computation. The effectiveness of the developed method is validated on an eDo robotic arm in a human-robot collaborative assembly case study.
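The handover stage can be pictured as a small regression network whose training loss is computed through forward kinematics rather than against labeled joint angles. The PyTorch sketch below shows that self-supervised idea on a planar three-link arm; the network sizes, link lengths, and input encoding are assumptions, not the eDo setup from the paper.

```python
import torch
import torch.nn as nn

LINKS = torch.tensor([0.3, 0.25, 0.15])   # planar link lengths [m] (assumed)

def forward_kinematics(q: torch.Tensor) -> torch.Tensor:
    """End-effector (x, y) of a planar arm; q is (batch, 3) joint angles."""
    angles = torch.cumsum(q, dim=1)                  # absolute link angles
    x = (LINKS * torch.cos(angles)).sum(dim=1)
    y = (LINKS * torch.sin(angles)).sum(dim=1)
    return torch.stack([x, y], dim=1)

# MLP: detected object pose (x, y, orientation, one-hot type) -> joint angles.
mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(1000):
    obj = torch.rand(32, 6)              # stand-in object detections
    target_xy = obj[:, :2] * 0.5         # desired grasp points [m]
    q = mlp(obj)
    # Self-supervision: penalize the FK reaching error, so no
    # joint-angle labels (and no explicit inverse kinematics) are needed.
    loss = ((forward_kinematics(q) - target_xy) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```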
-
Localizing and tracking the pose of robotic grippers are necessary skills for manipulation tasks. However, manipulators with imprecise kinematic models (e.g., low-cost arms) or with unknown world coordinates (e.g., poor camera-arm calibration) cannot locate the gripper with respect to the world. In these circumstances, we can leverage tactile feedback between the gripper and the environment. In this paper, we present learnable Bayes filter models that can localize robotic grippers using tactile feedback. We propose a novel observation model that conditions the tactile feedback on visual maps of the environment, along with a motion model, to recursively estimate the gripper's location. Our models are trained in simulation with self-supervision and transferred to the real world. Our method is evaluated on a tabletop localization task in which the gripper interacts with objects. We report results in simulation and on a real robot, generalizing over different sizes, shapes, and configurations of the objects.
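The recursive estimate described here can be sketched as a histogram Bayes filter over a discretized tabletop: a motion model shifts and diffuses the belief with each commanded displacement, and an observation model reweights cells by how well the tactile reading matches the visual map. The grid size, noise levels, and the synthetic likelihood below are illustrative assumptions standing in for the paper's learned models.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

GRID = (50, 50)                               # discretized tabletop (assumed)
belief = np.full(GRID, 1.0 / np.prod(GRID))   # uniform prior over gripper pose

def motion_update(belief, dxy, noise=1.0):
    """Predict step: shift belief by the commanded motion, blur for noise."""
    moved = shift(belief, dxy, order=1, mode="constant")
    return gaussian_filter(moved, sigma=noise)

def observation_update(belief, likelihood):
    """Correct step: reweight by p(tactile | pose, visual map), normalize."""
    post = belief * likelihood
    return post / post.sum()

# One filter iteration. The likelihood map would come from the learned
# observation model conditioning tactile feedback on the visual map;
# here it is a synthetic bump for illustration.
belief = motion_update(belief, dxy=(2, 0))
lik = np.ones(GRID)
lik[20:25, 20:25] = 10.0                      # contact suggests this region
belief = observation_update(belief, lik)
print("MAP estimate cell:", np.unravel_index(belief.argmax(), GRID))
```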