

Title: Precise and Effective Robotic Tool Change Strategy Using Visual Servoing With RGB-D Camera
In modern industrial manufacturing, robotic manipulators are routinely used in assembly, packaging, and material handling operations. During production, end-of-arm tooling must frequently be changed for process flexibility and reuse of robotic resources. In conventional operation, a tool changer is sometimes employed to load and unload end-effectors; however, operators must manually teach the robot the tool changer locations via a teach pendant. During tool change teaching, the operator spends considerable effort and time aligning the master and tool sides of the coupler, adjusting the motion speed of the robotic arm and observing the alignment from different viewpoints. In this paper, a custom robotic system, the NeXus, was programmed to locate and change tools automatically using an RGB-D camera. The NeXus was configured as a multi-robot system for multiple tasks, including assembly, bonding, and 3D printing of sensor arrays, solar cells, and microrobot prototypes. Its industrial robotic arm therefore employs different tools to position grippers, printers, and other types of end-effectors in the workspace. To improve the precision and cycle time of the robotic tool change, we mounted an eye-in-hand RGB-D camera and employed visual servoing to automate the tool change process. We then compared the tool-location teaching time and cycle time of this system against those of six human operators working in manual mode. We conclude that the tool location time in automated mode is, on average, more than two times lower than that of expert human operators.
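To make the visual-servoing step concrete, below is a minimal sketch of a position-based visual servoing (PBVS) control law of the kind an eye-in-hand RGB-D setup enables. The function names, the gain value, and the axis-angle error parameterization are illustrative assumptions, not the paper's actual implementation; the goal pose would come from detecting the tool-changer coupler in the RGB-D data.

```python
import numpy as np

def rotation_to_axis_angle(R: np.ndarray) -> np.ndarray:
    """Convert a 3x3 rotation matrix to an axis-angle vector (theta * unit axis)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis

def pbvs_twist(T_cam_goal: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """One position-based visual servoing step.

    T_cam_goal: 4x4 pose of the desired grasp frame (e.g. the tool-changer
    coupler, estimated from the RGB-D data) expressed in the current camera
    frame. Returns a camera twist [vx, vy, vz, wx, wy, wz] that drives the
    relative pose toward identity, i.e. the camera toward the goal.
    """
    t_err = T_cam_goal[:3, 3]
    w_err = rotation_to_axis_angle(T_cam_goal[:3, :3])
    return gain * np.concatenate([t_err, w_err])

# Example: goal 10 cm ahead of the camera, rotated 5 degrees about z.
c, s = np.cos(np.deg2rad(5.0)), np.sin(np.deg2rad(5.0))
T = np.array([[c,  -s,  0.0, 0.00],
              [s,   c,  0.0, 0.02],
              [0.0, 0.0, 1.0, 0.10],
              [0.0, 0.0, 0.0, 1.0]])
print(pbvs_twist(T))  # feed to the arm's Cartesian velocity controller
```

Iterating this step at the camera frame rate converges the end-effector onto the coupler without any manually taught waypoints, which is what removes the teach-pendant alignment effort described above.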
Award ID(s): 1828355
NSF-PAR ID: 10310575
Journal Name: 45th Mechanisms and Robotics Conference (MR)
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. Human hands and robot hands differ in shape, size, and joint structure, and performing this translation from a single uncalibrated camera is a highly underconstrained problem. Moreover, the retargeted trajectories must effectively execute tasks on a physical robot, which requires them to be temporally smooth and free of self-collisions. Our key insight is that while paired human-robot correspondence data is expensive to collect, the internet contains a massive corpus of rich and diverse human hand videos. We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration. We demonstrate that it enables previously untrained people to teleoperate a robot on various dexterous manipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation system makes robot teaching more accessible and we hope that it can aid robots that learn to act autonomously in the real world. 
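One requirement this abstract calls out is temporal smoothness of the retargeted trajectory. As a hedged illustration only (the paper's actual smoothing scheme is not specified here), a simple exponential moving average over the streamed joint targets already provides that property:

```python
import numpy as np

class JointSmoother:
    """Exponential moving average over streaming joint-angle targets.

    alpha in (0, 1]: higher tracks the raw retargeting output faster,
    lower smooths more aggressively.
    """
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self._state = None

    def __call__(self, q_target: np.ndarray) -> np.ndarray:
        q_target = np.asarray(q_target, dtype=float)
        if self._state is None:
            self._state = q_target
        else:
            self._state = (1 - self.alpha) * self._state + self.alpha * q_target
        return self._state

# Example: noisy per-frame retargeting output for a hypothetical 7-DoF arm.
rng = np.random.default_rng(0)
smoother = JointSmoother(alpha=0.25)
for _ in range(5):
    noisy_q = rng.normal(0.0, 0.05, size=7)
    print(smoother(noisy_q))
```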
  2. Hideki Aoyama; Keiichi Shirase (Eds.)
    An integral part of information-centric smart manufacturing is the adaptation of industrial robots to complement human workers in a collaborative manner. While advances in sensing have enabled real-time monitoring of the workspace, understanding semantic information in the workspace, such as parts and tools, remains a challenge for seamless robot integration. The resulting lack of adaptivity has limited robots to tasks with pre-defined actions in dynamic workspaces. In this paper, a machine learning-based robotic object detection and grasping method is developed to improve the adaptivity of robots. Specifically, object detection based on the concept of single-shot detection (SSD) and convolutional neural networks (CNNs) is investigated to recognize and localize objects in the workspace. Subsequently, the extracted information from object detection, such as the type, position, and orientation of the object, is fed into a multi-layer perceptron (MLP) to generate the desired joint angles of the robotic arm for proper object grasping and handover to the human worker. Network training is guided by the forward kinematics of the robotic arm in a self-supervised manner to mitigate issues such as singularity in computation. The effectiveness of the developed method is validated on an eDo robotic arm in a human-robot collaborative assembly case study.
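The self-supervised, forward-kinematics-guided training described above can be sketched as follows. This is a simplified stand-in, not the paper's network: a planar 2-DoF arm with assumed link lengths replaces the eDo, and the MLP maps only a target position to joint angles (the paper also feeds in object type and orientation). The key idea carries over: the loss compares the forward kinematics of the predicted angles against the target, so no inverse-kinematics labels are needed.

```python
import torch
import torch.nn as nn

# Planar 2-link arm stands in for the eDo; link lengths are assumed values.
L1, L2 = 0.30, 0.25

def forward_kinematics(q: torch.Tensor) -> torch.Tensor:
    """End-effector (x, y) of a planar 2-DoF arm; q has shape (batch, 2)."""
    x = L1 * torch.cos(q[:, 0]) + L2 * torch.cos(q[:, 0] + q[:, 1])
    y = L1 * torch.sin(q[:, 0]) + L2 * torch.sin(q[:, 0] + q[:, 1])
    return torch.stack([x, y], dim=1)

mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(2000):
    # Sample reachable targets by pushing random joint angles through FK.
    q_rand = torch.rand(256, 2) * torch.pi - torch.pi / 2
    target_xy = forward_kinematics(q_rand)
    q_pred = mlp(target_xy)
    # Self-supervised loss: FK of the predicted angles must hit the target,
    # so no inverse-kinematics labels are ever needed.
    loss = ((forward_kinematics(q_pred) - target_xy) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final FK loss: {loss.item():.5f}")
```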
    Industrial robots, as mature and highly efficient equipment, have been applied in various fields, such as vehicle manufacturing, product packaging, painting, welding, and medical surgery. Most industrial robots operate only within their own workspace; in other words, they are floor-mounted at fixed locations. Some industrial robots are wall-mounted on a linear rail, depending on the application. Sometimes, industrial robots are ceiling-mounted on an X-Y gantry to perform upside-down manipulation tasks. The main objective of this paper is to describe the NeXus, a custom robotic system that has been designed for precision microsystem integration tasks with such a gantry. The system tasks include assembly, bonding, and 3D printing of sensor arrays, solar cells, and microrobotic prototypes. The NeXus consists of a custom-designed frame providing structural rigidity, a large overhead X-Y gantry carrying a 6-degree-of-freedom industrial robot, and several other precision positioners and processes. We focus here on the design and precision evaluation of the overhead ceiling-mounted industrial robot of the NeXus and its supporting frame. We first simulated the behavior of the frame using finite element analysis (FEA), then experimentally evaluated the pose repeatability of the robot end-effector using three different types of sensors. Results verify that the performance objectives of the design are achieved.
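The abstract does not state which repeatability metric was used; a common choice for industrial arms is the ISO 9283 positional repeatability, sketched below as an assumption for illustration.

```python
import numpy as np

def positional_repeatability(points: np.ndarray) -> float:
    """ISO 9283-style positional repeatability: RP = l_bar + 3 * s_l,
    where l is the distance of each measured point from the cluster mean.

    points: (n, 3) end-effector positions from repeated moves to one pose.
    """
    centroid = points.mean(axis=0)
    l = np.linalg.norm(points - centroid, axis=1)
    return l.mean() + 3.0 * l.std(ddof=1)

# Example: 30 simulated returns to the same commanded pose (units: mm).
rng = np.random.default_rng(1)
measured = rng.normal(loc=[500.0, 200.0, 300.0], scale=0.02, size=(30, 3))
print(f"RP = {positional_repeatability(measured):.4f} mm")
```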
    For a wearable robotic arm to autonomously assist a human, it has to be able to stabilize its end-effector in light of the human's independent activities. This paper presents a method for stabilizing the end-effector in planar assembly and pick-and-place tasks. Ideally, given accurate positioning of the end-effector and the wearable robot attachment point, human disturbances could be compensated for by using a simple feedback control strategy. Realistically, system delays in both sensing and actuation suggest a predictive approach. In this work, we characterize the actuators of a wearable robotic arm and estimate these delays using linear models. We then model the motion of the human arm as an autoregressive process to predict the deviation in the robot's base position at a time horizon equivalent to the estimated delay. Generating set points for the end-effector using this predictive model, we report position error reductions of 19.4% (x) and 20.1% (y) compared to a feedback control strategy without prediction.
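A minimal sketch of the autoregressive prediction step follows, assuming a scalar base-position signal, a least-squares AR fit, and iterated one-step predictions out to the estimated delay horizon; the model order, horizon, and example signal are illustrative, not the paper's values.

```python
import numpy as np

def fit_ar(x: np.ndarray, order: int) -> np.ndarray:
    """Least-squares fit of AR coefficients: x[t] ~ sum_i a_i * x[t-order+i]."""
    # Column j holds x shifted by lag (order - j); rows are training samples.
    X = np.stack([x[i:len(x) - order + i] for i in range(order)], axis=1)
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_ahead(x: np.ndarray, coeffs: np.ndarray, horizon: int) -> list:
    """Iterate one-step AR predictions 'horizon' steps into the future."""
    order = len(coeffs)
    hist = list(x[-order:])
    for _ in range(horizon):
        hist.append(float(np.dot(coeffs, hist[-order:])))
    return hist[order:]

# Example: predict arm-base deviation 5 samples ahead (~ the estimated delay).
t = np.linspace(0.0, 10.0, 500)
signal = 0.02 * np.sin(2 * np.pi * 0.8 * t)  # wearer's sway, in metres
coeffs = fit_ar(signal[:400], order=10)
future = predict_ahead(signal[:400], coeffs, horizon=5)
print(f"predicted deviation at +5 samples: {future[-1]:.5f} m")
```

The predicted deviation then offsets the end-effector set point, so the feedback loop corrects where the base will be when the command takes effect rather than where it was when sensed.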
    The objective of this research is to evaluate vision-based pose estimation methods for on-site construction robots. The prospect of human-robot collaborative work on construction sites introduces new workplace hazards that must be mitigated to ensure safety. Human workers performing tasks alongside construction robots must perceive the interaction to be safe, which is essential for team identification and trust. Detecting the robot pose in real time is thus a key requirement, both to inform the workers and to enable autonomous operation. Vision-based (marker-less, marker-based) and sensor-based (IMU, UWB) approaches are two of the main methods for estimating robot pose. The marker-based and sensor-based methods require additional pre-installed sensors or markers, whereas the marker-less method requires only an on-site camera system, which is common on modern construction sites. In this research, we develop a marker-less pose estimation system based on a convolutional neural network (CNN) human pose estimation algorithm: stacked hourglass networks. The system is trained with image data collected from a factory setup environment and labels of excavator pose. We use a KUKA robot arm with a bucket mounted on the end-effector to represent a robotic excavator in our experiment. We evaluate the marker-less method and compare the results with the robot's ground-truth pose. The preliminary results show that the marker-less method is capable of estimating the pose of the excavator based on a state-of-the-art human pose estimation algorithm.
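Stacked hourglass networks output one heatmap per joint, and the pose is read off by locating each heatmap's peak. A minimal sketch of that decoding step is below; the heatmap resolution and output stride are assumptions for illustration.

```python
import numpy as np

def decode_keypoints(heatmaps: np.ndarray, stride: int = 4):
    """Recover joint image coordinates from per-joint heatmaps, as produced
    by a stacked-hourglass network.

    heatmaps: (num_joints, H, W) array of network outputs.
    Returns (num_joints, 2) pixel coordinates and per-joint confidences.
    """
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_joints, -1)
    idx = flat.argmax(axis=1)               # peak location per joint
    ys, xs = np.unravel_index(idx, (h, w))
    coords = np.stack([xs, ys], axis=1) * stride  # back to input resolution
    conf = flat.max(axis=1)                 # peak value as confidence
    return coords, conf

# Example with a dummy heatmap peaked at one excavator joint.
hm = np.zeros((1, 64, 64))
hm[0, 20, 33] = 1.0
print(decode_keypoints(hm))  # ([[132, 80]], [1.0])
```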