Title: Precise and Effective Robotic Tool Change Strategy Using Visual Servoing With RGB-D Camera
In modern industrial manufacturing processes, robotic manipulators are routinely used in assembly, packaging, and material handling operations. During production, changing the end-of-arm tooling is frequently necessary for process flexibility and reuse of robotic resources. In conventional operation, a tool changer is sometimes employed to load and unload end-effectors; however, the robot must be manually taught to locate the tool changers by operators via a teach pendant. During tool change teaching, the operator spends considerable effort and time aligning the master and tool sides of the coupler, adjusting the motion speed of the robotic arm and observing the alignment from different viewpoints. In this paper, a custom robotic system, the NeXus, was programmed to locate and change tools automatically via an RGB-D camera. The NeXus was configured as a multi-robot system for multiple tasks, including assembly, bonding, and 3D printing of sensor arrays, solar cells, and microrobot prototypes. Thus, different tools are employed by an industrial robotic arm to position grippers, printers, and other types of end-effectors in the workspace. To improve the precision and cycle time of the robotic tool change, we mounted an eye-in-hand RGB-D camera and employed visual servoing to automate the tool change process. We then measured the tool location teaching time of this system and compared its cycle time with those of six human operators in manual mode. We concluded that the tool location time in automated mode was, on average, more than two times lower than that of the expert human operators.
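The abstract does not specify the control law used; as context, classical image-based visual servoing (IBVS) with an eye-in-hand camera drives the arm so that observed image features converge to their desired locations, with the RGB-D depth supplying the feature depth Z that the interaction matrix needs. A minimal sketch (not necessarily the authors' exact formulation; feature coordinates are assumed normalized):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image
    point (x, y) at depth Z, relating the feature's image velocity to
    the camera spatial velocity [vx, vy, vz, wx, wy, wz]."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """One IBVS step: stack the interaction matrices of all features
    and return the camera velocity command -gain * pinv(L) @ e that
    exponentially reduces the feature error e = s - s*."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features, dtype=float)
         - np.asarray(desired, dtype=float)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

With four coplanar features (e.g. corners of a fiducial on the tool-side coupler) the stacked matrix is 8x6 and the pseudo-inverse yields a well-conditioned 6-DOF velocity command; when the features reach their desired positions the commanded velocity goes to zero.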
Award ID(s):
1828355
PAR ID:
10310575
Author(s) / Creator(s):
Date Published:
Journal Name:
45th Mechanisms and Robotics Conference (MR)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Industrial robots, as mature and highly efficient equipment, have been applied to various fields, such as vehicle manufacturing, product packaging, painting, welding, and medical surgery. Most industrial robots operate only in their own workspace; in other words, they are floor-mounted at fixed locations. Some industrial robots are wall-mounted on a linear rail, depending on the application. Sometimes, industrial robots are ceiling-mounted on an X-Y gantry to perform upside-down manipulation tasks. The main objective of this paper is to describe the NeXus, a custom robotic system that has been designed for precision microsystem integration tasks with such a gantry. The system tasks include assembly, bonding, and 3D printing of sensor arrays, solar cells, and microrobotic prototypes. The NeXus consists of a custom-designed frame providing structural rigidity, a large overhead X-Y gantry carrying a 6-degrees-of-freedom industrial robot, and several other precision positioners and processes. We focus here on the design and precision evaluation of the overhead ceiling-mounted industrial robot of the NeXus and its supporting frame. We first simulated the behavior of the frame using finite element analysis (FEA), then experimentally evaluated the pose repeatability of the robot end-effector using three different types of sensors. The results verify that the performance objectives of the design are achieved.
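The abstract does not state how repeatability was summarized; a common convention (following ISO 9283) reduces repeated end-effector positions to the spread of their distances from the barycenter. A minimal sketch under that assumption:

```python
import numpy as np

def pose_repeatability(positions):
    """ISO 9283-style positional repeatability: distances of repeated
    end-effector positions from their barycenter, summarized as
    mean distance plus three sample standard deviations."""
    P = np.asarray(positions, dtype=float)
    centroid = P.mean(axis=0)
    d = np.linalg.norm(P - centroid, axis=1)  # radial deviations
    return d.mean() + 3.0 * d.std(ddof=1)
```

Repeated visits to the same taught pose, measured by an external sensor, would be fed in as rows of `positions`; a perfectly repeatable robot yields zero.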
  2. Ishigami, G; Yoshida, K (Ed.)
    The ability to build structures with autonomous robots using only found, minimally processed stones would be immensely useful, especially in remote areas. Assembly planning for dry-stacked structures, however, is difficult since both the state and action spaces are continuous, and stability is strongly affected by complex friction and contact constraints. We propose a planning algorithm for such assemblies that uses a physics simulator to find a small set of feasible poses and then evaluates them using a hierarchical filter. We carefully designed the heuristics for the filters to match our goal of building stable, free-standing walls. These plans are then executed open-loop with a robotic arm equipped with a wrist RGB-D camera. Experimental results show that the proposed planning algorithm can significantly improve the state of the art in robotic dry stacking. 
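The paper's hierarchical filter is described only at a high level; structurally, it amounts to passing candidate poses through a sequence of increasingly expensive pass/fail heuristics. A generic sketch of that pattern (the specific stability heuristics are hypothetical, not the authors'):

```python
def hierarchical_filter(candidates, stages):
    """Apply a sequence of pose filters, cheapest first: each stage is
    a predicate, and only candidates passing every earlier stage reach
    the later (costlier) ones, e.g. a full physics-stability check."""
    survivors = list(candidates)
    for passes in stages:
        survivors = [c for c in survivors if passes(c)]
        if not survivors:
            break  # no feasible pose; caller may resample
    return survivors
```

In the dry-stacking setting, early stages might cheaply reject poses by contact area or tilt, leaving the simulator-based stability check for the few survivors.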
  3. We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. Human hands and robot hands differ in shape, size, and joint structure, and performing this translation from a single uncalibrated camera is a highly underconstrained problem. Moreover, the retargeted trajectories must effectively execute tasks on a physical robot, which requires them to be temporally smooth and free of self-collisions. Our key insight is that while paired human-robot correspondence data is expensive to collect, the internet contains a massive corpus of rich and diverse human hand videos. We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration. We demonstrate that it enables previously untrained people to teleoperate a robot on various dexterous manipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation system makes robot teaching more accessible and we hope that it can aid robots that learn to act autonomously in the real world. 
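The abstract requires retargeted trajectories to be temporally smooth; one simple way to damp per-frame jitter from a vision pipeline is exponential smoothing of the joint-angle stream. A minimal sketch (an illustration of the smoothness requirement, not the authors' retargeting method):

```python
import numpy as np

def smooth_retargeted(joint_traj, alpha=0.3):
    """Exponential smoothing of a retargeted joint-angle stream: each
    output frame blends the newly estimated target (weight alpha) with
    the previous smoothed frame, damping high-frequency vision noise."""
    out = [np.asarray(joint_traj[0], dtype=float)]
    for q in joint_traj[1:]:
        out.append(alpha * np.asarray(q, dtype=float)
                   + (1.0 - alpha) * out[-1])
    return np.stack(out)
```

Lower `alpha` gives smoother but laggier motion; a real system would also enforce joint limits and self-collision checks on the smoothed output.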
  4. We investigate how robotic camera systems can offer new capabilities to computer-supported cooperative work through the design, development, and evaluation of a prototype system called Periscope. With Periscope, a local worker completes manipulation tasks with guidance from a remote helper who observes the workspace through a camera mounted on a semi-autonomous robotic arm that is co-located with the worker. Our key insight is that the helper, the worker, and the robot should all share responsibility for the camera view, an approach we call shared camera control. Using this approach, we present a set of modes that distribute control of the camera between the human collaborators and the autonomous robot depending on task needs. We demonstrate the system's utility and the promise of shared camera control through a preliminary study in which 12 dyads collaboratively worked on assembly tasks. Finally, we discuss design and research implications of our work for future robotic camera systems that facilitate remote collaboration. 
  5.
    Localizing and tracking the pose of robotic grippers are necessary skills for manipulation tasks. However, manipulators with imprecise kinematic models (e.g., low-cost arms) or unknown world coordinates (e.g., due to poor camera-arm calibration) cannot locate the gripper with respect to the world. In these circumstances, we can leverage tactile feedback between the gripper and the environment. In this paper, we present learnable Bayes filter models that can localize robotic grippers using tactile feedback. We propose a novel observation model that conditions the tactile feedback on visual maps of the environment, along with a motion model, to recursively estimate the gripper's location. Our models are trained in simulation with self-supervision and transferred to the real world. Our method is evaluated on a tabletop localization task in which the gripper interacts with objects. We report results in simulation and on a real robot, generalizing over different sizes, shapes, and configurations of the objects. 
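The recursive structure behind such models is the standard Bayes filter: predict with the motion model, then reweight by the observation likelihood. A minimal histogram-filter sketch over a 1-D grid (the paper's models are learned; this only illustrates the recursion, and the cyclic-grid motion model is a simplifying assumption):

```python
import numpy as np

def bayes_filter_step(belief, motion, obs_likelihood):
    """One predict/update cycle of a discrete (histogram) Bayes filter:
    shift the belief by the commanded motion (cyclic grid for
    simplicity), multiply by the observation likelihood of each cell
    (e.g. from a tactile reading), and renormalize."""
    predicted = np.roll(belief, motion)        # motion model (predict)
    posterior = predicted * obs_likelihood     # observation update
    return posterior / posterior.sum()
```

In the paper's setting, `obs_likelihood` would come from a learned model conditioning tactile feedback on a visual map; iterating the step concentrates the belief on the gripper's true cell.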