

Search for: All records

Award ID contains: 1763469


  1. For mobile robots, mobile manipulators, and autonomous vehicles to safely navigate around populated places such as streets and warehouses, human observers must be able to understand their navigation intent. One way to enable such understanding is to visualize this intent through projections onto the surrounding environment. But despite the demonstrated effectiveness of such projections, no open codebase with an integrated hardware setup exists. In this work, we detail the empirical evidence for the effectiveness of these directional projections and share a robot-agnostic implementation, coded in C++ using the widely used Robot Operating System (ROS) and rviz. Additionally, we demonstrate a hardware configuration for deploying this software on a Fetch robot, and briefly summarize a full-scale user study that motivates this configuration. The code, configuration files (roslaunch and rviz files), and documentation are freely available on GitHub at https://github.com/umhan35/arrow_projection. (An illustrative marker-publishing sketch appears after this list.)
  2. Human–robot collaboration is becoming increasingly common in factories around the world; accordingly, we need to improve the interaction experiences between humans and robots working in these spaces. In this article, we report on a user study that investigated methods by which a robot can signal its intent to move to a person sharing its workspace; in this case, the workspace was the surface of a tabletop. Our study tested the effectiveness of three motion-based and three light-based intent signals, as well as the overall level of comfort participants felt while working with the robot to sort colored blocks on the tabletop. Although the differences were not statistically significant, our findings suggest that the light signal closest to the workspace (an LED bracelet near the robot's end effector) was the most noticeable and least confusing to participants. These findings can be leveraged to support human–robot collaborations in shared spaces.
  3. Assistive robot manipulators must be able to autonomously pick and place a wide range of novel objects to be truly useful. However, current assistive robots lack this capability. Additionally, assistive systems need an interface that is easy to learn, to use, and to understand. This paper takes a step in this direction. We present a robot system composed of a robotic arm and a mobility scooter that provides both pick-and-drop and pick-and-place functionality for open-world environments without modeling the objects or environment. The system uses a laser pointer to directly select an object in the world, with feedback given to the user by projecting an interface into the world. Our evaluation over several experimental scenarios shows a significant improvement in both runtime and grasp success rate relative to a baseline from the literature, and furthermore demonstrates accurate pick-and-place capabilities for tabletop scenarios. (A hypothetical laser-dot-detection sketch appears after this list.)
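The repository linked in item 1 contains the authors' actual implementation; the sketch below is not that code. It is only a minimal illustration, under assumed names and values, of how a roscpp node can publish an ARROW marker for rviz to render, which an external projector could then cast onto the floor around the robot. The topic name, coordinate frame, and waypoint coordinates are placeholders, not taken from the repository.

    // Minimal roscpp sketch (not the arrow_projection code): publish one
    // ARROW marker that rviz can render as a navigation-intent cue.
    #include <ros/ros.h>
    #include <visualization_msgs/Marker.h>
    #include <geometry_msgs/Point.h>

    int main(int argc, char** argv) {
      ros::init(argc, argv, "intent_arrow_publisher");
      ros::NodeHandle nh;
      // Topic name is an assumption; an rviz Marker display would subscribe to it.
      ros::Publisher pub = nh.advertise<visualization_msgs::Marker>("intent_arrow", 1);

      ros::Rate rate(10);
      while (ros::ok()) {
        visualization_msgs::Marker arrow;
        arrow.header.frame_id = "base_link";   // assumed robot-centric frame
        arrow.header.stamp = ros::Time::now();
        arrow.ns = "navigation_intent";
        arrow.id = 0;
        arrow.type = visualization_msgs::Marker::ARROW;
        arrow.action = visualization_msgs::Marker::ADD;
        arrow.pose.orientation.w = 1.0;        // identity orientation

        // Arrow from the robot's base toward a placeholder next waypoint.
        geometry_msgs::Point start, end;
        start.x = 0.0; start.y = 0.0; start.z = 0.0;
        end.x   = 1.0; end.y   = 0.0; end.z   = 0.0;
        arrow.points.push_back(start);
        arrow.points.push_back(end);

        arrow.scale.x = 0.05;  // shaft diameter (m)
        arrow.scale.y = 0.10;  // head diameter (m)
        arrow.scale.z = 0.10;  // head length (m)
        arrow.color.g = 1.0f;  // opaque green
        arrow.color.a = 1.0f;

        pub.publish(arrow);
        rate.sleep();
      }
      return 0;
    }

The repository's roslaunch and rviz files document the authors' actual configuration for rendering and projecting such markers.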
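Item 3 does not describe its perception pipeline, so the following is purely a hypothetical illustration of one ingredient such a system might use: locating a red laser dot in a camera frame with OpenCV. The function name, HSV thresholds, and camera index are all assumptions and are not drawn from the paper.

    // Hypothetical laser-dot detector (not from the paper): finds bright
    // red pixels in a camera frame and returns their centroid.
    #include <opencv2/opencv.hpp>
    #include <iostream>

    // Returns true and fills `dot` if a sufficiently bright red spot is found.
    bool findLaserDot(const cv::Mat& bgr, cv::Point2f& dot) {
      cv::Mat hsv;
      cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

      // Red wraps around the hue axis in HSV, so combine two ranges.
      cv::Mat lower, upper, mask;
      cv::inRange(hsv, cv::Scalar(0, 80, 200), cv::Scalar(10, 255, 255), lower);
      cv::inRange(hsv, cv::Scalar(170, 80, 200), cv::Scalar(180, 255, 255), upper);
      mask = lower | upper;

      // Centroid of the thresholded pixels via image moments.
      cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
      if (m.m00 < 5.0) return false;  // too few pixels: no confident detection
      dot = cv::Point2f(static_cast<float>(m.m10 / m.m00),
                        static_cast<float>(m.m01 / m.m00));
      return true;
    }

    int main() {
      cv::VideoCapture cap(0);          // camera index is an assumption
      if (!cap.isOpened()) return 1;
      cv::Mat frame;
      while (cap.read(frame)) {
        cv::Point2f dot;
        if (findLaserDot(frame, dot)) {
          std::cout << "laser dot at " << dot.x << ", " << dot.y << std::endl;
        }
      }
      return 0;
    }

In a complete system, the detected pixel coordinates would still need to be mapped to a 3D grasp target, which this sketch does not attempt.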