Title: Periscope: A Robotic Camera System to Support Remote Physical Collaboration

We investigate how robotic camera systems can offer new capabilities to computer-supported cooperative work through the design, development, and evaluation of a prototype system called Periscope. With Periscope, a local worker completes manipulation tasks with guidance from a remote helper who observes the workspace through a camera mounted on a semi-autonomous robotic arm that is co-located with the worker. Our key insight is that the helper, the worker, and the robot should all share responsibility for the camera view, an approach we call shared camera control. Using this approach, we present a set of modes that distribute control of the camera between the human collaborators and the autonomous robot depending on task needs. We demonstrate the system's utility and the promise of shared camera control through a preliminary study in which 12 dyads collaboratively worked on assembly tasks. Finally, we discuss the design and research implications of our work for future robotic camera systems that facilitate remote collaboration.
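
As a rough illustration only, shared camera control can be pictured as arbitration among camera targets proposed by the helper, the worker, and an autonomous policy. The Python sketch below shows that idea under stated assumptions; the mode names, Pose fields, and arbitration logic are hypothetical and not taken from the paper.

    from dataclasses import dataclass
    from enum import Enum, auto

    class CameraMode(Enum):
        # Hypothetical mode names, not the paper's actual mode set.
        HELPER_LED = auto()   # remote helper steers the camera directly
        WORKER_LED = auto()   # camera follows the local worker's activity
        AUTONOMOUS = auto()   # robot frames the task on its own

    @dataclass
    class Pose:
        x: float
        y: float
        z: float

    def resolve_view(mode, helper_target, worker_target, robot_target):
        """Pick the camera target (a Pose) for the active control mode."""
        if mode is CameraMode.HELPER_LED:
            return helper_target
        if mode is CameraMode.WORKER_LED:
            return worker_target
        return robot_target  # AUTONOMOUS: robot's own framing proposal
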

 
Award ID(s):
1830242
NSF-PAR ID:
10476408
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
ACM CSCW
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
7
Issue:
CSCW2
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 39
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we introduce a novel method to support remote telemanipulation tasks in complex environments by providing operators with an enhanced view of the task environment. Our method features a novel viewpoint adjustment algorithm designed to automatically mitigate occlusions caused by workspace geometry, supports visual exploration to provide operators with situation awareness in the remote environment, and mediates context-specific visual challenges by making viewpoint adjustments based on sparse input from the user. Our method builds on the dynamic camera telemanipulation viewing paradigm, in which a user controls a manipulation robot while a camera-in-hand robot alongside it servos to provide a sufficient view of the remote environment. We discuss the real-time motion optimization formulation used to arbitrate the various objectives in our shared-control-based method, particularly highlighting how our occlusion avoidance and viewpoint adaptation approaches fit within this framework. We present results from an empirical evaluation of our proposed occlusion avoidance approach as well as a user study that compares our telemanipulation shared-control method against alternative telemanipulation approaches. We discuss the implications of our work for future shared-control research and robotics applications.
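
    To make the arbitration idea concrete, here is a minimal Python sketch of a weighted-sum cost over candidate camera configurations, with occlusion, viewpoint-error, and smoothness terms. The cost terms, weights, and function names are illustrative stand-ins, not the paper's actual real-time optimization formulation.

        import numpy as np

        def viewpoint_cost(q, occlusion, view_error, smoothness,
                           w_occ=10.0, w_view=1.0, w_smooth=0.1):
            """Weighted-sum cost for a candidate camera configuration q."""
            return (w_occ * occlusion(q)          # penalize geometry blocking the view
                    + w_view * view_error(q)      # deviation from the user's sparse input
                    + w_smooth * smoothness(q))   # discourage jerky camera motion

        def best_viewpoint(candidates, occlusion, view_error, smoothness):
            """Greedy arbitration over sampled candidate configurations."""
            costs = [viewpoint_cost(q, occlusion, view_error, smoothness)
                     for q in candidates]
            return candidates[int(np.argmin(costs))]
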
  2. Ishigami G., Yoshida K. (Ed.)
    This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments. Robotic tele-operation in remote environments is difficult due to a lack of sufficient situational awareness, caused mostly by the stationary, limited field of view and the lack of depth perception of the robot's onboard camera. The emerging state of the practice is to use two robots, a primary and a secondary that acts as a visual assistant, to overcome the perceptual limitations of the onboard sensors by providing an external viewpoint. However, a tele-operated visual assistant brings its own problems: extra manpower, manually chosen suboptimal viewpoints, and extra teamwork demand between the primary and secondary operators. In this work, we use an autonomous tethered aerial visual assistant to replace the secondary robot and operator, reducing the human-robot ratio from 2:2 to 1:2. This visual assistant autonomously navigates through unstructured or confined spaces in a risk-aware manner while continuously maintaining good viewpoint quality to increase the primary operator's situational awareness. With the proposed co-robot team, tele-operation missions in nuclear operations, bomb squads, disaster response, and other domains with novel tasks or highly occluded environments could benefit from reduced manpower and teamwork demand, along with improved visual assistance quality based on trustworthy risk-aware motion in cluttered environments.
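
    The trade-off between viewpoint quality and risk can be sketched as a scalar utility, as in the minimal Python below; the scoring functions are hypothetical placeholders, not the paper's actual risk-aware planner.

        def select_vantage(candidates, view_quality, collision_risk, risk_weight=5.0):
            """Pick the vantage point that best trades view quality against risk."""
            def utility(p):
                # Higher view quality is better; risky positions are penalized.
                return view_quality(p) - risk_weight * collision_risk(p)
            return max(candidates, key=utility)
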
  3. Telepresence technology enables users to be virtually present in another location in real time through video streaming. This interaction is further enhanced by mobility: driving the platform remotely yields what is called a telepresence robot. These machines connect individuals with restricted mobility and increase social interaction, collaboration, and active participation. However, operating and navigating these robots is challenging for users who have little knowledge of the remote environment and no map of it. Avoiding obstacles through a narrow camera view under manual remote operation is a cumbersome task. Moreover, users lack a sense of immersion while they are busy maneuvering via the real-time video feed, which decreases their capacity to handle other tasks. This demo presents a virtual reality telepresence robot with simultaneous mapping and autonomous driving. Leveraging a 2D lidar sensor, we generate two-dimensional occupancy grid maps via SLAM and provide assisted navigation that reduces the onerous task of avoiding obstacles. The attitude of the camera-equipped robotic head is remotely controlled via the virtual reality headset. Remote users gain a visceral understanding of the environment while teleoperating the robot.
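
    For a flavor of the mapping step, the following is a minimal Python sketch of fusing one 2D lidar scan into a log-odds occupancy grid; the grid size, resolution, and log-odds increments are assumed values, not the demo's actual parameters.

        import math
        import numpy as np

        L_FREE, L_OCC = -0.4, 0.85  # assumed log-odds increments

        def bresenham(x0, y0, x1, y1):
            """Integer grid line from (x0, y0) to (x1, y1), endpoints inclusive."""
            cells = []
            dx, dy = abs(x1 - x0), -abs(y1 - y0)
            sx = 1 if x0 < x1 else -1
            sy = 1 if y0 < y1 else -1
            err = dx + dy
            while True:
                cells.append((x0, y0))
                if x0 == x1 and y0 == y1:
                    return cells
                e2 = 2 * err
                if e2 >= dy:
                    err += dy
                    x0 += sx
                if e2 <= dx:
                    err += dx
                    y0 += sy

        def update_grid(grid, pose, scan, origin=(200, 200), resolution=0.05):
            """Fuse one scan of (bearing, range) pairs into a log-odds grid."""
            x0, y0, theta = pose
            cx = origin[0] + int(x0 / resolution)
            cy = origin[1] + int(y0 / resolution)
            for bearing, rng in scan:
                gx = origin[0] + int((x0 + rng * math.cos(theta + bearing)) / resolution)
                gy = origin[1] + int((y0 + rng * math.sin(theta + bearing)) / resolution)
                for fx, fy in bresenham(cx, cy, gx, gy)[:-1]:
                    grid[fy, fx] += L_FREE   # cells the beam passed through: freer
                grid[gy, gx] += L_OCC        # beam endpoint: likely an obstacle

        grid = np.zeros((400, 400))  # e.g., a 20 m x 20 m map at 5 cm resolution
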
  4. Robotic systems typically follow a rigid approach to task execution, performing the necessary steps in a specific order but failing when they must cope with issues that arise during execution. We propose an approach that handles such cases through dialogue and human-robot collaboration. The proposed approach contributes a hierarchical control architecture that 1) autonomously detects and is cognizant of task execution failures, 2) initiates a dialogue with a human helper to obtain assistance, and 3) enables collaborative human-robot task execution through extended dialogue in order to 4) ensure robust execution of hierarchical tasks with complex constraints, such as sequential, non-ordering, and multiple-path-of-execution constraints. The architecture ensures that these constraints are adhered to throughout the entire task execution, including during failures. The architecture's recovery from issues during execution is validated by a human-robot team on a building task.
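
    A schematic Python sketch of the recover-through-dialogue control flow, with invented class and method names (the paper's architecture is more elaborate): execute a hierarchical task depth-first, detect a failed step, and ask a human helper before retrying.

        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class Step:
            name: str
            execute: Callable[[], bool]            # returns True on success
            substeps: List["Step"] = field(default_factory=list)

        def run(step, ask_human):
            """Depth-first execution with dialogue-based failure recovery."""
            for sub in step.substeps:
                if not run(sub, ask_human):
                    return False
            if step.execute():
                return True
            # Failure detected: open a dialogue, then retry once with assistance.
            ask_human(f"I failed at '{step.name}'. Can you help me complete it?")
            return step.execute()
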
  5. Underwater robots, including Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs), are currently used to support underwater missions that are either impossible or too risky for manned systems. In recent years, academia and the robotics industry have paved paths toward tackling the technical challenges of ROV/AUV operations. The level of intelligence of ROVs/AUVs has increased dramatically thanks to recent advances in low-power embedded computing devices and machine intelligence (e.g., AI). Nonetheless, minimizing human intervention in precise underwater operation remains extremely challenging because of the inherent challenges and uncertainties of underwater environments. Proximity operations, especially those requiring precise manipulation, are still carried out by ROV systems that are fully controlled by a human pilot. A workplace-ready and worker-friendly ROV interface that properly simplifies operator control and increases remote-operation confidence is the central challenge for the wide adoption of ROVs.

    This paper examines recent advances in virtual telepresence technologies as a solution for lowering the barriers to human-in-the-loop ROV teleoperation. Virtual telepresence refers to Virtual Reality (VR) technologies that help users feel as though they are present in a hazardous situation without being at the actual location. We present a pilot system that uses a VR-based sensory simulator to convert ROV sensor data into human-perceivable sensations (e.g., haptics). Building on a cloud server for real-time rendering in VR, a less-trained operator could operate a remote ROV a thousand miles away without losing minimum situational awareness. The system is expected to enable intensive human engagement in ROV teleoperation, augmenting the operator's ability to maneuver and navigate the ROV in unknown and less-explored subsea regions and worksites. This paper also discusses the opportunities and challenges of this technology for ad hoc training, workforce preparation, and safety in the future maritime industry. We expect the lessons learned from our work to help democratize human presence in future subsea engineering work by accommodating human needs and limitations to lower the entry barrier.
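
    As a toy illustration of the sensory-simulator idea, the Python sketch below linearly maps an ROV sensor reading to a haptic vibration intensity; the sensor, its range, and the mapping are assumptions, not details of the pilot system.

        def to_haptic_intensity(reading, lo, hi):
            """Linearly map a sensor reading in [lo, hi] to intensity in [0, 1]."""
            span = hi - lo
            if span <= 0:
                return 0.0
            return min(1.0, max(0.0, (reading - lo) / span))

        # Example: a 35 N load on a 0-50 N thruster maps to a 0.7 vibration level.
        assert abs(to_haptic_intensity(35.0, 0.0, 50.0) - 0.7) < 1e-9
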

     