
Title: Virtual Telepresence for the Future of ROV Teleoperations: Opportunities and Challenges

Underwater robots, including Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs), currently support underwater missions that are either impossible or too risky for manned systems. In recent years, academia and the robotics industry have paved paths for tackling the technical challenges of ROV/AUV operations. The level of intelligence of ROVs/AUVs has increased dramatically thanks to recent advances in low-power embedded computing devices and machine intelligence (e.g., AI). Nonetheless, the inherent challenges and uncertainties of underwater environments still make it extremely difficult to operate precisely underwater with minimal human intervention. Proximity operations, especially those requiring precise manipulation, are still carried out by ROV systems fully controlled by a human pilot. A workplace-ready and worker-friendly ROV interface that properly simplifies operator control and increases remote-operation confidence is the central challenge for the wide adoption of ROVs.

This paper examines recent advances in virtual telepresence technologies as a solution for lowering the barriers to human-in-the-loop ROV teleoperation. Virtual telepresence refers to Virtual Reality (VR)-related technologies that make a user feel present in a hazardous situation without being at the actual location. We present a pilot system that uses a VR-based sensory simulator to convert ROV sensor data into human-perceivable sensations (e.g., haptics). Built on a cloud server for real-time VR rendering, the system could allow a less-trained operator to pilot a remote ROV a thousand miles away without losing essential situational awareness. The system is expected to enable intensive human engagement in ROV teleoperation, augmenting the operator's ability to maneuver and navigate an ROV in unknown and little-explored subsea regions and works. This paper also discusses the opportunities and challenges of this technology for ad hoc training, workforce preparation, and safety in the future maritime industry. We expect that lessons learned from our work can help democratize human presence in future subsea engineering work by accommodating human needs and limitations to lower the barrier to entry.
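As a minimal illustration of the sensory-conversion idea above, the sketch below maps a single ROV range-sensor reading to a haptic vibration intensity. The function name, the range thresholds, and the linear ramp are illustrative assumptions for exposition, not the actual pilot system's design.

```python
# Hypothetical sketch: converting one ROV sensor reading (a sonar range)
# into a human-perceivable sensation (vibration intensity in [0, 1]).
# All names and constants here are illustrative, not from the paper.

def haptic_intensity(range_m: float, min_range: float = 0.2,
                     max_range: float = 5.0) -> float:
    """Map a range reading in metres to a vibration intensity in [0, 1].

    Closer obstacles produce stronger feedback; readings at or beyond
    max_range produce none, and readings at or below min_range saturate.
    """
    if range_m >= max_range:
        return 0.0
    if range_m <= min_range:
        return 1.0
    # Linear ramp between the two thresholds.
    return (max_range - range_m) / (max_range - min_range)
```

In a real system this mapping would feed a haptic device's vibration motor on every sensor update; a nonlinear (e.g., exponential) ramp is another common choice when near-field sensitivity matters most.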

Award ID(s):
2128895 2128924
Publication Date:
Journal Name:
The SNAME 27th Offshore Symposium
Sponsoring Org:
National Science Foundation
More Like this
  1. In remote applications that mandate human supervision, shared control can prove vital by establishing a harmonious balance between the high-level cognition of a user and the low-level autonomy of a robot. In practice, though, achieving this balance is a challenging endeavor that largely depends on whether the operator effectively interprets the underlying shared control. Inspired by recent works on using immersive technologies to expose the internal shared control, we develop a virtual reality system to visually guide human-in-the-loop manipulation. Our implementation of shared-control teleoperation employs end-effector manipulability polytopes, which are geometrical constructs that embed joint-limit and environmental constraints. These constructs capture a holistic view of the constrained manipulator's motion and can thus be visually represented as feedback for users on their operable space of movement. To assess the efficacy of our proposed approach, we consider a teleoperation task where users manipulate a screwdriver attached to a robotic arm's end effector. A pilot study with prospective operators is first conducted to discern which graphical cues and virtual reality setup are most preferable. Feedback from this study informs the final design of our virtual reality system, which is subsequently evaluated in the actual screwdriver teleoperation experiment. Our experimental findings support the utility of using polytopes for shared-control teleoperation, but hint at the need for longer-term studies to garner their full benefits as virtual guides.
  2. The deep chlorophyll maximum (DCM) layer is an ecologically important feature of the open ocean. The DCM cannot be observed using aerial or satellite remote sensing; thus, in situ observations are essential. Further, understanding the responses of microbes to the environmental processes driving their metabolism and interactions requires observing in a reference frame that moves with a plankton population drifting in ocean currents, i.e., Lagrangian. Here, we report the development and application of a system of coordinated robots for studying planktonic biological communities drifting within the ocean. The presented Lagrangian system uses three coordinated autonomous robotic platforms. The focal platform consists of an autonomous underwater vehicle (AUV) fitted with a robotic water sampler. This platform localizes and drifts within a DCM community, periodically acquiring samples while continuously monitoring the local environment. The second platform is an AUV equipped with environmental sensing and acoustic tracking capabilities. This platform characterizes environmental conditions by tracking the focal platform and vertically profiling in its vicinity. The third platform is an autonomous surface vehicle equipped with satellite communications and subsea acoustic tracking capabilities. While also acoustically tracking the focal platform, this vehicle serves as a communication relay that connects the subsea robot to human operators, thereby providing situational awareness and enabling intervention if needed. Deployed in the North Pacific Ocean within the core of a cyclonic eddy, this coordinated system autonomously captured fundamental characteristics of the in situ DCM microbial community in a manner not possible previously.

  3. With the advancement of automation and robotic technologies, teleoperation has been leveraged as a promising solution for protecting human workers in hazardous construction environments. Since human operators and construction sites are separated by distance, teleoperation requires a seamless human-machine interface, an intermediate medium, for communication between humans and machines on construction sites. Several types of teleoperation interfaces, including conventional joysticks, haptic devices, graphical user interfaces, auditory interfaces, and tactile interfaces, have been developed to control and command construction robots remotely. The ultimate goal of human-machine interfaces for remote operation is to provide intuitive sensory channels that convey and receive enough information while reducing the associated cognitive and physical load on human operators. Previously developed interfaces have pursued these goals, but each interface still has challenges that should be assessed to enhance future teleoperation applications in construction workplaces. This paper examines different human-machine interfaces for excavator teleoperation in terms of their on-site usability and intuitiveness. The capabilities of the interfaces for excavator teleoperation are evaluated based on their limitations and requirements. The outcome is expected to provide a better understanding of teleoperation interfaces for excavators and guide future directions for addressing the underlying challenges.
  4. Advancements in Artificial Intelligence (AI), Information Technology, Augmented Reality (AR) and Virtual Reality (VR), and Robotic Automation are transforming jobs in the Architecture, Engineering, and Construction (AEC) industries. However, these technologies are also expected to lead to job displacement, alter skill profiles for existing jobs, and change how people work. Therefore, preparing the workforce for an economy defined by these technologies is imperative. This ongoing research focuses on developing an immersive learning training curriculum to prepare the future workforce of the building industry. In this paper we demonstrate a prototype of a mobile AR application that delivers lessons in robotic automation for construction industry workers. The application allows a user to interact with a virtual robot manipulator to learn its basic operations. The goal is to evaluate the effectiveness of the AR application by gauging participants' performance using pre- and post-surveys.
  5. In this report, we present the system design, operational strategy, and results of coordinated multivehicle field demonstrations of autonomous marine robotic technologies in search-for-life missions within the Pacific shelf margin of Costa Rica and the Santorini-Kolumbo caldera complex, which serve as analogs to environments that may exist in oceans beyond Earth. This report focuses on the automation of remotely operated vehicle (ROV) manipulator operations for targeted biological sample collection and return from the seafloor. In the context of future extraterrestrial exploration missions to ocean worlds, an ROV is an analog to a planetary lander, which must be capable of high-level autonomy. Our field trials involve two underwater vehicles, the SuBastian ROV and the Nereid Under Ice (NUI) hybrid ROV for mixed-initiative (i.e., teleoperated or autonomous) missions, both equipped with seven-degree-of-freedom hydraulic manipulators. We describe an adaptable, hardware-independent computer vision architecture that enables high-level automated manipulation. The vision system provides a three-dimensional understanding of the workspace to inform manipulator motion planning in complex unstructured environments. We demonstrate the effectiveness of the vision system and control framework through field trials in increasingly challenging environments, including the automated collection and return of biological samples from within the active undersea volcano Kolumbo. Based on our experiences in the field, we discuss the performance of our system and identify promising directions for future research.
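The manipulability polytopes mentioned in related work 1 above admit a compact sketch: the end-effector velocity polytope is the image of the joint-velocity box under the manipulator Jacobian, so mapping each corner of the box through J yields the candidate vertices (their convex hull is the polytope). The function name and the simple symmetric limits below are illustrative assumptions, not the cited authors' implementation, which additionally embeds environmental constraints.

```python
# Illustrative sketch of an end-effector velocity polytope: map the
# corners of the joint-velocity box through the Jacobian J. Names and
# the symmetric-limit assumption are hypothetical, for exposition only.
import itertools
import numpy as np

def velocity_polytope_vertices(J: np.ndarray,
                               qd_limits: np.ndarray) -> np.ndarray:
    """Candidate vertices of the end-effector velocity polytope.

    J          -- (m, n) manipulator Jacobian at the current configuration
    qd_limits  -- (n,) symmetric joint-velocity limits, |qd_i| <= qd_limits[i]

    Returns a (2**n, m) array; the polytope is the convex hull of these
    points, since a linear map sends box corners to hull candidates.
    """
    corners = itertools.product(*[(-l, l) for l in qd_limits])
    return np.array([J @ np.array(c) for c in corners])
```

For a 2-joint arm with unit limits and an identity Jacobian, the polytope is the unit square; near singular configurations the Jacobian flattens the box, which is exactly what such visual guides expose to the operator.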
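The three-dimensional workspace understanding described in related work 5 above can be hinted at with a toy building block: estimating a sample target's centroid and dominant axis from a segmented point cloud, a common first step before motion planning for a grasp. This is an illustrative assumption about one possible component, not the authors' vision architecture.

```python
# Hypothetical sketch: a minimal target estimate from a segmented 3-D
# point cloud -- centroid plus dominant axis via SVD (i.e., PCA).
import numpy as np

def target_pose_from_points(points: np.ndarray):
    """Estimate a grasp target from an (N, 3) array of workspace points.

    Returns (centroid, axis): the mean point and the unit direction of
    the cloud's first principal component.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Rows of vt are principal directions, ordered by singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    return centroid, axis
```

A real system would also need segmentation, outlier rejection, and a full 6-DOF pose, but the centroid-plus-axis estimate is often enough to seed a manipulator motion plan.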