Title: Telepresence Robot with Autonomous Navigation and Virtual Reality: Demo Abstract
Telepresence technology enables users to be virtually present in another location through real-time video streaming. Adding mobility, so that the platform can be driven remotely, yields what is called a telepresence robot. These machines connect individuals with restricted mobility and increase social interaction, collaboration, and active participation. However, operating and navigating these robots is challenging for users who have little knowledge of the remote environment and no map of it. Avoiding obstacles through a narrow camera view under manual remote operation is a cumbersome task. Moreover, users lack a sense of immersion while they are busy maneuvering via the real-time video feed, which decreases their capacity to handle other tasks. This demo presents a virtual reality telepresence robot that performs simultaneous mapping and autonomous driving. Leveraging a 2D LiDAR sensor, we generate two-dimensional occupancy grid maps via SLAM and provide assisted navigation, reducing the onerous task of obstacle avoidance. The attitude of the robot's camera head is controlled remotely via the virtual reality headset, so remote users can gain a visceral understanding of the environment while teleoperating the robot.
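As an illustration of the head-control idea, the sketch below converts a VR headset orientation quaternion into pan and tilt commands for the robot's camera head. The quaternion convention, joint limits, and function names are assumptions for demonstration, not details from the demo itself.

```python
# Hypothetical sketch: map a VR headset orientation (unit quaternion, x-y-z-w
# convention assumed) to pan/tilt commands for the robot's camera head.
import math

def quaternion_to_pan_tilt(x, y, z, w):
    """Extract yaw (pan) and pitch (tilt) from a unit quaternion."""
    # Rotation about the vertical axis -> pan command.
    pan = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    # Rotation about the lateral axis -> tilt command (clamped for asin).
    sin_pitch = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    tilt = math.asin(sin_pitch)
    return pan, tilt

def clamp_to_joint_limits(angle, limit=math.radians(60)):
    """Keep commands within an assumed mechanical range of the head."""
    return max(-limit, min(limit, angle))

# Example: headset rotated ~30 degrees to the left yields a ~30 degree pan.
pan, tilt = quaternion_to_pan_tilt(0.0, 0.0,
                                   math.sin(math.radians(15)),
                                   math.cos(math.radians(15)))
print(clamp_to_joint_limits(pan), clamp_to_joint_limits(tilt))
```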
Award ID(s):
1637371
PAR ID:
10092493
Author(s) / Creator(s):
Date Published:
Journal Name:
SenSys '16 Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Virtual Reality (VR) telepresence platforms are being challenged to support live performances, sporting events, and conferences with thousands of users across seamless virtual worlds. Current systems have struggled to meet these demands, which has led to high-profile performance events with groups of users isolated in parallel sessions. The core difference between scaling VR environments and classic 2D video content delivery is the dynamic, spatially dependent peer-to-peer communication: users have many pairwise interactions that grow and shrink as they explore spaces. In this paper, we discuss the challenges of VR scaling and present an architecture that supports hundreds of users with spatial audio and video in a single virtual environment. We leverage the property of spatial locality with two key optimizations: (1) a Quality of Service (QoS) scheme that prioritizes audio and video traffic based on users' locality, and (2) a resource manager that allocates client connections across multiple servers based on user proximity within the virtual world. Through real-world deployments and extensive evaluations in real and simulated environments, we demonstrate the scalability of our platform while showing improved QoS compared with existing approaches.
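A minimal sketch of what such locality-based QoS tiering might look like follows; the distance thresholds, tier names, and bitrates are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of distance-based traffic prioritization: nearby users
# get full-rate media, distant users get degraded or dropped streams.
import math

def priority_tier(user_a, user_b, near=5.0, mid=15.0):
    """Map pairwise distance in the virtual world to an assumed QoS tier."""
    d = math.dist(user_a, user_b)
    if d <= near:
        return "high"    # full-rate audio and video
    if d <= mid:
        return "medium"  # reduced video bitrate, full audio
    return "low"         # audio only, or dropped entirely

BITRATE_KBPS = {"high": 2500, "medium": 600, "low": 64}  # placeholder values

# Two users 8 units apart fall into the medium tier.
tier = priority_tier((0.0, 0.0, 0.0), (8.0, 0.0, 0.0))
print(tier, BITRATE_KBPS[tier])
```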
  2. Extreme environments, such as search-and-rescue missions, bomb disposal, or extraterrestrial exploration, are unsafe for humans. Robots let humans explore and interact in these environments through remote presence and teleoperation, and virtual reality provides a medium for creating immersive and easy-to-use teleoperation interfaces. However, current virtual reality interfaces are still very limited in their capabilities. In this work, we aim to advance virtual reality interfaces for robot teleoperation by developing an environment reconstruction methodology capable of recognizing objects in a robot's environment and rendering high-fidelity models inside a virtual reality headset. We compare our proposed environment reconstruction method against traditional point cloud streaming by having operators plan waypoint trajectories to accomplish a pick-and-place task. Overall, our results show that users find our environment reconstruction method more usable and less cognitively demanding than raw point cloud streaming.
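To make the contrast with raw point cloud streaming concrete, here is a hypothetical sketch of model-based scene compression: recognized objects are sent as compact model references plus poses instead of raw points. The recognizer callable, the cluster attributes, and the mesh library are all placeholders, not the paper's actual pipeline.

```python
# Hypothetical sketch: replace recognized point clusters with model IDs and
# poses; only unrecognized geometry is kept as raw points.
from dataclasses import dataclass

@dataclass
class ObjectModel:
    mesh_id: str        # key into a model library shared with the VR client
    position: tuple     # (x, y, z) in the robot's frame
    orientation: tuple  # quaternion (x, y, z, w)

def compress_scene(points, recognize):
    """Split a scene into model references and residual raw points."""
    models, residual = [], []
    for cluster in recognize(points):       # recognizer is an assumed callable
        if cluster.mesh_id is not None:
            models.append(ObjectModel(cluster.mesh_id,
                                      cluster.position,
                                      cluster.orientation))
        else:
            residual.extend(cluster.points)  # unrecognized geometry stays raw
    return models, residual

# Example with a stub recognizer that tags one cluster as a known "chair".
class _Cluster:
    mesh_id, position, orientation, points = "chair", (1.0, 0.0, 0.0), (0, 0, 0, 1), []

models, residual = compress_scene(points=[], recognize=lambda pts: [_Cluster()])
print(models, residual)
```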
  3. Augmented reality (AR) enhances the user's perception of the real environment by superimposing computer-generated virtual images that provide additional visual information complementing the real-world view. AR systems are rapidly gaining popularity in manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, it is crucial for the invisible virtual environment to be precisely aligned with the physical environment so that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. In robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace, which can impact the accuracy of the robot's operations during the task. Building on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure that reduces the misalignment in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, it is necessary to identify the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the head-mounted display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.
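The projection chain described above can be written out concretely. The following is a minimal sketch, assuming homogeneous coordinates, a pinhole intrinsic matrix K, and placeholder poses: T_world_hmd for the tracked HMD pose and T_hmd_cam for the calibrated HMD-to-camera offset. The matrices and values are illustrative only.

```python
# Minimal sketch of mapping a virtual-world point to pixel coordinates on a
# video see-through display via the HMD pose and the calibrated camera offset.
import numpy as np

def project(p_world, T_world_hmd, T_hmd_cam, K):
    """Map a 3D virtual-world point to 2D pixel coordinates."""
    p = np.append(p_world, 1.0)           # homogeneous world point
    p_cam = T_hmd_cam @ T_world_hmd @ p   # world -> HMD -> camera frame
    uvw = K @ p_cam[:3]                   # pinhole perspective projection
    return uvw[:2] / uvw[2]               # normalize to pixel coordinates

# Identity poses and a simple pinhole camera, for illustration only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(project(np.array([0.1, 0.0, 2.0]), np.eye(4), np.eye(4), K))
```

In this formulation, the offline calibration determines T_hmd_cam; any error in that offset matrix propagates through the chain and shifts every projected point, which is the misalignment the calibration procedure targets.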
  4. In many complex tasks, a remote expert may need to assist a local user or guide his or her actions in the local user's environment. Existing solutions allow multiple users to collaborate remotely using high-end Augmented Reality (AR) and Virtual Reality (VR) head-mounted displays (HMDs). In this paper, we propose a portable remote collaboration approach that integrates AR and VR devices, both running on mobile platforms, to tackle the challenges of existing approaches. The AR mobile platform processes the live video and measures the 3D geometry of the local user's environment. The 3D scene is then transmitted and rendered on the remote side on a mobile VR device, along with a simple and effective user interface that allows a remote expert to easily manipulate the 3D scene on the VR platform and guide the local user to complete tasks in the local environment.
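As a rough illustration of the AR-to-VR data path, the sketch below packs measured mesh geometry into a compact binary message for transmission to the VR side; the wire format is an assumption for demonstration, not the paper's actual protocol.

```python
# Illustrative sketch: serialize a measured mesh as
# [vertex count][float32 xyz...][triangle count][uint32 indices...].
import struct

def pack_mesh(vertices, triangles):
    """Serialize mesh geometry into a little-endian binary message."""
    msg = struct.pack("<I", len(vertices))
    for x, y, z in vertices:
        msg += struct.pack("<3f", x, y, z)
    msg += struct.pack("<I", len(triangles))
    for a, b, c in triangles:
        msg += struct.pack("<3I", a, b, c)
    return msg

# A single triangle measured by the AR device: 4 + 3*12 + 4 + 12 = 56 bytes.
payload = pack_mesh([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(len(payload), "bytes")
```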
  5. Teleoperation enables complex robot platforms to perform tasks beyond the scope of current state-of-the-art robot autonomy by imparting human intelligence and critical thinking to these operations. For seamless control of robot platforms, it is essential to give the operator optimal situational awareness of the workspace through active telepresence cameras. However, controlling these active telepresence cameras adds a further degree of complexity to the teleoperation task. In this paper, we present results from a user study that investigates: (1) how the teleoperator learns or adapts to performing tasks via active cameras modeled after the camera placements on the TRINA humanoid robot; (2) the perception-action coupling operators implement to control active telepresence cameras; and (3) the operators' camera preferences for performing the tasks. These findings from the human motion analysis and post-study survey will help us determine desired design features for robot teleoperation interfaces and assistive autonomy.